Last Update 8:03 PM September 20, 2020 (UTC)

Identosphere Blog Catcher

Brought to you by Identity Woman and Infominer.
Please do support our collaboration on Patreon!!!

Sunday, 20. September 2020

Just a Theory

The Kushner Kakistocracy

An expertly-reported, deeply disturbing piece by Katherine Eban on Jared Kushner's key role in the colossally failed federal response to the Covid-19 pandemic.

Katherine Eban, in a deeply reported piece for Vanity Fair:

Those representing the private sector expected to learn about a sweeping government plan to procure supplies and direct them to the places they were needed most. New York, home to more than a third of the nation’s coronavirus cases, seemed like an obvious candidate. In turn they came armed with specific commitments of support, a memo on the merits of the Defense Production Act, a document outlining impediments to the private-sector response, and two key questions: How could they best help? And how could they best support the government’s strategy?

According to one attendee, Kushner then began to rail against the governor: “Cuomo didn’t pound the phones hard enough to get PPE for his state…. His people are going to suffer and that’s their problem.”

But wait, it gets worse:

Kushner, seated at the head of the conference table, in a chair taller than all the others, was quick to strike a confrontational tone. “The federal government is not going to lead this response,” he announced. “It’s up to the states to figure out what they want to do.”

One attendee explained to Kushner that due to the finite supply of PPE, Americans were bidding against each other and driving prices up. To solve that, businesses eager to help were looking to the federal government for leadership and direction.

“Free markets will solve this,” Kushner said dismissively. “That is not the role of government.”

Seldom have falser words been spoken. These incompetents conflate their failure to lead with their belief that the government cannot lead. The prophecy fulfills itself.

The same attendee explained that although he believed in open markets, he feared that the system was breaking. As evidence, he pointed to a CNN report about New York governor Andrew Cuomo and his desperate call for supplies.

“That’s the CNN bullshit,” Kushner snapped. “They lie.”

“That’s when I was like, We’re screwed,” the shocked attendee told Vanity Fair.

And indeed we sure have been. Nearly 200,000 have died from Covid-19 in the United States to date, with close to 400,000 deaths forecast by January 1.

I’m restraining myself from quoting more; the reporting is impeccable, and the truth of the situation deeply alarming. Read the whole thing, then maybe go for a long walk and practice deep breathing.

And then Vote. And make sure everyone you know is registered and ready to vote.

Simon Willison

Quoting YouTube’s Plot to Silence Conspiracy Theories

One academic who interviewed attendees of a flat-earth convention found that, almost to a person, they'd discovered the subculture via YouTube recommendations.

YouTube’s Plot to Silence Conspiracy Theories

Saturday, 19. September 2020

Simon Willison

DuckDB

DuckDB

This is a really interesting, relatively new database. It's kind of a weird hybrid between SQLite and PostgreSQL: it uses the PostgreSQL parser but models itself after SQLite in that databases are a single file and the code is designed for use as an embedded library, distributed in a single amalgamation C++ file (SQLite uses a C amalgamation). It features a "columnar-vectorized query execution engine" inspired by MonetDB (also by the DuckDB authors) and is hence designed to run analytical queries really quickly. You can install it using "pip install duckdb" - the resulting module feels similar to Python's sqlite3, and follows roughly the same DBAPI pattern.
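
As a rough illustration of that DBAPI-style feel, here is a minimal sketch; the file name and table are made up for the example, and only the duckdb module with its connect/execute/fetchall calls is taken from the description above.

import duckdb

# A single-file database, as with SQLite
conn = duckdb.connect("analytics.duckdb")
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, category TEXT)")
conn.execute("INSERT INTO events VALUES (?, ?)", [1, "signup"])
# Cursor-style fetch, roughly mirroring Python's sqlite3 module
print(conn.execute("SELECT category, count(*) FROM events GROUP BY category").fetchall())
conn.close()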

Via Any thoughts/future plans for using DuckDB?


John Philpin : Lifestream

Dear Apple You release a new OS, add a new feature that a

Dear Apple

You release a new OS, add a new feature that allows you to see all your screens on a single page and allow us to hide some of those screens if we so choose.

But you don’t offer a way to reorder my screens?

Seriously?

I am available for consulting engagements!


Ben Werdmüller

We’ve lost an incredible force for good

Justice Ruth Bader Ginsburg was a force for good who transformed America for the better. She fought for justice, and particularly the rights of women, for her entire life. She was inspiring and impactful; dedicated and fiercely intelligent; a genuinely good person who single-handedly became one of the cornerstones of our modern democracy.

"When I'm sometimes asked 'When will there be enough (women on the Supreme Court)?' and my answer is: 'When there are nine.' People are shocked. But there'd been nine men, and nobody's ever raised a question about that."

She was an example for all of us. May she have paved the way for many more women to follow.

It's unfortunate that her many accomplishments and her remarkable legacy are overshadowed by our current political situation. Nonetheless, we are forced to consider what will happen now she's gone. Mitch McConnell, ever the ghoulishly ethics-free opportunist, used his statement on Justice Ginsburg's passing to promise a Republican-appointed replacement. Ignoring the obvious hypocrisy of this idea (compare this to his statement on Garland's nomination in 2016), we have to consider what America will look like after decades of not just a Supreme Court dominated by conservatives, but one dominated by this kind of conservative: nationalist verging on fascist, with a desire to undo women's rights and remake the nation to fit an evangelical model.

If you're an American citizen, please check that you're registered to vote - and then make sure you do so. Our democracy can't take four more years of this.


The tech bro whitewash

I'm pretty conflicted about The Social Dilemma.

On one hand, anything that contributes to the discourse around the harms knowingly committed in the name of engagement should be applauded. My friend David Jay works at the Center for Humane Technology as their Head of Mobilization, and was involved in this film; I know the people who work there are coming from a genuine place. I think that is admirable.

On the other hand, I'll confess to some pretty hard reservations about tech bros who make their fortune at companies like Facebook and then issue mea culpas. The harmful impacts of platforms like Facebook were knowable; I know because I, and people like me, knew them well. In 2004, when Facebook was just graduating from being a way to rate the relative attractiveness of women on campus, I was building decentralized social platforms with community health in mind. There were many people like me who understood that creating a centralized place, controlled by a single corporate entity, where most of the world would get their information was incredibly problematic. It was and is antithetical to both the web and democracy itself.

So coders have been working on these problems, but this isn't really about software. Crucially, the people who have been at the receiving end of these harms have not been silent. Women - particularly women of color - have been sounding the alarm about these harms for years. That we're listening to men who worked to build these systems of abuse, rather than the people who have been calling out the problems this whole time, says a lot about who and what we value. It's not a problem we can code our way out of.

These conversations are vital. But let's be clear: they have been happening this whole time. If they're new to you, you've been listening to the wrong people. And we should consider whether we want to allow the tech bros who created this problem to whitewash their past.


John Philpin : Lifestream

Greg Orme is a business director at The London Business Scho

Greg Orme is a business director at The London Business School and his latest book ‘The Human Edge’ won The Business Book Awards Best Business Book for 2020.

He came on to the People First podcast and shared his findings, hope you enjoy.

People First Podcast


Doc Searls Weblog

Saving Mount Wilson

This was last night:

And this was just before sunset tonight:

From the Mt. Wilson Observatory website:

Mount Wilson Observatory Status

Angeles National Forest is CLOSED due to the extreme fire hazard conditions. To see how the Observatory is faring during the ongoing Bobcat fire, check our Facebook link, Twitter link, or go to the HPWREN Tower Cam and click on the second frame from the left at the top, which looks east towards the fire (also check the south-facing cam and the recently archived timelapse movies below, which offer a good look at the latest events in 3-hour segments. Note to media: These images can be used with credit to HPWREN). For the latest updates on the Bobcat fire from the U.S. Forest Service, please check out their Twitter page.

Last night the firefighters set a strategic backfire to make a barrier to the fire on our southern flank. To many of us who did not know what was happening it looked like the end. Click here to watch the timelapse. The 12 ground crews up there have now declared us safe. They will remain to make sure nothing gets by as the fires tend to linger in the canyons below. They are our heroes and we owe them our existence. They are true professionals, artists with those backfires, and willing to put themselves at considerable risk to protect us. We thank them!!!!

There will be plenty of stories about how the Observatory and the many broadcast transmitters nearby were saved from the Bobcat Fire. For the curious, start with these links:

Mt. Wilson Observatory website
Tweets from the Mount Wilson Observatory
Tweets from the Angeles National Forest
Space.com’s report on the event

I’ll add some more soon.

Friday, 18. September 2020

Simon Willison

Weeknotes: datasette-seaborn, fivethirtyeight-polls

This week I released Datasette 0.49 and tinkered with datasette-seaborn, dogsheep-beta and polling data from FiveThirtyEight.

datasette-seaborn

Datasette currently has two key plugins for data visualization: datasette-vega for line, bar and scatter charts (powered by Vega-Lite) and datasette-cluster-map for creating clustered marker maps.

I'm always looking for other interesting visualization opportunities. Seaborn 0.11 came out last week and is a very exciting piece of software.

Seaborn focuses on statistical visualization tools - histograms, boxplots and the like - and represents 8 years of development led by Michael Waskom. It's built on top of Matplotlib and exhibits meticulously good taste.

So I've started building datasette-seaborn, a plugin which provides an HTTP interface for generating Seaborn charts using data stored in Datasette.

It's still very alpha, but early results are promising. I've chosen to implement it as a custom output renderer - so adding .seaborn to any Datasette table (plus some querystring parameters) will output a rendered PNG chart of that data, just like how .json or .csv give you that data in different export formats.

Here's an example chart generated by the plugin using the fabulous palmerpenguins dataset (intended as a new alternative to Iris).

I generated this image from the following URL:

https://datasette-seaborn-demo.datasette.io/penguins/penguins.seaborn?_size=1000&_seaborn=kdeplot&_seaborn_x=flipper_length_mm&_seaborn_hue=species&_seaborn_multiple=stack

This interface should be considered unstable and likely to change, but it illustrates the key idea behind the plugin: use ?_seaborn_x parameters to feed in options for the chart that you want to render.
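
As a hedged sketch, here is how you might fetch that same rendered PNG from Python; the output filename is arbitrary, the parameters are exactly the ones from the demo URL above, and the interface may well change.

import requests

url = "https://datasette-seaborn-demo.datasette.io/penguins/penguins.seaborn"
params = {
    "_size": 1000,
    "_seaborn": "kdeplot",
    "_seaborn_x": "flipper_length_mm",
    "_seaborn_hue": "species",
    "_seaborn_multiple": "stack",
}
response = requests.get(url, params=params)
response.raise_for_status()
# The .seaborn renderer returns a PNG image of the chart
with open("penguins_kde.png", "wb") as f:
    f.write(response.content)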

The two biggest issues with the plugin right now are that it renders images on the main thread, potentially blocking the event loop, and that it passes querystring arguments directly to Seaborn without first validating them, which is almost certainly a security problem.

So very much an alpha, but it's a promising start!

dogsheep-beta

Dogsheep Beta provides faceted search across my personal data from a whole variety of different sources.

I demo'd it at PyCon AU a few weeks ago, and promised that a full write-up would follow. I still need to honour that promise! I'm figuring out how to provide a good interactive demo at the moment that doesn't expose my personal data.

I added sort by date in addition to sort by relevance in version 0.7 this week.

fivethirtyeight-polls

FiveThirtyEight have long published the data behind their stories in their fivethirtyeight/data GitHub repository, and I've been using that data to power a Datasette demo ever since I first released the project.

They run an index of their data projects at data.fivethirtyeight.com, and this week I noticed that they list US election polling data there that wasn't being picked up by my fivethirtyeight.datasettes.com site.

It turns out this is listed in the GitHub repository as a README.md file but without the actual CSV data. Instead, the README links to external CSV files with URLs like https://projects.fivethirtyeight.com/polls-page/president_primary_polls.csv

It looks to me like they're running their own custom web application which provides the CSV data as an export format, rather than keeping that CSV data directly in their GitHub repository.

This makes sense - I imagine they run a lot of custom code to help them manage their data - but does mean that my Datasette publishing scripts weren't catching their polling data.

Since they release their data as Creative Commons Attribution 4.0 International I decided to start archiving it on GitHub, where it would be easier to automate.

I set up simonw/fivethirtyeight-polls to do just that. It's a classic implementation of the git-scraping pattern: it runs a workflow script four times a day which fetches their latest CSV files and commits them to a repo. This means I now have a commit history of changes they have made to their polling data!
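
The fetch half of that pattern is simple; here is a minimal sketch, assuming the scheduled workflow just downloads a published CSV and lets git handle the diffing when it commits (the local filename is illustrative).

import requests

# One of the CSV files published by FiveThirtyEight's polls application
CSV_URL = "https://projects.fivethirtyeight.com/polls-page/president_primary_polls.csv"

response = requests.get(CSV_URL)
response.raise_for_status()
with open("president_primary_polls.csv", "wb") as f:
    f.write(response.content)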

I updated my FiveThirtyEight Datasette script to publish that data as a new polls.db database, which is now being served at fivethirtyeight.datasettes.com/polls.

And since that Datasette instance runs the datasette-graphql plugin, you can now use GraphQL to explore FiveThirtyEight's most recent polling data at https://fivethirtyeight.datasettes.com/graphql/polls - here's an example query.

github-to-sqlite get

github-to-sqlite lets you fetch data from the GitHub API and write it into a SQLite database.

This week I added a sub-command for hitting the API directly and getting back data on the console, inspired by the fetch subcommand in twitter-to-sqlite. This is useful for trying out new APIs, since it both takes into account your GitHub authentication credentials (from an environment variable or an auth.json file) and can handle Link header pagination.

This example fetches all of my repos, paginating across multiple API requests:

github-to-sqlite get /users/simonw/repos --paginate

You can use the --nl option to get back the results as newline-delimited JSON. This makes it easy to pipe them directly into sqlite-utils like this:

github-to-sqlite get /users/simonw/repos --paginate --nl \
  | sqlite-utils insert simonw.db repos - --nl

See Inserting JSON data in the sqlite-utils documentation for an explanation of what this is doing.
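
The same insert can also be done from Python with the sqlite-utils library; a sketch, assuming the newline-delimited JSON from the command above has been saved to a hypothetical repos.jsonl file.

import json
import sqlite_utils

db = sqlite_utils.Database("simonw.db")
with open("repos.jsonl") as f:
    # Each line of newline-delimited JSON becomes one row in the "repos" table
    db["repos"].insert_all(json.loads(line) for line in f)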

TIL this week

Open a debugging shell in GitHub Actions with tmate
Talking to a PostgreSQL service container from inside a Docker container

Releases this week

dogsheep-beta 0.7.1 - 2020-09-17
dogsheep-beta 0.7 - 2020-09-17
github-to-sqlite 2.6 - 2020-09-17
datasette 0.49.1 - 2020-09-15
datasette-ics 0.5 - 2020-09-14
datasette-copyable 0.3 - 2020-09-14
datasette-atom 0.8 - 2020-09-14
datasette-yaml 0.1 - 2020-09-14
datasette 0.49 - 2020-09-14
datasette 0.49a1 - 2020-09-14
datasette-seaborn 0.1a1 - 2020-09-11
datasette-seaborn 0.1a0 - 2020-09-11

Tim Bouma's Blog

A Simple Ecosystem Model

Disclaimer: This is posted by me and does not represent the position of my employer or the working groups of which I am a member.

In my never-ending quest to come up with super-simple models I came up with this diagram. This post is a slight editorial refactoring of my recent Twitter thread found here.

A simple ecosystem model

The above illustration is not intended to be an architectural diagram — rather, it helps to 1) clarify conflations, 2) define scope (the dotted box), and 3) understand the motivations of the parties that exist ‘outside of the system’.

For example, ‘Issuer’ usually gets conflated with ‘Authority’ — an authority merely ‘Attests’ — if you recognize it, then you can assume it is authoritative.

Anyone can attest to anything and issue something. The point of this model is that everything inside the box is neutral to that and solely focused on specific properties everyone needs regardless of intent or role.

The “Verifier” usually gets conflated with Relying Party. But a Verifier could be an off-the-shelf black box with the firmware baked in to verify against the right DIDs, challenging the holder with Bluetooth or NFC. The “Acceptor” could be logic that simply throws a switch to open a secure door. All done on behalf of a Relying Party.

The Holder can be anyone outside the system: an individual, organization, or device that is the ultimate ‘holder’ of the secrets or cryptographic keys that are the basis of their power to convey intention.

Finally, the Registrar is anyone or anything that is responsible for the integrity of the ledger (it doesn’t have to be a blockchain). This ledger is responsible for two fundamental interactions: validation and transfer. In the case of a permissionless system, the ‘Registrar’ is actually an agreed-on set of rules and proven (or not yet disproven) cryptographic primitives. For permissioned or centralized systems, it could be a group of people, or even a single person in the back room with an Excel spreadsheet (not a blockchain).

As for the dotted box — you need to determine who/what sits inside or outside of the box. For many outside the box, they may only care about a black box that they trust. This dotted box is also useful when you start thinking about the non-functional properties of the system — black or grey, should it be permissioned, permissionless, restricted access, globally available?

In the end, what I am trying to achieve is the expression of a simple conceptual model to help me express what could serve the wide range of use cases e.g.: opening a door, applying for university, letting someone across the border, etc. The model could also be used to express simply what we need to start building as a new digital infrastructure.

As always, this is a work-in-progress. Constructive comments welcome.


John Philpin : Lifestream

I suddenly feel a very steep learning curve in front of me …

I suddenly feel a very steep learning curve in front of me … I’m looking at you iOS14.

Thursday, 17. September 2020

John Philpin : Lifestream

”Today’s update also removes Adobe Flash.”  

”Today’s update also removes Adobe Flash.”

 

… I’m not saying that some people have been a little slow on the uptake … I’m just saying that …

 

”Today’s update also removes Adobe Flash.”


Simon Willison

Array programming with NumPy - the NumPy paper

Array programming with NumPy - the NumPy paper

The NumPy paper is out, published in Nature. I found this enlightening: for an academic paper it's very understandable, and it filled in quite a few gaps in my mental model of what NumPy is and which problems it addresses, as well as its relationship to the many other tools in the scientific Python stack.

Via @numpy_team


John Philpin : Lifestream

🎶 Wait … Paranoid is 50?

🎶 Wait … Paranoid is 50?


If only all Republicans thought the same way. (Castro pod

If only all Republicans thought the same way.

(Castro podcast with Steve Schmidt - Lincoln Project cofounder with Katie Couric)

Wednesday, 16. September 2020

Ben Werdmüller

10 things every founder needs to know in 2020

Being a founder is hard! There are so many things you need to stay on top of. Here are 10 things that every founder, investor, and startup employee needs to know in 2020.

ICE is mass-sterilizing women. "When I met all these women who had had surgeries, I thought this was like an experimental concentration camp," one detainee told Project South.

It's genocide as defined in the Convention on the Prevention and Punishment of the Crime of Genocide.

... and it's nothing new. The US has forcibly sterilized over 70,000 prisoners. In 2017, one Tennessee judge offered repeat offenders reduced jail time if they had surgery to prevent them from procreating. Just fifty years ago, around 25% of Native American women and 35% of Puerto Rican women were forcibly sterilized.

There is a surge of covid-19 cases in ICE camps. "You can either be a survivor or die."

23% of 18 to 39 year olds in the US think the Holocaust is a myth. And almost two-thirds of them aren't aware that 6 million Jews were killed in it.

White supremacist groups are up 55% since 2017. The number of anti-LGBTQ hate groups increased by 43%.

One-third of active duty troops and over half of minority service members have seen white supremacy in the ranks. It rose from 22% the year before.

The FBI has documented that white supremacist groups they investigate often have active links to law enforcement officials. "Since 2000, law enforcement officials with alleged connections to white supremacist groups or far-right militant activities have been exposed in Alabama, California, Connecticut, Florida, Illinois, Louisiana, Michigan, Nebraska, Oklahoma, Oregon, Texas, Virginia, Washington, West Virginia, and elsewhere."

Chad Wolf, who oversees the Department of Homeland Security and therefore ICE, watered down language in a report that warned of the threat from white supremacists. We know this from a whistleblower who was punished for non-compliance: "When Murphy refused to implement the changes as directed, [Deputy Secretary] Cuccinelli and Wolf stopped the report from being finished, the source said."

Changes to immigration won't be fully undone by the next President. "Because of the intense volume and pace of changes the Trump administration enacted while in office, even if we have a new administration, Trump will continue to have had an impact on immigration for years to come."

 

Photo by Austin Distel on Unsplash

Tuesday, 15. September 2020

Simon Willison

Datasette 0.49: The annotated release notes

Datasette 0.49 is out. Some notes on what's new.

API for writable canned queries

Writable canned queries now expose a JSON API, see JSON API for writable canned queries. (#880)

I wrote about writable canned queries when they were introduced in Datasette 0.44 back in June. They provide a mechanism for defining a canned SQL query which can make writes (inserts, updates or deletes) to the underlying SQLite database. They can be protected by Datasette authentication or you can leave them open - for example if you want unauthenticated users to be able to post comments or leave feedback messages.

The missing feature was API support. Datasette 0.49 adds that, so now you can define a canned query that writes to the database and then call it as a JSON API - either without authentication or protected by a mechanism such as that provided by the datasette-auth-tokens plugin.

This feature works with magic parameters, so you can define an API that accepts API traffic and automatically logs things like the incoming IP address. Here's a canned query defined in a metadata.yml file that logs user agent and IP addresses:

databases:
  logs:
    queries:
      log:
        sql: |-
          INSERT INTO logs (
            user_agent, datetime
          ) VALUES (
            :_header_user_agent, :_now_datetime_utc
          )
        write: true

Create a SQLite database file called logs.db with the correct table like this:

$ sqlite-utils create-table logs.db logs id integer user_agent text datetime text --pk=id

Confirm the created schema with:

$ sqlite3 logs.db .schema
CREATE TABLE [logs] (
  [id] INTEGER PRIMARY KEY,
  [user_agent] TEXT,
  [datetime] TEXT
);

Now start Datasette like so:

$ datasette logs.db -m metadata.yml

And visit http://127.0.0.1:8001/logs/log. You can click the "Run SQL" button there to insert a new log row, which you can then view at http://127.0.0.1:8001/logs/logs.

Next, the API. You can request a JSON response by adding ?_json=1 to the URL, so try this with curl:

$ curl -XPOST 'http://127.0.0.1:8001/logs/log?_json=1'
{"ok": true, "message": "Query executed, 1 row affected", "redirect": null}

You can also set the Accept: application/json header on your request, like so:

$ curl -XPOST 'http://127.0.0.1:8001/logs/log' -H 'Accept: application/json'
{"ok": true, "message": "Query executed, 1 row affected", "redirect": null}

Or by passing _json=1 as part of a POST submission. Let's try that using requests:

$ ipython
In [1]: import requests
In [2]: requests.post("http://127.0.0.1:8001/logs/log", data={"_json": "1"}).json()
Out[2]: {'ok': True, 'message': 'Query executed, 1 row affected', 'redirect': None}

Path parameters for custom page templates

New mechanism for defining page templates with custom path parameters - a template file called pages/about/{slug}.html will be used to render any requests to /about/something. See Path parameters for pages. (#944)

I added custom page support in Datasette 0.41 back in May, based on the needs of my Niche Museums site. I wanted an easy way to create things like an /about page that returned content from a custom template.

Custom page templates work as a fallback for Datasette 404s. If /about fails to resolve and Datasette was provided a --template-dir on startup, Datasette will check to see if a template exists called templates/pages/about.html.

Datasette 0.49 adds support for path parameters, partially inspired by cookiecutter (which showed me that it's OK to create files with curly braces in their name). You can now create templates with {slug} style wildcards as part of their filenames, and Datasette will route matching requests to that template.

I shipped a new release of Niche Museums today that takes advantage of that feature. I wanted neater URLs for museum pages - to shift from https://www.niche-museums.com/browse/museums/101 to https://www.niche-museums.com/101.

Here's how it works. I have a template file called templates/pages/{id}.html. That template takes advantage of the datasette-template-sql plugin, which adds a sql() function that can be called within the template. It starts like this:

<!DOCTYPE html>
<html>
{% set rows = sql("select * from museums where id = :id", {"id": id}) %}
{% if not rows %}
  {{ raise_404("Museum not found") }}
{% endif %}
<head>
  <meta charset="utf-8">
  {% set museum = rows[0] %}
  ... rest of the template here ...

Datasette made the variable id available to the template context, having captured it from the incoming URL by matching it against the {id}.html template filename.

I then use the sql() function to execute a query, passing in id as a query parameter.

If there are no matches, I call the brand new raise_404() template function which cancels the rendering of this template and falls back to Datasette's default 404 handling.

Otherwise, I set the museum variable to rows[0] and continue rendering the page.

I've basically reinvented PHP and ColdFusion in my custom 404 handler. Hooray!

A few other notable changes

register_output_renderer() render functions can now return a Response. (#953)

The register_output_renderer() plugin hook was designed before Datasette had a documented Response class. It asked your plugin to return a custom {"content_type": "...", "body": "...", "status_code": 200, "headers": {}} dictionary.

You can now return a Response instead, and I plan to remove the dictionary version before Datasette 1.0. I released new versions of datasette-ics, datasette-atom, datasette-copyable and the new datasette-yaml that use the new return format.

New --upgrade option for datasette install. (#945)

I added the datasette install datasette-cluster-map command as a thin wrapper around pip install.

This means you can install new plugins without first figuring out which virtual environment your Datasette is running in - particularly useful if you install Datasette using Homebrew.

I then realized that this could be used to upgrade Datasette itself - but only if you could run pip install -U. So now datasette install -U datasette will upgrade Datasette in place.

New datasette --pdb option. (#962)

This is useful if you are working on Datasette itself, or a Datasette plugin. Pass the --pdb option and Datasette will start an interactive Python debugger session any time it hits an exception.

datasette --get exit code now reflects the internal HTTP status code. (#947)

I'm excited about the pattern of using datasette --get for running simple soundness checks, e.g. as part of a CI suite. Now that the exit code reflects the status code for the page you can write test scripts that look like this:

# Fail if homepage returns 404 or 500
datasette . --get /

New raise_404() template function for returning 404 errors. (#964)

Demonstrated in the Niche Museums example above.

And the rest:

datasette publish heroku now deploys using Python 3.8.5
Upgraded CodeMirror to 5.57.0. (#948)
Upgraded code style to Black 20.8b1. (#958)
Fixed bug where selected facets were not correctly persisted in hidden form fields on the table page. (#963)
Renamed the default error template from 500.html to error.html. Custom error pages are now documented, see Custom error pages. (#965)

In writing up these annotated release notes I spotted a bug with writable canned queries, which I have now fixed in Datasette 0.49.1.


Quoting Sophie Zhang

A manager on Strategic Response mused to myself that most of the world outside the West was effectively the Wild West with myself as the part-time dictator – he meant the statement as a compliment, but it illustrated the immense pressures upon me.

Sophie Zhang


“I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation

“I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation

Sophie Zhang worked as the data scientist for the Facebook Site Integrity fake engagement team. She gave up her severance package in order to speak out internally about what she saw there, and someone leaked her memo to BuzzFeed News. It's a hell of a story: she saw bots and coordinated manual accounts used to influence politics in countries all around the world, and found herself constantly making moderation decisions that had lasting political impact. “With no oversight whatsoever, I was left in a situation where I was trusted with immense influence in my spare time". This sounds like a nightmare - imagine taking on responsibility for protecting democracy in so many different places.

Via @bcrypt


Ben Werdmüller

The trough of sorrow

Every startup goes through the trough of sorrow. I've found it to be a useful way to describe the period that comes after initial enthusiasm and before things start to work out. It turns out it's quite a useful metaphor for non-startup life, too.

There are lots of drawings of it out there on the internet. Here's my interpretation:

Every new big endeavor comes with an initial rush of enthusiasm. You're elated by the possibilities. This is going to be amazing!

Then reality sets in, and the deep slide. "Oh fuck," you'll ask yourself. "What do I do now?"

And that's when you start to experiment. You have to. The thing you thought would work probably won't. Your initial ideas are probably wrong. The story you told yourself during that initial rush of enthusiasm was just that: a story.

You could stay in this trough of sorrow. Many startups, and many people embarking on creative projects, do just that. They cling too needily to their initial idea, or are ineffective in their experimentation. They run out of steam. Sometimes, when more than one person is involved, they start to fight with each other. (65% of early-stage startups fail because of preventable human dynamics. I would bet that more fail because they run out of hope.)

You've got to be willing to experiment more rapidly than you're probably comfortable with, using real people (not aggregate statistics or sales figures) as the arbiter of what will work. You've got to be willing to make decisions based on horribly imperfect, qualitative data. You've got to be willing to take a leap of faith. And you've got to be more invested in the journey than in the end product.

Then maybe - just maybe - you'll make it.

I've been through the trough of sorrow for virtually every startup I've ever worked at: the two I founded, the two where I was first employee, and the one with a hundred million dollars in the bank. Some made it; some didn't.

I've also been through the trough of sorrow for every creative project I've ever made. For some of them, I was able to persevere and make it work; others, I abandoned.

It's about experimentation, it's about luck, it's about treating yourself and your team well, and it's about being able to let go of your precious ideas. If you treat the endeavor as a fait accompli, or go about it as you might in a large organization where you've already found your feet, you will certainly fail. On the other hand, if you embrace a spirit of creative curiosity, there's everything to play for.


reb00ted

Know more about COVID-19 than your public health authority

Hack a large symptom dataset and win $50k in this COVID-19 data science competition! Here is the webinar.

Monday, 14. September 2020

Virtual Democracy

The new Nobel: celebrating science events, their teams, and the history of discovery

The idea of giving out prizes is not itself obsolete; yet all award practices need to be refactored occasionally to capture the heart of the process of doing science, as this expands and changes in the coming decades. And, if it’s time to refactor the Nobel Prize, what does that suggest for the prizes your learned society hands out? Adding an ecosystem of badges (to show off skills and accomplishments) to the recognition landscape helps to replace prizes as a central feature of open science. Since prizes celebrate brilliant work, and as celebrations as a whole add positive affect to your culture, let the prizes continue. But give them some new thought. What is your idea for Nobel 2.0?

Identity Praxis, Inc.

Braze Privacy Data Report

The Braze Privacy Data Report provides useful insights toward understanding U.S. consumers’ (n=2,000) and marketer executives’ (n=500) opinions regarding personal data usage and privacy. 

The findings are insightful. 

Marketers should start to take action now, and prepare themselves for the rise of the self-sovereign individual. 

Individuals expect transparency

According to the study, individuals want transparency. They want to know, 

how their data will be used (94%)
who it will be shared with (74%)
what will be done with it (74%)
what data has been collected (70%)
how long it will be retained (59%)
who is storing it (56%)

Marketers agree (99%), but what consumers and marketers don’t agree on is the value exchange and who should be in control of the policies and rules for privacy and personal information management & oversight.

Ignoring privacy concerns will hurt the bottom line

Marketers beware!

The Braze report notes that 84% of U.S. adults have decided against engaging a brand due to personal information requests, and 71% did so more than once; 75% stopped engaging with a company altogether over privacy concerns. 

The message is clear, properly handling privacy will have a positive impact on the bottom line, and not doing so will be detrimental. 

Value exchange is possible

People are open to exchanging their personal information, but they want something in return; 71% of consumers will share their personal information in exchange for value: 

60% in exchange for cash; a 2018 study by SYZYGY suggested this could be as much as $150 USD (The Price of Personal Data, 2018)
26% for product and incentives
21% for free content. 

We see a gap here between consumers and marketing executives, as only 31% of marketing executives believe consumers should receive cash in exchange for personal data. 

The privacy expectation gap

Again, we have a privacy expectations gap in the U.S.; the report finds that 82% of U.S. adults say privacy is important to them, while only 29% of marketing executives hold the same opinion. 

It is time for the market to listen and begin to respect the sovereignty of the individual.

Rules and regulations

As for who should set the rules and regulations for personal information management & oversight and exchange and privacy protection, it’s not clear.

Marketing executives (88%) find the state-by-state sectoral approach in the US burdensome and believe that federal action could provide clearer direction. And yet, just 52% of marketers and consumers believe the federal government should do more. 

The Bottom line

The bottom line is clear. People are waking up to the fact that their personal information has value. They’re open to exchange, but they expect transparency and compensation. Clearly, from a consumer attitudinal perspective, the personal information economy is at hand, but it is also clear there are industry and regulatory gaps that must first start to close; moreover, we need to develop and deploy the tools to help people safely and securely engage in the exchange of their personal information.  

Reference

Data Privacy Report (p. 4~12). (2020). Wakefield Research. https://info.braze.com/rs/367-GUY-242/images/BrazeDataPrivacyReport.pdf

The Price of Personal Data (pp. 1–17). (2018). SYZYGY. https://media.szg.io/uploads/media/5b0544ac92c3a/the-price-of-personal-data-syzygy-digital-insight-survey-2018.pdf 


Project Your Privacy: Consider Blurring Your House on Google Maps

Proactively managing your privacy is no longer a luxury; it is a must, and it takes conscious effort. One step you may not have considered is blurring your house, face, car, license plate, or some other object on Google Maps.

Why would you want to do this, you might ask? Well, here are a few reasons; I’m sure there are more.

You have street-facing windows from your bedroom or living room
You’ve been a victim and want to minimize your digital footprint
You generally want to maintain your privacy 

If you decide to do it, it is surprisingly easy.  

Steps to Blur your house on Google Maps

You can follow the steps below to blur your house on Google Maps.

Warning: According to Google, the blurring of an image on Google Maps is permanent; you can’t reverse it. So, be sure this is what you want to do before you proceed. 

Go to Google Maps and enter your home address
Click on the picture of your house to enter Street View mode
Click the “Report a problem” link in the bottom-right corner of the screen
Center the red box on the object you want to blur, like your car or home
Click on the object type in the “Request blurring” list
Provide an explanation of why you want to blur the object in the field that appears
Enter your email address and hit Submit

Google will evaluate your request; it is not clear how long this will take. Once they’ve finished reviewing it, they will let you know whether it has been approved or rejected. 

Sunday, 13. September 2020

DustyCloud Brainstorms

Spritely Goblins v0.7 released!

I'm delighted to say that Spritely Goblins v0.7 has been released! This is the first release featuring CapTP support (ie, "capability-secure distributed/networked programming support"), which is a huge milestone for the project!

Okay, caveat... there are still some things missing from the CapTP stuff so far; you can only set up a bidirectional connection between two machines, and can't "introduce" capabilities to other machines on the network. Also setting up connections is an extremely manual process. Both of those should be improved in the next release.

But still! Goblins can now be used to easily write distributed programs! And Goblins' CapTP code even includes such wild features as distributed garbage collection!

As an example (also mentioned in a recent blogpost), I recently wrote a short chat program demo. Both the client and server "protocol" code were less than 250 lines of code, despite having such features as authenticating users during subscription to the chatroom and verifying that messages claimed by the chatroom came from the users it said it did. (The GUI code, by contrast, was a little less than 300 lines.) I wrote this up without writing any network code at all and then tested hooking together two clients over Tor Onion Services using Goblins' CapTP support, and it Just Worked (TM):

What's interesting here is that not a single line of code was added to the backend or GUI to accommodate networking; the host and guest modules merely imported the backend and GUI files completely unchanged and did the network wiring there. Yes, that's what it sounds like: in Goblins you can write distributed asynchronous programs

This is the really significant part of Goblins that's starting to become apparent, and it's all thanks to the brilliant design of CapTP. Goblins continues to stand on the shoulders of giants; thank you to everyone in the ocap community, but especially in this case Michael FIG, Mark S. Miller, Kevin Reid, and Baldur Jóhannsson, all of whom answered an enormous amount of questions (some of them very silly) about CapTP.

There are more people to thank too (too many to list here), and you can see some of them in this monster thread on the captp mailing list which started on May 18th (!!!) as I went through my journey of trying to understand and eventually implement CapTP. I actually started preparing a few weeks before, which really means that this journey took me about four and a half months to understand and implement. As it turns out, CapTP is a surprisingly simple protocol in its conceptualization once you understand what it's doing (though implementing it is a bit more complex). I do hope to try to build a guide for others to understand and implement on their own systems... but that will probably wait until Goblins is ported to another language (given the relative simplicity of the task due to the language similarities, the current plan is to port to Guile next).

Anyway. This is a big deal, a truly exciting moment for goblinkind. If you're excited yourself, maybe join the #spritely channel on irc.freenode.net.

OH! And also, I can't believe I nearly forgot to say this, but if you want to hear more about Spritely in general (not just Goblins), we just released a Spritely-centric episode of FOSS and Crafts. Maybe take a listen!

Saturday, 12. September 2020

John Philpin : Lifestream

Double Trouble - so fixed.

Double Trouble - so fixed.


The UK got suckered by BoJo, Gove, Cummings et al … turns ou

The UK got suckered by BoJo, Gove, Cummings et al … turns out that the EU are not so slow.

… discuss.

Friday, 11. September 2020

Boris Mann's Blog

On the Sunshine Coast near Roberts Creek for a short bit of downtime. This was public beach access down a steep set of stairs by Marlene Road.


John Philpin : Lifestream

Groove, Podia, Kajabi, ClickFunnels and all the rest … anyon

Groove, Podia, Kajabi, ClickFunnels and all the rest … anyone have any feedback as to what they like (or dislike) and why?

To me they are all the same - and all different.

Very confusing.


I just want to thank Alaska for alerting me to the fact that

I just want to thank Alaska for alerting me to the fact that I will have earned triple miles on all flights that I took through the Summer. Really was very kind of them. Generous. Unexpected.

Just wondering if they’d noticed how empty their flights were through the Summer.


Simon Willison

Weeknotes: datasette-dump, sqlite-backup, talks

I spent some time this week digging into Python's sqlite3 internals. I also gave two talks and recorded a third, due to air at PyGotham in October.

sqlite-dump and datasette-backup

I'm running an increasing number of Datasette instances with mutable database files - databases that are updated through a variety of different mechanisms. So I need to start thinking about backups.

Prior to this most of my database files had been relatively disposable: they're built from other sources of data (often by scheduled GitHub Actions) so backups weren't necessary since I could always rebuild them from their point of truth.

Creating a straight copy of a SQLite database file isn't enough for robust backups, because the file may be accepting writes while you are creating the copy.

SQLite has various mechanisms for backups. There's an online backup API and more recent SQLite versions support a VACUUM INTO command which also optimizes the backed up database.
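
For reference, here is a minimal sketch of the VACUUM INTO approach from Python's sqlite3 module; the filenames are placeholders, and it needs a SQLite build new enough to support VACUUM INTO (roughly 3.27+).

import sqlite3

conn = sqlite3.connect("data.db")
# Writes an optimized copy of the database to a new file
conn.execute("VACUUM INTO 'backup-copy.db'")
conn.close()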

I figured it would be useful to expose this functionality by a Datasette plugin - one that could allow automated backups to be directly fetched from Datasette over HTTPS. So I started work on datasette-backup.

For the first backup mode, I decided to take advantage of the connection.iterdump() method that's built into Python's sqlite3 module. This method is an iterator that outputs plain text SQL that can recreate a database. Crucially it's a streaming-compatible mechanism - unlike VACUUM INTO, which would require me to create a temporary file the same size as the database I was backing up.
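
A minimal sketch of that streaming dump, using only the standard library (filenames are placeholders):

import sqlite3

conn = sqlite3.connect("data.db")
with open("backup.sql", "w") as f:
    # iterdump() yields SQL statements that recreate the database
    for statement in conn.iterdump():
        f.write(statement + "\n")
conn.close()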

I started experimenting with it, and ran into a big problem. I make extensive use of SQLite full-text search, but the .sql dumps generated by .iterdump() break with constraint errors if they include any FTS tables.

After a bit of digging I came across a 13 year old comment about this in the cPython source code itself!

The implementation for .iterdump() turns out to be entirely in Python, and way less complicated than I had expected. So I decided to see if I could get FTS table exports working.

In a classic case of yak shaving, I decided to create a Python library called sqlite-dump to solve this problem. And since my existing cookiecutter templates only cover Datasette Plugins or Click apps I first needed to create a new python-lib template in order to create the library I needed for my plugin.

I got it working! Install the datasette-backup plugin on any Datasette instance to get a /-/backup/name-of-database.sql URL that will produce a streaming SQL dump of any attached database.

A weird bug with SQLite FTS and triggers

While working on datasette-backup I noticed a weird issue with some of my SQLite full-text search enabled databases: they kept getting bigger. Way bigger than I would expect them to.

I eventually noticed that the licenses_fts table in my github-to-sqlite demo database had 7 rows in it, but the accompanying licenses_fts_docsize table had 9,141. I would expect it to only have 7 as well.

I was stumped as to what was going on, so I turned to the official SQLite forum. I only recently discovered how useful this is as a resource. Dan Kennedy, one of the three core SQLite maintainers, replied within an hour and gave me some useful hints. The root cause turned out to be the way SQLite triggers work: by default, SQLite runs in recursive_triggers=off mode (for backwards compatibility with older databases). This means that an INSERT OR REPLACE update to a table that is backed by full-text search may not correctly trigger the updates needed on the FTS table itself.

Since there doesn't appear to be any disadvantage to running with recursive_triggers=on, I've now set that as the default for sqlite-utils, as of version 2.17.
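
To set it on a plain sqlite3 connection yourself, the PRAGMA is a one-liner; a sketch (sqlite-utils now does the equivalent for you as of 2.17):

import sqlite3

conn = sqlite3.connect("data.db")
# Off by default for backwards compatibility; turning it on ensures
# INSERT OR REPLACE fires the triggers that keep FTS tables in sync
conn.execute("PRAGMA recursive_triggers = on")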

I then added a sqlite-utils rebuild-fts data.db command in version 2.18 which can rebuild the FTS tables in a database and fix the _fts_docsize problem.

Talks

I presented Build your own data warehouse for personal analytics with SQLite and Datasette at PyCon AU last week. The video is here and includes my first public demo of Dogsheep Beta, my new combined search engine for personal analytics data imported using my Dogsheep family of tools. I took questions in this Google Doc, and filled out more detailed answers after the talk.

I gave a talk at PyRVA a couple of days ago called Rapid data analysis with SQLite and Datasette. Here's the video and Google Doc for that one.

I also pre-recorded my talk for PyGotham: Datasette - an ecosystem of tools for working with Small Data. The conference is in the first week of October and I'll be hanging out there during the talk answering questions and chatting about the project, safe from the stress of also having to present it live!

TIL this week

- Very basic tsc usage
- Display EC2 instance costs per month
- Basic strace to see what a process is doing

Releases this week

- datasette-dns 0.1a1 - 2020-09-10
- datasette-dns 0.1a0 - 2020-09-10
- dogsheep-beta 0.7a0 - 2020-09-09
- sqlite-utils 2.18 - 2020-09-08
- sqlite-utils 2.17 - 2020-09-07
- datasette-backup 0.1 - 2020-09-07
- sqlite-dump 0.1.1 - 2020-09-07
- sqlite-dump 0.1 - 2020-09-07
- sqlite-dump 0.1a - 2020-09-06
- datasette-backup 0.1a - 2020-09-06
- datasette-block-robots 0.3 - 2020-09-06
- datasette-block-robots 0.2 - 2020-09-05
- dogsheep-beta 0.6 - 2020-09-05
- dogsheep-beta 0.5 - 2020-09-04

John Philpin : Lifestream

Human Trafficking In Sports - a veritable who’s who of spons

Human Trafficking In Sports - a veritable who’s who of sponsors, click on the link to register - it’s free - it’s important. Friday 11 September 17.00 CET

Human Trafficking In Sports - a veritable who’s who of sponsors, click on the link to register - it’s free - it’s important.

Friday 11 September 17.00 CET


Simon Willison

Stories of reaching Staff-plus engineering roles

Stories of reaching Staff-plus engineering roles Extremely useful collection of career stories from staff-level engineers at a variety of different companies, collected by Will Larson. Via Amy Unger

Stories of reaching Staff-plus engineering roles

Extremely useful collection of career stories from staff-level engineers at a variety of different companies, collected by Will Larson.

Via Amy Unger


Doc Searls Weblog

On fire

The white mess in the image above is the Bobcat Fire, spreading now in the San Gabriel Mountains, against which Los Angeles’ suburban sprawl (that’s it, on the right) reaches its limits of advance to the north. It makes no sense to build very far up or into these mountains, for two good reasons. One […]

The white mess in the image above is the Bobcat Fire, spreading now in the San Gabriel Mountains, against which Los Angeles’ suburban sprawl (that’s it, on the right) reaches its limits of advance to the north. It makes no sense to build very far up or into these mountains, for two good reasons. One is fire, which happens often and awfully. The other is that the mountains are geologically new, and falling down almost as fast as they are rising up. At the mouths of valleys emptying into the sprawl are vast empty reservoirs—catch basins—ready to be filled with rocks, soil and mud “downwasting,” as geologists say, from a range as big as the Smokies, twice as high, ready to burn and shed.

Outside of its northern rain forests and snow-capped mountains, California has just two seasons: fire and rain. Right now we’re in the midst of fire season. Rain is called Winter, and it has been dry since the last one. If the Bobcat fire burns down to the edge of Monrovia, or Altadena, or any of the towns at the base of the mountains, heavy winter rains will cause downwasting in a form John McPhee describes in Los Angeles Against the Mountains:

The water was now spreading over the street. It descended in heavy sheets. As the young Genofiles and their mother glimpsed it in the all but total darkness, the scene was suddenly illuminated by a blue electrical flash. In the blue light they saw a massive blackness, moving. It was not a landslide, not a mudslide, not a rock avalanche; nor by any means was it the front of a conventional flood. In Jackie’s words, “It was just one big black thing coming at us, rolling, rolling with a lot of water in front of it, pushing the water, this big black thing. It was just one big black hill coming toward us.

In geology, it would be known as a debris flow. Debris flows amass in stream valleys and more or less resemble fresh concrete. They consist of water mixed with a good deal of solid material, most of which is above sand size. Some of it is Chevrolet size. Boulders bigger than cars ride long distances in debris flows. Boulders grouped like fish eggs pour downhill in debris flows. The dark material coming toward the Genofiles was not only full of boulders; it was so full of automobiles it was like bread dough mixed with raisins. On its way down Pine Cone Road, it plucked up cars from driveways and the street. When it crashed into the Genofiles’ house, the shattering of safety glass made terrific explosive sounds. A door burst open. Mud and boulders poured into the hall. We’re going to go, Jackie thought. Oh, my God, what a hell of a way for the four of us to die together.

Three rains ago we had debris flows in Montecito, the next zip code over from our home in Santa Barbara. I wrote about it in Making sense of what happened to Montecito. The flows, which destroyed much of the town and killed about two dozen people, were caused by heavy rains following the Thomas Fire, which at 281,893 acres was the biggest fire in California history at the time. The Camp Fire, a few months later, burned a bit less land but killed 85 people and destroyed more than 18,000 buildings, including whole towns. This year we already have two fires bigger than the Thomas, and at least three more growing fast enough to take the lead. You can see the whole updated list on the Los Angeles Times California Wildfires Map.

For a good high-altitude picture of what’s going on, I recommend NASA’s FIRMS (Fire Information for Resource Management System). It’s a highly interactive map that lets you mix input from satellite photographs and fire detection by orbiting MODIS and VIIRS systems. MODIS is onboard the Terra and Aqua satellites; and VIIRS is onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) spacecraft. (It’s actually more complicated than that. If you’re interested, dig into those links.) Here’s how the FIRMS map shows the active West Coast fires and the smoke they’re producing:

That’s a lot of cremated forest and wildlife right there.

I just put those two images and a bunch of others up on Flickr, here. Most are of MODIS fire detections superimposed on 3-D Google Earth maps. The main thing I want to get across with these is how large and anomalous these fires are.

True: fire is essential to many of the West’s wild ecosystems. It’s no accident that the California state tree, the Coast Redwood, grows so tall and lives so long: it’s adapted to fire. (One can also make a case that the state flower, the California Poppy, which thrives amidst fresh rocks and soil, is adapted to earthquakes.) But what’s going on here is something much bigger. Explain it any way you like, including strange luck.

Whatever you conclude, it’s a hell of a show. And vice versa.


John Philpin : Lifestream

Farewell Diana. Missed already.

Farewell Diana. Missed already.

Farewell Diana. Missed already.

Thursday, 10. September 2020

Bill Wendel's Real Estate Cafe

Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices?

Sent this email two weeks ago, and still have not received a response even though a similar effort caused Barron’s to tone down their headline… The post Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices? first appeared on Real Estate Cafe.

Sent this email two weeks ago, and still have not received a response even though a similar effort caused Barron’s to tone down their headline…

The post Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices? first appeared on Real Estate Cafe.


Simon Willison

15 rules for blogging, and my current streak

15 rules for blogging, and my current streak Matt Webb is on a 24 week streak of blogging multiple posts a week and shares his rules on how he's doing this. These are really good rules. A rule of thumb that has helped me a lot is to fight back against the temptation to make a post as good as I can before I publish it - because that way lies a giant drafts folder and no actual published content.

15 rules for blogging, and my current streak

Matt Webb is on a 24 week streak of blogging multiple posts a week and shares his rules on how he's doing this. These are really good rules. A rule of thumb that has helped me a lot is to fight back against the temptation to make a post as good as I can before I publish it - because that way lies a giant drafts folder and no actual published content. "Perfect is the enemy of shipped".

Via @intrcnnctd

Wednesday, 09. September 2020

Boris Mann's Blog

My bmannconsulting site is now a Jekyll-with-backlinks public notes garden #secondbrain, and it’s running on @FissionCodes. 🚧 I’m doing some funky DNS things so likely a little slow. 🚧

My bmannconsulting site is now a Jekyll-with-backlinks public notes garden #secondbrain, and it’s running on @FissionCodes.

🚧 I’m doing some funky DNS things so likely a little slow. 🚧


reb00ted

Did the world end and somebody forgot to tell me?

It is 11am and so dark that you cannot do anything without the lights on. The sky is a color that is hard to describe. The photo does not do it justice. Perhaps the color you would expect in a sci-fi movie on a doomed, collapsed, evil planet. And that is how it is in Sunnyvale, California, and probably many other places on the North American West coast this morning.

It is 11am and so dark that you cannot do anything without the lights on. The sky is a color that is hard to describe. The photo does not do it justice. Perhaps the color you would expect in a sci-fi movie on a doomed, collapsed, evil planet.

And that is how it is in Sunnyvale, California, and probably many other places on the North American West coast this morning.


Simon Willison

AVIF has landed

AVIF has landed AVIF support landed in Chrome 85 a few weeks ago. It's a new lossy royalty-free image format derived from AV1 video and it's really impressive - it can achieve similar results to JPEG using a quarter of the file size! Jake digs into AVIF in detail, providing lots of illustrative examples created using the Squoosh online compressor, which now supports AVIF encoding. Jake used the

AVIF has landed

AVIF support landed in Chrome 85 a few weeks ago. It's a new lossy royalty-free image format derived from AV1 video and it's really impressive - it can achieve similar results to JPEG using a quarter of the file size! Jake digs into AVIF in detail, providing lots of illustrative examples created using the Squoosh online compressor, which now supports AVIF encoding. Jake used the same WebAssembly encoder from Squoosh to decode AVIF images in a web worker so that the demos in his article would work even for browsers that don't yet support AVIF natively.


Ben Werdmüller

On resiliency at work

I use Range every day with my team - so I was delighted to chat with them about resilience at work. Culture is the most important thing in any team. By a mile. Your collective norms, beliefs, and practices will define how everyone acts and reacts, how safe they feel to be themselves at work, and as a direct result, how high quality the work itself is. You'll hear about my own journey, and mo

I use Range every day with my team - so I was delighted to chat with them about resilience at work.

Culture is the most important thing in any team. By a mile. Your collective norms, beliefs, and practices will define how everyone acts and reacts, how safe they feel to be themselves at work, and as a direct result, how high quality the work itself is.

You'll hear about my own journey, and most importantly, how creating a high-performing team means supporting the whole human.

You can read the whole interview here.


Boris Mann's Blog

I just backed @doctorow’s Kickstarter for an audio book version of Attack Surface, the third Little Brother book. DRM-free to fight Audible, which has 90% of the market.

I just backed @doctorow’s Kickstarter for an audio book version of Attack Surface, the third Little Brother book.

DRM-free to fight Audible, which has 90% of the market.

Tuesday, 08. September 2020

Doc Searls Weblog

The smell of boiling frog

I just got this email today: Which tells me, from a sample of one (after another, after another) that Zoom is to video conferencing in 2020 what Microsoft Windows was to personal computing in 1999. Back then one business after another said they would only work with Windows and what was left of DOS: Microsoft’s […]

I just got this email today:

Which tells me, from a sample of one (after another, after another) that Zoom is to video conferencing in 2020 what Microsoft Windows was to personal computing in 1999. Back then one business after another said they would only work with Windows and what was left of DOS: Microsoft’s two operating systems for PCs.

What saved the personal computing world from being absorbed into Microsoft was the Internet—and the Web, running on the Internet. The Internet, based on a profoundly generative protocol, supported all kinds of hardware and software at an infinitude of end points. And the Web, based on an equally generative protocol, manifested on browsers that ran on Mac and Linux computers, as well as Windows ones.

But video conferencing is different. Yes, all the popular video conferencing systems run in apps that work on multiple operating systems, and on the two main mobile device OSes as well. And yes, they are substitutable. You don’t have to use Zoom (unless, in cases like mine, where talking to my doctors requires it). There’s still Skype, Webex, Microsoft Teams, Google Hangouts and the rest.

But all of them have a critical dependency through their codecs. Those are the ways they code and decode audio and video. While there are some open source codecs, all the systems I just named use proprietary (patent-based) codecs. The big winner among those is H.264, aka AVC-1, which Wikipedia says “is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019.” Also,

H.264 is perhaps best known as being the most commonly used video encoding format on Blu-ray Discs. It is also widely used by streaming Internet sources, such as videos from Netflix, Hulu, Prime Video, Vimeo, YouTube, and the iTunes Store, Web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) systems.

H.264 is protected by patents owned by various parties. A license covering most (but not all) patents essential to H.264 is administered by a patent pool administered by MPEG LA.[9]

The commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries for its open source H.264 encoder.

This is generative, clearly, but not as generative as the Internet and the Web, which are both end-to-end by design.

More importantly, AVC-1 in effect slides the Internet and the Web into the orbit of companies that have taken over what used to be telephony and television, which are now mooshed together. In the Columbia Doctors example, Zoom is the new PBX. The new classroom is every teacher and kid on her or his own rectangle, “zooming” with each other through the new telephony. The new TV is Netflix, Disney, Comcast, Spectrum, Apple, Amazon and many others, all competing for wedges of our Internet access and entertainment budgets.

In this new ecosystem, you are less the producer than you were, or would have been, in the early days of the Net and the Web. You are the end user, the consumer, the audience, the customer. Not the producer, the performer. Sure, you can audition for those roles, and play them on YouTube and TikTok, but those are somebody else’s walled gardens. You operate within them at their grace. You are not truly free.

And maybe none of us ever were, in those early days of the Net and the Web. But it sure seemed that way. And it does seem that we have lost something.

Or maybe just that we are slowly losing it, in the manner of boiling frogs.

Do we have to? I mean, it’s still early.

The digital world is how old? Decades, at most.

And how long will it last? At the very least, more than that. Centuries or millennia, probably.

So there’s hope.

[Later…] For some of that, dig OBS—Open Broadcaster Software’s OBS Studio: Free and open source software for video recording and live streaming. HT: Joel Grossman (@jgro).

Also, though unrelated, why is Columbia Doctors’ Telehealth leaking patient data to advertisers? See here.


Boris Mann's Blog

Had a great get-to-know-you call with @JacobSayles, intro’d by @LeeLefever. Jacob has a long history with coworking, and we ended up jamming on Community Land Trusts and related models for #Vancouver. If you’re interested in creating new shared housing models — get in touch!

Had a great get-to-know-you call with @JacobSayles, intro’d by @LeeLefever.

Jacob has a long history with coworking, and we ended up jamming on Community Land Trusts and related models for #Vancouver.

If you’re interested in creating new shared housing models — get in touch!


Altmode

Line voltage fluctuations

This past July, we replaced our roof and at the same time updated our solar panels and inverter (I’ll write about the new solar equipment in the near future). I was monitoring the new equipment somewhat more closely than usual, and noticed on one warm August day that the inverter had shut down due to […]

This past July, we replaced our roof and at the same time updated our solar panels and inverter (I’ll write about the new solar equipment in the near future). I was monitoring the new equipment somewhat more closely than usual, and noticed on one warm August day that the inverter had shut down due to low line voltage. Having home solar generation shut down on a warm day with a high air conditioning load is the opposite of what the utility, Pacific Gas & Electric (PG&E), should want to happen. In addition to shutting down solar power inverters, low line voltage can be hard on power equipment, such as motors.

At a time when our voltage was particularly low, I opened a low line voltage case with PG&E. This resulted in a call from a field technician, who told me several things:

- PG&E has been aware of the voltage regulation problem in my neighborhood for some time.
- The problem is likely to be due to the older 4-kilovolt service in my part of town. Newer areas have 12-kilovolt service that would be expected to have about 1/9 the voltage drop with an equivalent load.
- Another possible cause is the pole transformer that feeds our house and nearby neighbors, which the technician told me is overloaded. [Other neighbors that aren’t as close are reporting these problems as well, so they would have to have similarly overloaded transformers.]
- Line voltage at my home is supposed to be between 114 and 126 VAC.

Another technician from PG&E came out a couple of days later to install a voltage monitor on the line. But it occurred to me that I have been collecting data since 2007 from my solar inverter that includes voltage data. A total of about 3.2 million data points. So I thought I’d check to see what I can find out from that.
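
The gist of that check, as a rough sketch against the MySQL database described below - every table and column name here is hypothetical, not my actual schema:

```python
import mysql.connector

# Hypothetical connection details and schema
conn = mysql.connector.connect(user="solar", password="...", database="solar")
cur = conn.cursor()

# Count, per day, the readings where line voltage dropped below 110 VAC
# while the inverter was actually producing power
cur.execute("""
    SELECT DATE(reading_time) AS day, COUNT(*) AS low_readings
    FROM inverter_readings
    WHERE volts < 110 AND ac_power > 0
    GROUP BY DATE(reading_time)
    ORDER BY day
""")
for day, low_readings in cur:
    print(day, low_readings)
```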

My data are in a MySQL database that I can query easily. So I asked it how many days there have been where the line voltage went below 110 VAC (giving PG&E some margin here) and the solar inverter was fully operating. There were 37 such days, including everything from very brief voltage dips (<10 minutes) up to over 5 hours of undervoltage on September 2, 2017. The line voltage that day looked like this:

A more recent representative sample is this:

Part of my concern is that this problem seems to be getting worse. Here is a table of the number of days where <110 VAC lasted for more than 10 minutes:

Year | Days with Undervoltage | Undervoltage Minutes
2007 | 0 | 0
2008 | 0 | 0
2009 | 1 | 14
2010 | 0 | 0
2011 | 0 | 0
2012 | 0 | 0
2013 | 0 | 0
2014 | 0 | 0
2015 | 1 | 19
2016 | 1 | 10
2017 | 14 | 1386
2018 | 0 | 0
2019 | 7 | 561
2020 (to June 30) | 2 | 160

And as I mentioned above, the problem seems to occur on particularly hot days (which is when others run their air conditioners; we don’t have air conditioning). Fortunately, the NOAA National Centers for Environmental Information provide good historical data on high and low temperatures. I was able to download the data for Los Altos and relate it to the days with the outages. Indeed, the days with the most serious voltage problems are very warm (high of 110 on 9/2/2017 and 100 degrees on 6/3/2020 shown above).

Does that mean we’re seeing purely a temperature effect that is happening more often due to global warming? It doesn’t seem likely because there have been very warm days in past years with little voltage drop. Here’s a day with a recorded high temperature of 108 in 2009:

My street, and the City of Los Altos more generally, has seen a lot of extensive home renovations and tear-down/rebuilds the past few years. The section of the street I live on, which has about 50 homes, currently has three homes being completely rebuilt and currently unoccupied. So this is only going to get worse.

The ongoing renovations and rebuilds in Los Altos are all considerably larger than the homes (built in the 1950s) that they replace, and I expect nearly all have air conditioning while the original homes didn’t. This is resulting in a considerably higher electrical load on infrastructure that wasn’t designed for this. While this is mitigated somewhat by the prevalence of solar panels in our area, the City needs to require that PG&E upgrade its infrastructure before issuing new building permits that will exacerbate this problem.

SolarEdge inverter display

John Philpin : Lifestream

Just spotted on the twitter banner of @tim_walters - too imp

Just spotted on the twitter banner of @tim_walters - too important not to share.

Just spotted on the twitter banner of @tim_walters - too important not to share.


🎶 Douglas Rushkoff talks to Michael Nesmith - yes … THAT Mic

🎶 Douglas Rushkoff talks to Michael Nesmith - yes … THAT Michael Nesmith … I knoooow .. right?

🎶 Douglas Rushkoff talks to Michael Nesmith - yes … THAT Michael Nesmith … I knoooow .. right?


This week’s People First Podcast features LaTonya Peoples -

This week’s People First Podcast features LaTonya Peoples - a classical violinist who composed the music I use on the podcast. Turns out there is a whole lot to LaTonya that I did not know. Call It Serendipity Enjoy … and let me know what you think.

This week’s People First Podcast features LaTonya Peoples - a classical violinist who composed the music I use on the podcast. Turns out there is a whole lot to LaTonya that I did not know.

Call It Serendipity

Enjoy … and let me know what you think.


Came across this through a different collection of people -

Came across this through a different collection of people - but believe that there are sufficient people in here to share and solicit thoughts. Local-first software - You own your data, in spite of the cloud

Came across this through a different collection of people - but believe that there are sufficient people in here to share and solicit thoughts.

Local-first software - You own your data, in spite of the cloud

Monday, 07. September 2020

John Philpin : Lifestream

I doubt you did - but in case you did do that ‘Ancestry DNA

I doubt you did - but in case you did do that ‘Ancestry DNA thing’ … and you wondered ‘why not - what could possibly go wrong’ … this is what possibly could go wrong ….. Lawyer’s warning over family tree DNA tests as Blackstone buys Ancestry for $4.7b

I doubt you did - but in case you did do that ‘Ancestry DNA thing’ … and you wondered ‘why not - what could possibly go wrong’ … this is what possibly could go wrong …..

Lawyer’s warning over family tree DNA tests as Blackstone buys Ancestry for $4.7b


Facebook, The PR Firm - Margins by Ranjan Roy and Can Duruk

Facebook, The PR Firm - Margins by Ranjan Roy and Can Duruk — ”One of the jokes² at Uber was that if things ever did not work out as a ride-sharing app, we could just pivot to becoming a law or a consulting firm since we were so good at introducing ride-sharing to many cities and finding creative ways to increase awareness and gain market share. The darker way to read the joke is that Uber

Facebook, The PR Firm - Margins by Ranjan Roy and Can Duruk

”One of the jokes² at Uber was that if things ever did not work out as a ride-sharing app, we could just pivot to becoming a law or a consulting firm since we were so good at introducing ride-sharing to many cities and finding creative ways to increase awareness and gain market share. The darker way to read the joke is that Uber is less of a tech company, but a financial entity composed to find regulatory arbitrage opportunities and suck the profits dry until it can find other sources of revenue. The joke is funny because it is true.”


reb00ted

The weather has already gone crazy at 1 degree warming

“Denver is under a winter storm watch two days after the city hit 101 degrees”. Now imagine if we get to 2 or more degrees of warming, which will happen during the lifetime of our children.

“Denver is under a winter storm watch two days after the city hit 101 degrees”. Now imagine if we get to 2 or more degrees of warming, which will happen during the lifetime of our children.


John Philpin : Lifestream

”Judge a man by his questions rather than his answers.”

”Judge a man by his questions rather than his answers.” Voltaire … reminded of this in today’s Stowe Boyd newsletter, which in turn reminded me … (now don’t judge me) … but boy - do I have some questions !

”Judge a man by his questions rather than his answers.”

Voltaire

… reminded of this in today’s Stowe Boyd newsletter, which in turn reminded me … (now don’t judge me) … but boy - do I have some questions !


Aaron Parecki

How to make an RTMP Streaming Server and Player with a Raspberry Pi

In this tutorial we'll use a Raspberry Pi to build an RTMP server that plays out any video stream it receives over the Raspberry Pi's HDMI port automatically. This effectively turns a Raspberry Pi into a Blackmagic Streaming Bridge.

In this tutorial we'll use a Raspberry Pi to build an RTMP server that plays out any video stream it receives over the Raspberry Pi's HDMI port automatically. This effectively turns a Raspberry Pi into a Blackmagic Streaming Bridge.

You can use this to stream from OBS or an ATEM Mini across your local network or the internet, and convert that to an HDMI signal in your studio to mix with other HDMI sources locally.

Parts

Here's a list of all the parts you'll need for this.

Of course, you'll need a Raspberry Pi. It doesn't need a ton of RAM, I got one with 4GB but it would probably work fine with 2GB as well. I prefer to buy the parts individually rather than the full kits, but either way is fine. If you get the bare Raspberry Pi you'll need to make sure to get a good quality power supply like this one.

I have tested this on a Raspberry Pi 3, and it does work, but there's much more of a delay, so I definitely recommend doing this with a Raspberry Pi 4 instead.

Get a good quality SD card for the Pi. We won't be doing anything super disk intensive, but it will generally perform a lot better with an SD Card with an "A1" or "A2" rating. You don't need much disk space, 16gb, 32gb or 64gb cards are all good options. The "A1" or "A2" ratings mean the card is optimized for running applications rather than storing photos or videos. I like the Sandisk cards, either the 32gb A1 or the slightly faster 64gb A2.

You'll need a case for the Pi as well. I like this one which is also a giant heat sink so that it's completely silent.

The Raspberry Pi 4 has a Micro HDMI port rather than a full size, so you'll need a cable to plug that in to a regular size HDMI port like this one.

Make sure you have your Raspberry Pi and whatever device you're streaming from connected via a wired network. While this will probably work over wifi, I wouldn't count on wifi to be reliable or fast for this.

Prepare the Raspberry Pi SD Card

First, head over to raspberrypi.org/downloads to download the Raspberry Pi Imager app. This app makes it super easy to create an SD card with the Raspberry Pi OS.

When you choose the operating system to install, select "Other"

then choose "Lite"

We don't need a desktop environment for this so it will be easier to use the command line.

Go ahead and write this to the SD card, then take out the SD card and put it into the Raspberry Pi.

Configure the OS

The first time you boot it up it will take a few seconds and then it will prompt you to log in.

Log in using the default username and password. Enter the username "pi", and then type the password "raspberry". You won't see the password as you're typing it.

It's a good idea to change the password to something else, so go ahead and do that now. Type:

sudo raspi-config

and choose the first option in the menu by pressing "Enter". When you type the new password, you won't see it on the screen, but it will ask you to type it twice to confirm. Press "tab" twice to select "Finish" to close out of this menu.

Next we need to configure the video mode so we know what kind of signal the Raspberry Pi will be sending on the HDMI port. You'll need to edit a text file to make these changes.

sudo nano /boot/config.txt

This will launch a text editor to edit this file. We need to change the following things in the file. These may be commented out with a # so you can delete that character to uncomment the line and make the necessary changes. These options are documented here.

# Make sure the image fits the whole screen
disable_overscan=1
# Set HDMI output mode to Consumer Electronics Association mode
hdmi_group=1
# Enable audio over HDMI
hdmi_drive=2
# Set the output resolution and frame rate to your desired option
# 1920x1080 60fps
hdmi_mode=16
# 1920x1080 25fps
hdmi_mode=33
# 1920x1080 30fps
hdmi_mode=34

To save your changes, press Ctrl+X and then "Y" to confirm, then "Enter". At this point we need to reboot to make the changes take effect, so type:

sudo reboot

and wait a few seconds for it to reboot.

Install and Configure Software

We'll be using nginx with the RTMP module as the RTMP server, and then connect omxplayer to play out the stream over the HDMI port.

Install the necessary software by typing:

sudo apt update
sudo apt install omxplayer nginx libnginx-mod-rtmp

We need to give nginx permission to use the video port, so do that with the following command:

sudo usermod -aG video www-data

Now we need to set up an RTMP server in nginx. Edit the main nginx config file:

sudo nano /etc/nginx/nginx.conf

Scroll all the way to the bottom and copy the below text into the config file:

rtmp {
    server {
        listen 1935;
        application live {
            # Enable livestreaming
            live on;
            # Disable recording
            record off;

            # Allow only this machine to play back the stream
            allow play 127.0.0.1;
            deny play all;

            # Start omxplayer and play the stream out over HDMI
            exec omxplayer -o hdmi rtmp://127.0.0.1:1935/live/$name;
        }
    }
}

The magic sauce here is the exec line that starts omxplayer. omxplayer is an application that can play an RTMP stream out over the Raspberry Pi's HDMI port. The exec line will run this command whenever a new RTMP stream is received. The stream key will be set to the $name variable. Note that this means any stream key will work, there is no access control the way we have it configured here. You can read up on the RTMP module if you'd like to learn how to lock down access to only specific stream keys or if you want to enable recording the stream.

Save this file by pressing ctrl+X, then Y, then enter.

To test the config file for errors, type:

sudo nginx -t

If that worked, you can reload nginx to make your changes take effect:

sudo nginx -s reload

At this point the Raspberry Pi is ready! You can now stream to this box and it will output the received stream over HDMI! Any stream key will work, and you can stream using any sort of device or software like OBS. You'll need to find the IP address of the Raspberry Pi which you can do by typing

hostname -I

To stream to the Raspberry Pi, use the RTMP URL: rtmp://YOUR_IP_ADDRESS/live and anything as the stream key.
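
If you want to sanity-check the server without OBS or an ATEM, one option (my own addition, not part of the original walkthrough) is to push a local video file at it with ffmpeg. A minimal Python wrapper, assuming ffmpeg is installed and a test.mp4 file exists, might look like this:

```python
import subprocess

# Hypothetical values - substitute your Pi's IP address; any stream key works
PI_IP = "192.168.1.50"
STREAM_KEY = "test"

subprocess.run([
    "ffmpeg",
    "-re",                 # read the input at its native frame rate
    "-i", "test.mp4",      # any local video file
    "-c:v", "libx264",     # encode video as H.264
    "-c:a", "aac",         # encode audio as AAC
    "-f", "flv",           # RTMP uses the FLV container
    f"rtmp://{PI_IP}/live/{STREAM_KEY}",
], check=True)
```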

Setting up the ATEM Mini

We'll now walk through setting up an ATEM Mini Pro to stream to the Raspberry Pi.

If you're familiar with customizing your ATEM Software's Streaming.xml file, you can add a new entry with the Raspberry Pi's IP address. But there is another way which I like better, which is to create a custom streaming file that you can send to a remote guest and they can add it in their Software Control app without needing to edit any XML.

Create a new XML file with the following contents. This is the contents of one of the <service> blocks from the Streaming.xml file, wrapped with a <streaming> element.

<streaming>
  <service>
    <name>Raspberry Pi</name>
    <servers>
      <server>
        <name>Primary</name>
        <url>rtmp://RASPBERRY_PI_IP/live</url>
      </server>
    </servers>
    <profiles>
      <profile>
        <name>Streaming High</name>
        <config resolution="1080p" fps="60">
          <bitrate>9000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
        <config resolution="1080p" fps="30">
          <bitrate>6000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
      </profile>
      <profile>
        <name>Streaming Low</name>
        <config resolution="1080p" fps="60">
          <bitrate>4500000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
        <config resolution="1080p" fps="30">
          <bitrate>3000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
      </profile>
    </profiles>
  </service>
</streaming>

Replace the IP address with your own, and you can customize the <name> as well which will show up in the ATEM Software Control. Save this file with an .xml extension.

In the ATEM Software Control app, click "Stream" from the menu bar, choose "Load Streaming Settings", and select the XML file you created.

This will create a new option in the "Live Stream" section where you can stream to the Raspberry Pi instead of YouTube!

Go ahead and enter a streaming key now, it doesn't matter what you enter since there is no access control on the Raspberry Pi. Click "On Air" and in a few seconds you should see the image pop up on the Raspberry Pi!


Identity Praxis, Inc.

Geofencing Warrants

Today in Wired, I read about geofencing warrants. Geofence Warrants are a law enforcement practice. Law enforcement submits a request to tech companies, notably Google, Apple, Uber, Facebook, for a list of all the devices in, at, or near a location during a specific period of time. The objective of this practice is to identify people […] The post Geofencing Warrants appeared first on Identi

Today in Wired, I read about geofencing warrants.

Geofence Warrants are a law enforcement practice. Law enforcement submits a request to tech companies, notably Google, Apple, Uber, Facebook, for a list of all the devices in, at, or near a location during a specific period of time. The objective of this practice is to identify people to be interviewed as part of an investigation. 

For example: to identify devices within a few hundred yards to a mile of a murder scene or accident.

As of the summer of 2020, this practice is coming under scrutiny because of the privacy concerns it raises: it could cause harm, adversely affecting individuals’ right to privacy and their civil liberties.

For instance, it could be used

- to identify people during a protest, in violation of the protesters’ First Amendment rights, i.e. the right to free speech.
- in densely populated areas, in violation of individuals’ Fourth Amendment rights, namely their right to be “secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”

I’m all for the development of technology and the use of technology to maintain societal safety and democracy, but I do have my concerns. I want to believe that most people and organizations have well-meaning intentions, that they want to create value for people and themselves, but capabilities like geofencing have so many applications that can be used for malign purposes. We need to be careful.

I think it is crucial that we develop a wide range of technology that leverages personal data, like an individual’s location, to generate value for both society and for the individual, but it is imperative that we find ways to ensure individual users of this technology have the ability to maintain their agency and self-sovereignty.

The post Geofencing Warrants appeared first on Identity Praxis, Inc..


Ben Werdmüller

Crypto-unions and lobster rolls

Happy Labor Day. While the rest of the world celebrates its labor movements on May Day, America chose its date to disassociate with a massacre of labor protesters by police in 1886. A further protest in Chicago's Haymarket Square devolved two days later: a bomb was exploded by an unknown person and killed a police officer, and the cops again indiscriminately opened fire. Ultimately, socialists

Happy Labor Day. While the rest of the world celebrates its labor movements on May Day, America chose its date to disassociate it from a massacre of labor protesters by police in 1886. A further protest in Chicago's Haymarket Square devolved two days later: a bomb thrown by an unknown person killed a police officer, and the cops again indiscriminately opened fire. Ultimately, socialists were blamed, as they always are, and the country succumbed to martial law.

The one country to have not chosen May Day to commemorate what became known as the Haymarket Affair is the one it happened in. The reason we drink beer and eat summer picnic food on Labor Day instead of considering its meaning is not an accident: it was a deliberate choice to bury the past and redirect our energy towards a holiday that celebrates "the dignity of work".

What insane, radical, unworkable idea were the protesters taking to the streets to advocate? It turns out it was the eight hour workday - something we think of as more or less normal today. In the cold light of 2020, it's hard to imagine guns being drawn over a 40-hour workweek.

Of course, that's how it works: what seemed radical then is normal now. What seems radical now will be completely normal in the future. While Labor Day itself is less a celebration of the labor movement and more a commemoration of an attempt to diffuse it, history shows that it tends to resist that diffusion. The march towards equality is not inevitable, but it has been unstoppable.

Unions are an important part of that struggle: a counter-balancing force to corporate power that allows workers to organize together and meaningfully negotiate for better working conditions. While not every union is good, the idea of unions is very good. 65% of Americans continue to support unions, but only 10% are actually a member of one. Meanwhile, the stagnation of worker wages is directly connected to the decline of unions.

I have no doubt that unions have been intentionally scuppered since at least 1947, when the Taft-Hartley Act banned sympathy boycotts and made "right to work" laws possible. But they've also been in need of the kind of change and innovation we've seen in other organizations over the last few decades. What does it mean to have a union for a remote workforce? Or for gig workers? And how does the idea of a union change when everyone is connected by the internet and can communicate instantly with one another?

Kati Sipp's excellent site Hack the Union has been expertly covering these kinds of changes for years. I think it's also time for technologists - particularly open source and decentralization advocates - to think about how their skills could be brought to bear in order to create new kinds of transparent unions.

Movements like Occupy Wall Street and modern anti-fascists use a headless, non-hierarchical leadership structure rooted in transparency and consensus, making them harder to infiltrate or eradicate. What if unions learned from them and used tools inspired by Open Collective to organize dues in the open?

Decentralized Autonomous Organizations (DAOs) were created to take this leaderless approach and apply it to new kinds of businesses. Here, a blockchain is used to keep track of "who" is a member (using pseudonymous tokens instead of real-world identities), who can vote, and items put to a vote. Resources can be allocated based on what the organization decides. Instead of leaders, rules are maintained by code.

While DAOs were built to support a kind of libertarian ideal for business, what if they could be harnessed to support modern unions? The privacy and anonymity of individual members could be maintained while allowing any member to vote. Available resources could be inspected by anybody. There would be little potential for embezzlement and corruption, because of the unbreakable rules governing resource allocation, and membership could be spread organically.

I'm not a blockchain zealot, and there's no hard need for a potential solution for unions to be decentralized in this way. (Crypto-unions are just one suggestion.) What I think is needed is a conversation about how best to organize in the 21st century, so that the labor movement can continue its good work, so that worker rights can improve, and wages can break free of their stagnation. What's needed is a stronger opposite force to corporate power that allows ordinary working people to once again have a voice. The result will be to break more people out of poverty and create a more equal society for all.

In the meantime, enjoy your lobster rolls.

 

Photo from the Kheel Center archive.

Sunday, 06. September 2020

John Philpin : Lifestream

Cory has just read Shoshanna and has a position’ It inclu

Cory has just read Shoshanna and has a position. It includes him reading an extract from his new, free ‘book’: How To Destroy Surveillance Capitalism. The irony of publishing your position on Medium goes unrecognized - but heh - each to his own!

Cory has just read Shoshanna and has a position.

It includes him reading an extract from his new, free ‘book’;

How To Destroy Surveillance Capitalism

The irony of publishing your position on Medium goes unrecognized - but heh - each to his own!


reb00ted

Another collapse cartoon

Not wrong. Over the first weeks of the crisis several cartoons went viral, trying to capture what's still ahead of us. In my view all of them were incomplete. I got in touch with cartoonist Stefan Roth to draw a 'complete' version for my forthcoming book 'The Corona Chronicles‘. Here it is: pic.twitter.com/agytTX77S7 — Ralph Thurm (@aheadahead1) September 5, 2020

Not wrong.

Over the first weeks of the crisis several cartoons went viral, trying to capture what's still ahead of us. In my view all of them were incomplete. I got in touch with cartoonist Stefan Roth to draw a 'complete' version for my forthcoming book 'The Corona Chronicles‘. Here it is: pic.twitter.com/agytTX77S7

— Ralph Thurm (@aheadahead1) September 5, 2020

John Philpin : Lifestream

The store is ‘full’, everybody wearing masks except 2 … the

The store is ‘full’, everybody wearing masks except 2 … the same 2 that decide that the cash till lines do not apply to them and just walk up to the front. Couldn’t decide which I was experiencing: ‘exceptionalism’, ‘arrogance’, ‘privilege’ or just plain old ‘idiocy’.

The store is ‘full’, everybody wearing masks except 2 … the same 2 that decide that the cash till lines do not apply to them and just walk up to the front.

Couldn’t decide which I was experiencing

‘exceptionalism’, ‘arrogance’, ‘privilege’

or just plain old ‘idiocy’.


Boris Mann's Blog

Picnic in the park yesterday. Smashed cucumber & pasta salad recipes on @ATBRecipes.

Friday, 04. September 2020

reb00ted

Crowdsourced symptom data for faster COVID-19 diagnosis?

There’s a big data challenge going on. $50,000 first prize. Seems like a good idea. Intro webinar on Tuesday.

There’s a big data challenge going on. $50,000 first prize. Seems like a good idea. Intro webinar on Tuesday.


Ben Werdmüller

In support of Miranda

On her podcast, Miranda Pacchiana has opened up about the aftermath of her lawsuit against her brother Adam Savage for sexual abuse. Miranda is my cousin, and I believe her. I think her statement is an act of bravery; the impact on her has been significant, which she discusses in the episode. I know some of you know Adam, have been employed by him, or have friends and family who do. It's a diff

On her podcast, Miranda Pacchiana has opened up about the aftermath of her lawsuit against her brother Adam Savage for sexual abuse.

Miranda is my cousin, and I believe her. I think her statement is an act of bravery; the impact on her has been significant, which she discusses in the episode. I know some of you know Adam, have been employed by him, or have friends and family who do. It's a difficult thing to think about, let alone discuss. All I ask is that you listen to her story.

Thursday, 03. September 2020

Simon Willison

Weeknotes: airtable-export, generating screenshots in GitHub Actions, Dogsheep!

This week I figured out how to populate Datasette from Airtable, wrote code to generate social media preview card page screenshots using Puppeteer, and made a big breakthrough with my Dogsheep project. airtable-export I wrote about Rocky Beaches in my weeknotes two weeks ago. It's a new website built by Natalie Downe that showcases great places to go rockpooling (tidepooling in American Englis

This week I figured out how to populate Datasette from Airtable, wrote code to generate social media preview card page screenshots using Puppeteer, and made a big breakthrough with my Dogsheep project.

airtable-export

I wrote about Rocky Beaches in my weeknotes two weeks ago. It's a new website built by Natalie Downe that showcases great places to go rockpooling (tidepooling in American English), mixing in tide data from NOAA and species sighting data from iNaturalist.

Rocky Beaches is powered by Datasette, using a GitHub Actions workflow that builds the site's underlying SQLite database using API calls and YAML data stored in the GitHub repository.

Natalie wanted to use Airtable to maintain the structured data for the site, rather than hand-editing a YAML file. So I built airtable-export, a command-line script for sucking down all of the data from an Airtable instance and writing it to disk as YAML or JSON.

You run it like this:

airtable-export out/ mybaseid table1 table2 --key=key

This will create a folder called out/ with a .yml file for each of the tables.

Sadly the Airtable API doesn't yet provide a mechanism to list all of the tables in a database (a long-running feature request) so you have to list the tables yourself.

We're now running that command as part of the Rocky Beaches build script, and committing the latest version of the YAML file back to the GitHub repo (thus gaining a full change history for that data).

Social media cards for my TILs

I really like social media cards - og:image HTML meta attributes for Facebook and twitter:image for Twitter. I wanted them for articles on my TIL website since I often share those via Twitter.

One catch: my TILs aren't very image heavy. So I decided to generate screenshots of the pages and use those as the 2x1 social media card images.

The best way I know of programmatically generating screenshots is to use Puppeteer, a Node.js library for automating a headless instance of the Chrome browser that is maintained by the Chrome DevTools team.

My first attempt was to run Puppeteer in an AWS Lambda function on Vercel. I remembered seeing an example of how to do this in the Vercel documentation a few years ago. The example isn't there any more, but I found the original pull request that introduced it.

Since the example was MIT licensed I created my own fork at simonw/puppeteer-screenshot and updated it to work with the latest Chrome.

It's pretty resource intensive, so I also added a secret ?key= mechanism so only my own automation code could call my instance running on Vercel.

I needed to store the generated screenshots somewhere. They're pretty small - on the order of 60KB each - so I decided to store them in my SQLite database itself and use my datasette-media plugin (see Fun with binary data and SQLite) to serve them up.
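
A rough sketch of storing those binary screenshots with the sqlite-utils Python library (the table and column names here are my own illustration, not necessarily what the site uses):

```python
import sqlite_utils

db = sqlite_utils.Database("til.db")

def save_screenshot(path, png_bytes):
    # bytes values are stored as BLOBs; keyed by page path so re-running
    # the build replaces the old screenshot rather than duplicating it
    db["screenshots"].insert(
        {"path": path, "png": png_bytes},
        pk="path",
        replace=True,
    )
```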

This worked! Until it didn't... I ran into a showstopper bug when I realized that the screenshot process relies on the page being live on the site... but when a new article is added it's not yet live when the build process runs, so the generated screenshot is of the 404 page.

So I reworked it to generate the screenshots inside the GitHub Action as part of the build script, using puppeteer-cli.

My generate_screenshots.py script handles this, by first shelling out to datasette --get to render the HTML for the page, then running puppeteer to generate the screenshot. Relevant code:

def png_for_path(path):
    # Path is e.g. /til/til/python_debug-click-with-pdb.md
    page_html = str(TMP_PATH / "generate-screenshots-page.html")
    # Use datasette to generate HTML
    proc = subprocess.run(["datasette", ".", "--get", path], capture_output=True)
    open(page_html, "wb").write(proc.stdout)
    # Now use puppeteer screenshot to generate a PNG
    proc2 = subprocess.run(
        [
            "puppeteer",
            "screenshot",
            page_html,
            "--viewport",
            "800x400",
            "--full-page=false",
        ],
        capture_output=True,
    )
    png_bytes = proc2.stdout
    return png_bytes

This worked great! Except for one thing... the site is hosted on Vercel, and Vercel has a 5MB response size limit.

Every time my GitHub build script runs it downloads the previous SQLite database file, so it can avoid regenerating screenshots and HTML for pages that haven't changed.

The addition of the binary screenshots drove the size of the SQLite database over 5MB, so the part of my script that retrieved the previous database no longer worked.

I needed a reliable way to store that 5MB (and probably eventually 10-50MB) database file in between runs of my action.

The best place to put this would be an S3 bucket, but I find the process of setting up IAM permissions for access to a new bucket so infuriating that I couldn't bring myself to do it.

So... I created a new dedicated GitHub repository, simonw/til-db, and updated my action to store the binary file in that repo - using a force push so the repo doesn't need to maintain unnecessary version history of the binary asset.

This is an abomination of a hack, and it made me cackle a lot. I tweeted about it and got the suggestion to try Git LFS instead, which would definitely be a more appropriate way to solve this problem.

Rendering Markdown

I write my blog entries in Markdown and transform them into HTML before I post them on my blog. Some day I'll teach my blog to render Markdown itself, but so far I've got by through copying and pasting into Markdown tools.

My favourite Markdown flavour is GitHub's, which adds a bunch of useful capabilities - most notably the ability to apply syntax highlighting. GitHub expose an API that applies their Markdown formatter and returns the resulting HTML.

I built myself a quick and scrappy tool in JavaScript that sends Markdown through their API and then applies a few DOM manipulations to clean up what comes back. It was a nice opportunity to write some modern vanilla JavaScript using fetch():

async function render(markdown) {
    return (await fetch('https://api.github.com/markdown', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({'mode': 'markdown', 'text': markdown})
    })).text();
}

const button = document.getElementsByTagName('button')[0];
const output = document.getElementById('output');
const preview = document.getElementById('preview');

button.addEventListener('click', async function() {
    const rendered = await render(input.value);
    output.value = rendered;
    preview.innerHTML = rendered;
});

Dogsheep Beta

My most exciting project this week was getting out the first working version of Dogsheep Beta - the search engine that ties together results from my Dogsheep family of tools for personal analytics.

I'm giving a talk about this tonight at PyCon Australia: Build your own data warehouse for personal analytics with SQLite and Datasette. I'll be writing up detailed notes in the next few days, so watch this space.

TIL this week

- Converting Airtable JSON for use with sqlite-utils using jq
- Minifying JavaScript with npx uglify-js
- Start a server in a subprocess during a pytest session
- Looping over comma-separated values in Bash
- Using the gcloud run services list command
- Debugging a Click application using pdb

Releases this week

- dogsheep-beta 0.4.1 - 2020-09-03
- dogsheep-beta 0.4 - 2020-09-03
- dogsheep-beta 0.4a1 - 2020-09-03
- dogsheep-beta 0.4a0 - 2020-09-03
- dogsheep-beta 0.3 - 2020-09-02
- dogsheep-beta 0.2 - 2020-09-01
- dogsheep-beta 0.1 - 2020-09-01
- dogsheep-beta 0.1a2 - 2020-09-01
- dogsheep-beta 0.1a - 2020-09-01
- airtable-export 0.4 - 2020-08-30
- datasette-yaml 0.1a - 2020-08-29
- airtable-export 0.3.1 - 2020-08-29
- airtable-export 0.3 - 2020-08-29
- airtable-export 0.2 - 2020-08-29
- airtable-export 0.1.1 - 2020-08-29
- airtable-export 0.1 - 2020-08-29
- datasette 0.49a0 - 2020-08-28
- sqlite-utils 2.16.1 - 2020-08-28

Ben Werdmüller

The generational trauma of 2020

I've noticed more blog posts on my feeds talking about mental health, and more tweets talking about anxiety in the face of this year's challenges. I'm certainly feeling it too. This week I've been building a contingency plan for what happens if I have to take a leave of absence from work because of my mother's health, which has been an emotionally difficult task on top of an already emotionally

I've noticed more blog posts on my feeds talking about mental health, and more tweets talking about anxiety in the face of this year's challenges. I'm certainly feeling it too. This week I've been building a contingency plan for what happens if I have to take a leave of absence from work because of my mother's health, which has been an emotionally difficult task on top of an already emotionally challenging context.

2020 as a whole is a collective trauma. The thing about serious trauma is that it ripples. Its effects are felt in the lives of the people who lived through it; not just as they live through it, but forever. And then it's felt in their children. And finally, in their children.

My father is one of the youngest survivors of the Japanese concentration camps in Indonesia. He and his older siblings were kept alive by my grandmother. As a 12 year old, my aunt snuck out of the camp and swam through the sewers to find food for them to eat. My grandmother would gather snails and secretly cook them. Around them all - my grandmother, my aunts, my toddler father - was death and brutality. People in the camp were routinely tortured and murdered.

My grandmother wailed in her sleep every night until the day she died. The trauma certainly affected her children; my father has suffered from its effects in ways that he only became consciously aware of later in life. In turn, his anxieties affected his children - partially through the effect of his actions, but there is also significant evidence that trauma can be passed down epigenetically. My dad is both younger than most of his siblings and had children later in life, but I've seen the effects of this trauma spread to the fourth and fifth generations in my aunts' branches of the family.

The implications for families that have been split up through draconian immigration policies, or suffered at the hands of trigger-happy police, or been caught by a racist criminal justice system are obvious. The trauma of poverty, too, creates epigenetic changes that span generations. But during this terrible year, more of us than ever before have seen our relatives die or had our homes destroyed at the hands of natural disasters. We've lived under a kind of fear we thought was a thing of the past.

So, no wonder we're all feeling kind of terrible. The thing is, it won't just be for the moment. The impact of 2020 - and, yes, I'm afraid to say, 2021 too - is likely to be with many of us for the rest of our lives. If we're not careful, it'll be with our children, too, and their children.

The good news is that these traumatic effects can be reversed. Exercise, intense learning, and anti-depressants can help. But that implies that we'll all need systemic help: mental wellness support and a far stronger social safety net. Without this support, the hidden effects of the pandemic (and everything else that's happened this year) may be with us for a very long time to come.


FACILELOGIN

What is Customer IAM (CIAM)?

Customer Identity and Access Management (CIAM) over the time has become a bit of an overloaded term. There are multiple articles, blogs, analyst reports explaining what CIAM is and defining it in different ways. The objective of this blog is to provide my perspective of CIAM in one-line definition. Before defining what it is, let’s be clear on why we need to worry about CIAM. Why CIAM? Tra

Customer Identity and Access Management (CIAM) has over time become a bit of an overloaded term. There are multiple articles, blogs, and analyst reports explaining what CIAM is and defining it in different ways. The objective of this blog is to provide my perspective on CIAM in a one-line definition.

Before defining what it is, let’s be clear on why we need to worry about CIAM.

Why CIAM?

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. CIAM is an emerging area within IAM, and an essential ingredient of the digital customer experience.

The rest of the blog is based on the above statement. I believe that's fair enough, and I haven't seen it questioned much, so we can safely assume it is a well-accepted statement of the objective of CIAM. It might not always be put in the same words, but many who talk about CIAM share a similar view.

Gartner Definition

In one of its reports, Gartner defines CIAM in a lighter way as:

In my view that definition is not strong enough and does not carry enough depth to reach the objective of CIAM. CIAM is more than managing customer identities in a traditional way. It needs to be the facilitator that leverages identity data to catalyze business growth.

More CIAM Definitions

If you Google, you can find more definitions of CIAM. Not all of them are wrong, but in my opinion none of them puts enough weight on the objective of CIAM. Here I list a few of them; then again, these are simply different viewpoints, and none of them is wrong or bad.

Customer-focused IAM

Rather than calling CIAM "managing customer identities", I would like to call it customer-focused IAM. IAM is a well-defined term. As per Gartner:

The customer-focused IAM adds a lot of depth to the definition of IAM. For example, unlike traditional IAM, when you focus on customers you are probably working with millions of impatient users, who get annoyed by filling in lengthy forms and won't wait even two seconds to log into a system. At the smallest glitch in your system, they will take to social media and make a big buzz. The slightest leak of customer information could take a big slice off your share price.

Yahoo!, for example, suffered a series of data breaches a few years back that exposed the PII of more than 1 billion users. That cost the company $350 million: it had to lower the sale price of its email and other digital services, which it sold to Verizon, from $4.83 billion to $4.48 billion to account for the potential backlash from the data breaches.

Beyond Customer-focused IAM

Calling CIAM customer-focused IAM still does not put enough emphasis on the fact that it should catalyze business growth. Unlike traditional IAM, a CIAM system should have the capability to integrate with customer relationship management (CRM) systems, marketing platforms, e-commerce platforms, content management systems (CMS), data management platforms, and many more. A customer-focused IAM system with no business integrations adds little value in terms of the business growth that we expect from a CIAM solution.

What is CIAM?

Customer-focused IAM does not necessarily mean you only manage customer identities; that's why I prefer "customer-focused IAM" over "managing customer identities". In a typical CIAM solution, in addition to direct customers, you also need to manage the identities of employees who have direct access to the CIAM solution, or integrate with an IAM system that manages employee identities; the latter is the preferred option. Also, not all CIAM solutions are B2C (business-to-consumer); a CIAM solution can be B2B (business-to-business) or B2B2C (business-to-business-to-consumer) as well.

What is Customer IAM (CIAM)? was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


Boris Mann's Blog

Hello #pickling friends! A quick stovetop pickle with golden beets.

Hello #pickling friends! A quick stovetop pickle with golden beets.


Simon Willison

Render Markdown tool

Render Markdown tool I wrote a quick JavaScript tool for rendering Markdown via the GitHub Markdown API - which includes all of their clever extensions like tables and syntax highlighting - and then stripping out some extraneous HTML to give me back the format I like using for my blog posts. Via @simonw

Render Markdown tool

I wrote a quick JavaScript tool for rendering Markdown via the GitHub Markdown API - which includes all of their clever extensions like tables and syntax highlighting - and then stripping out some extraneous HTML to give me back the format I like using for my blog posts.

Via @simonw
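
For anyone curious about the underlying call, here is a minimal Python sketch of the same round trip against the GitHub Markdown API. The actual tool runs in the browser as JavaScript and also strips some extraneous HTML afterwards; that clean-up step is omitted here, and the sample Markdown string is purely for illustration.

import requests

def render_markdown(markdown_text):
    # POST raw Markdown to GitHub's Markdown rendering API.
    # "gfm" mode enables GitHub-flavoured extensions such as tables.
    # Unauthenticated requests work but are rate-limited.
    response = requests.post(
        "https://api.github.com/markdown",
        json={"text": markdown_text, "mode": "gfm"},
    )
    response.raise_for_status()
    return response.text  # the rendered HTML

print(render_markdown("# Hello\n\n| a | b |\n|---|---|\n| 1 | 2 |"))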

Wednesday, 02. September 2020

reb00ted

The fire form of global warming

A before and after view of California’s oldest state park, one of my favorite places of all time with all its humongous redwood trees, waterfalls, creeks and hills. Not much left of it. Credit: Save the Redwoods, Sandip Bhattcharya and CAL FIRE.

A before and after view of California’s oldest state park, one of my favorite places of all time with all its humongous redwood trees, waterfalls, creeks and hills. Not much left of it.

Credit: Save the Redwoods, Sandip Bhattcharya and CAL FIRE.


Boris Mann's Blog

Fall 2020 Chromebooks for back to school in Canada

I always work across multiple machines and operating systems. I wrote up my laptop choices back in Feb 2020, and I ended up dipping my toes back into Windows with a laptop. Before that, I had bought two Chromebooks in a row, and I still think they are some of the best value. Ryan recently asked me a question about Chromebooks: I’m thinking of getting my 12 year old a Chromebook for back to

I always work across multiple machines and operating systems. I wrote up my laptop choices back in Feb 2020, and I ended up dipping my toes back into Windows with a laptop. Before that, I had bought two Chromebooks in a row, and I still think they are some of the best value.

Ryan recently asked me a question about Chromebooks:

I’m thinking of getting my 12 year old a Chromebook for back to school and wanted your input. For context, most of what they’ll be doing is google docs driven and she’s not a gamer.

Yeah, the Chromebooks are solid. And now that you can put Linux on them as a built in feature, there’s lots more that can be done with it.

You can also game on it: with streaming services like Stadia or Geforce Now, or Steam and some Linux games.

I usually follow the Wirecutter recommendations – I have personal experience with buying two ASUS Chromebooks and have been very impressed.

Looks like the ASUS Flip C434 is available at London Drugs for $700CAD.

The Lenovo model that currently has Wirecutter's top rating looks to be good and is available for about $560CAD.

If you see Chromebooks for less than $500CAD – they are usually way too underpowered.

And then Greg emailed me, so I’m turning this whole thing into a little Chromebook FAQ:

We’re wanting/needing to get something for my kids to use for school, and since they use the GSuite at school a Chromebook seems like a good idea. However, they still want to be able to play Minecraft.

Is there a site you would recommend for me to go to, in order to figure out what Chromebook to order? What Chromebook are you running?

Today, you don’t need to “dual boot” into Linux any more. Like WSL for Windows, you run any flavour of Linux that you want and you can run graphical apps like Minecraft no problem. This site on installing Minecraft for Chromebooks has too many ads on it, but basically – Linux stuff installs directly on Chromebooks these days.

Having a really solid browser environment, plus basic apps you can install, makes for a good stable system that is very inexpensive for what you get. You need to pay more for a Windows or Mac laptop – approximately 2-4 times the price of a solid Chromebook – to get equivalent performance.

My current Chromebook is the ASUS C434, but as I said at the top, I’m also switching between an ASUS Windows laptop and a Mac desktop.

I still wouldn’t pay more than $1000CAD for a Chromebook (and even that is pretty high).


Simon Willison

The "await me maybe" pattern for Python asyncio

I've identified a pattern for handling potentially-asynchronous callback functions in Python which I'm calling the "await me maybe" pattern. It works by letting you return a value, a callable function that returns a value OR an awaitable function that returns that value. Background Datasette has been built on top of Python 3 asyncio from the very start - initially using Sanic, and as-of Datase

I've identified a pattern for handling potentially-asynchronous callback functions in Python which I'm calling the "await me maybe" pattern. It works by letting you return a value, a callable function that returns a value OR an awaitable function that returns that value.

Background

Datasette has been built on top of Python 3 asyncio from the very start - initially using Sanic, and as-of Datasette 0.29 using a custom mini-framework on top of ASGI 3, usually running under Uvicorn.

Datasette also has a plugin system, built on top of Pluggy.

Pluggy is a beautifully designed mechanism for plugins. It works based on decorated functions, which are called at various points by Datasette itself.

A simple plugin that injects a new JavaScript file into a page could look like this:

from datasette import hookimpl

@hookimpl
def extra_js_urls():
    return [
        "https://code.jquery.com/jquery-3.5.1.min.js"
    ]

Datasette can then gather together all of the extra JavaScript URLs that should be injected into a page by running this code:

urls = []
for url in pm.hook.extra_js_urls(
    template=template.name,
    datasette=datasette,
):
    urls.extend(url)

What's up with the template= and datasette= parameters that are passed here?

Pluggy implements a form of dependency injection, where plugin hook functions can optionally list additional parameters that they would like to have access to.

The above simple example didn't need any extra information. But imagine a plugin that only wants to inject jQuery on the table.html template page:

@hookimpl
def extra_js_urls(template):
    if template == "table.html":
        return [
            "https://code.jquery.com/jquery-3.5.1.min.js"
        ]

Datasette actually provides several more optional arguments for these plugin functions - see the plugin hooks documentation for full details.

What if we need to await something?

The datasette object that can be passed to plugin hooks is special: it provides an object that can be used for the following:

Executing SQL against databases connected to Datasette
Looking up Datasette metadata and configuration settings, including plugin configuration
Rendering templates using the template environment configured by Datasette
Performing checks against the Datasette permissions system

Here's the problem: many of those methods on Datasette are awaitable - await datasette.render_template(...) for example. But Pluggy is built around regular non-awaitable Python functions.

If my def extra_js_urls() plugin function needs to execute a SQL query to decide what JavaScript to include, it won't be able to - because you can't use await inside a regular Python function.

That's where the "await me maybe" pattern comes in.

The basic idea is that a function can return a value, OR a function-that-returns-a-value, OR an awaitable-function-that-returns-a-value.

If we want our extra_js_urls(datasette) hook to execute a SQL query in order to decide what URLs to return, it can look like this:

@hookimpl
def extra_js_urls(datasette):
    async def inner():
        db = datasette.get_database()
        results = await db.execute("select url from js_files")
        return [r[0] for r in results]

    return inner

Note that Python lets you define an async def inner() function inside the body of a regular function, which is what we're doing here.

The code that calls the plugin hook in Datasette can then look like this:

urls = []
for url in pm.hook.extra_js_urls(
    template=template.name,
    datasette=datasette,
):
    if callable(url):
        url = url()
    if asyncio.iscoroutine(url):
        url = await url
    urls.append(url)

I use this pattern in a bunch of different places in Datasette, so today I refactored that into a utility function:

import asyncio

async def await_me_maybe(value):
    if callable(value):
        value = value()
    if asyncio.iscoroutine(value):
        value = await value
    return value

This commit includes a bunch of examples where this function is called, for example this code which gathers extra body scripts to be included at the bottom of the page:

body_scripts = []
for extra_script in pm.hook.extra_body_script(
    template=template.name,
    database=context.get("database"),
    table=context.get("table"),
    columns=context.get("columns"),
    view_name=view_name,
    request=request,
    datasette=self,
):
    extra_script = await await_me_maybe(extra_script)
    body_scripts.append(Markup(extra_script))
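
To see the pattern in isolation, here is a small self-contained sketch that exercises await_me_maybe with all three accepted shapes. The await_me_maybe function is copied from above; the three hook-style functions and the URLs they return are made up purely for illustration.

import asyncio

async def await_me_maybe(value):
    if callable(value):
        value = value()
    if asyncio.iscoroutine(value):
        value = await value
    return value

def returns_value():
    # a plain value
    return "https://example.com/a.js"

def returns_callable():
    # a callable that returns a value
    return lambda: "https://example.com/b.js"

def returns_awaitable():
    # an awaitable that returns a value
    async def inner():
        return "https://example.com/c.js"
    return inner()

async def demo():
    for hook in (returns_value, returns_callable, returns_awaitable):
        # all three shapes are resolved the same way
        print(await await_me_maybe(hook()))

asyncio.run(demo())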

FACILELOGIN

Advanced Authentication Flows with Identity Server

WSO2 Identity Server ships with more than 35 connectors to support different authentication requirements. If you visit store.wso2.com, you can find all of them, and download and install into the product. Just like the product, all these connectors too, are released under the open source Apache 2.0 license. Identity Server supports passwordless authentication with FIDO 2.0 — and mobile push b

WSO2 Identity Server ships with more than 35 connectors to support different authentication requirements. If you visit store.wso2.com, you can find all of them, and download and install them into the product. Just like the product, all of these connectors are released under the open source Apache 2.0 license.

Identity Server supports passwordless authentication with FIDO 2.0, and mobile push based authentication with Duo and mePin. Also, we have partnered with Veridium and Aware biometrics to support biometric authentication. In addition to that, Identity Server also supports RSA SecurID; TOTP, which you can use with the Google Authenticator mobile app; and OTP over SMS and email.

During a login flow, you can orchestrate between these authenticators by writing an adaptive authentication script in JavaScript. With that, you can define how you want to authenticate a user based on environmental attributes (e.g. any HTTP header, geo-location), user attributes and roles (e.g. admins always log in with MFA), user behaviors (e.g. number of failed login attempts, geo-velocity), a risk score, and more.

In the above video, I discuss a set of use cases and show how you can apply adaptive authentication policies to address advanced authentication requirements. If you'd like to know how to set things up from scratch, please join our Slack channel for help.

Advanced Authentication Flows with Identity Server was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


John Philpin : Lifestream

images.philpin.com seems a more appropriate URL for my se

images.philpin.com seems a more appropriate URL for my second blot site than pictures.philpin.com

images.philpin.com seems a more appropriate URL for my second blot site than pictures.philpin.com

Tuesday, 01. September 2020

Simon Willison

Quoting Sarah Milstein

Simply put, if you’re in a position of power at work, you’re unlikely to see workplace harassment in front of you. That’s because harassment and bullying are attempts to exert power over people with less of it. People who behave improperly don’t tend to do so with people they perceive as having power already. — Sarah Milstein

Simply put, if you’re in a position of power at work, you’re unlikely to see workplace harassment in front of you. That’s because harassment and bullying are attempts to exert power over people with less of it. People who behave improperly don’t tend to do so with people they perceive as having power already.

Sarah Milstein


Ben Werdmüller

Reading, watching, playing, using: August 2020

This is my monthly roundup of the tech and media I consumed and found interesting. Here's my list for August. Books Educated, by Tara Westover. I realized about halfway through that the abuse that seems to ahave punctuated Westover's life were not going to stop. This is a brave story, although her unwillingness to condemn the church or the core of her family's beliefs leave us to join some of

This is my monthly roundup of the tech and media I consumed and found interesting. Here's my list for August.

Books

Educated, by Tara Westover. I realized about halfway through that the abuse that seems to have punctuated Westover's life was not going to stop. This is a brave story, although her unwillingness to condemn the church or the core of her family's beliefs leaves us to join some of the dots ourselves.

Streaming

Nice White Parents. A limited run podcast by the studio behind Serial, about the relationship between wealthy white parents and the public schools they claim to support. Eye-opening.

Mrs America. The story of the Equal Rights Amendment, rendered as a gripping, human story. There's no doubt that the feminist pro-ERA characters are in the right, but it's worth reading Gloria Steinem and Eleanor Smeal's critical editorial about the series. It's certainly true that the financial forces backing the Stop ERA movement are underplayed.

Lovecraft Country. Just spectacular. I'm only two episodes in, but I was hooked from the first minute.

Arlo Parks. I've become absolutely addicted to her music. Perfect for long walks and late nights by myself.

Notable Articles Black Lives Matter

Together, You Can Redeem the Soul of Our Nation. John Lewis wrote an editorial to be published upon his death. If you click through to just one article in this post, please make it this one.

Pollution Is Killing Black Americans. This Community Fought Back. "Black communities like Grays Ferry shoulder a disproportionate burden of the nation’s pollution — from foul water in Flint, Mich., to dangerous chemicals that have poisoned a corridor of Louisiana known as Cancer Alley — which scientists and policymakers have known for decades."

Louisiana Supreme Court upholds Black man's life sentence for stealing hedge clippers more than 20 years ago. "A Black Louisiana man will spend the rest of his life in prison for stealing hedge clippers, after the Louisiana Supreme Court denied his request to have his sentence overturned last week." Only one judge - the only Black person on the court - dissented, pointing out that the sentence was grossly disproportionate to the crime.

Black troops were welcome in Britain, but Jim Crow wasn’t: the race riot of one night in June 1943. "The town did not share the US Army’s segregationist attitudes. According to the author Anthony Burgess, who spent time in Bamber Bridge during the war, when US military authorities demanded that the town’s pubs impose a colour bar, the landlords responded with signs that read: “Black Troops Only”."

Revisiting an American Town Where Black People Weren’t Welcome After Dark. I'm ashamed to say that sundown towns were new to me as a concept.

‘Were your grandparents slaves?’ On the very white-dominated world of venture funding.

The Pandemic

Children May Carry Coronavirus at High Levels, Study Finds. "Infected children have at least as much of the coronavirus in their noses and throats as infected adults, according to the research. Indeed, children younger than age 5 may host up to 100 times as much of the virus in the upper respiratory tract as adults, the authors found."

A Covid Patient Goes Home After a Rare Double Lung Transplant. "The surgery is considered a desperate measure reserved for people with fatal, irreversible lung damage. Doctors do not want to remove a person’s lungs if there is any chance they will heal." I'm writing this from my parents' house, where I'm supporting my mother in the aftermath of her double lung transplant. You don't want one. Please, please, please wear a mask.

How the Pandemic Defeated America. "Since the pandemic began, I have spoken with more than 100 experts in a variety of fields. I’ve learned that almost everything that went wrong with America’s response to the pandemic was predictable and preventable. A sluggish response by a government denuded of expertise allowed the coronavirus to gain a foothold." They need to go.

In A Twist On Loyalty Programs, Emirates Is Promising Travelers A Free Funeral If Infected With Covid. Innovative.

We thought it was just a respiratory virus. UCSF's report shows damage to the heart, gut, skin and more. The virus may weaponize our own immune systems against us.

Secret Gyms And The Economics Of Prohibition. "What Evelyn uncovered can only be described as a speakeasy gym. You know, illegal, hush hush, like the underground bars during the Prohibition era. These underground gyms appear to be popping up everywhere, from LA to New Jersey."

Trump's America

The cost of becoming a U.S. citizen just went up drastically. And asylum is no longer free. "The Trump administration announced on Friday an exorbitant increase in fees for some of the most common immigration procedures, including an 81% increase in the cost of U.S. citizenship for naturalization. It will also now charge asylum-seekers, which is an unprecedented move."

How the Media Could Get the Election Story Wrong. We shouldn't expect an election night this year. It'll take weeks, and there's a real possibility the election will stretch until January. But the media is set up for a big announcement.

A bipartisan group secretly gathered to game out a contested Trump-Biden election. It wasn’t pretty. Unless Biden has a landslide victory - which, to be honest, he probably won't - there may be violence on the streets and a political stalemate. In a year that's been plenty nasty already, we shouldn't expect this to go anything close to well.

With their visas in limbo, journalists at Voice of America worry that they’ll be thrown out of America. "VOA has long employed journalists who are citizens of other countries because they offer specific knowledge and expertise, including fluency in English and one or more of the 47 languages in which VOA broadcasts. In addition to their language skills, they are steeped in the history, culture and recent politics of the countries they report on, and they often have hard-to-replace sources and contacts among dissident communities." And now their visas are in jeopardy and they worry about having to leave - some to oppressive regimes.

The Truth Is Paywalled But The Lies Are Free. Some of the best journalism in the country is paywalled, offered up to a limited, wealthy audience, but disinformation is available to all. The effects of this disparity of information may be profound. (I like patronage models like The Guardian's.)

Trump Might Try to Postpone the Election. That’s Unconstitutional. I just have no way to gauge if this is something that is actually going to happen or if we're all just engaging in hyperbole. Reality just seems so spongey at this point. Maybe both?

The myth of unemployment benefits depressing work. "If anything, research to date suggests the federal benefit supplement has boosted macroeconomic activity and, therefore, likely supported hiring. That’s because these benefits have supported consumer spending, which in turn helps retailers, landlords and other businesses keep workers on their own payrolls." Benefits are not some drag on productivity. They boost the economy and help people in real need.

As election looms, a network of mysterious ‘pink slime’ local news outlets nearly triples in size. "The run-up to the 2020 November elections in the US has produced new networks of shadowy, politically backed “local news websites” designed to promote partisan talking points and collect user data. In December 2019, the Tow Center for Digital Journalism reported on an intricately linked network of 450 sites purporting to be local or business news publications. New research from the Tow Center shows the size of that network has increased almost threefold over the course of 2020, to over 1,200 sites."

What ARGs Can Teach Us About QAnon. "QAnon is not an ARG. It’s a dangerous conspiracy theory, and there are lots of ways of understanding conspiracy theories without ARGs. But QAnon pushes the same buttons that ARGs do, whether by intention or by coincidence. In both cases, “do your research” leads curious onlookers to a cornucopia of brain-tingling information. In other words, maybe QAnon is… fun?" Also see Dan Hon's excellent deep-dive exploration of this idea.

Ronald Reagan Wasn’t the Good Guy President Anti-Trump Republicans Want You to Believe In. Ronald Reagan was a terrible President. I love that this is just the latest in a series of really high quality explorations in Teen Vogue.

The Unraveling of America. Wade Davis in Rolling Stone on the situation we find ourselves in. Not just the proximal one, but the existential situation that's been building for decades.

'Christianity Will Have Power'. "Evangelicals did not support Mr. Trump in spite of who he is. They supported him because of who he is, and because of who they are. He is their protector, the bully who is on their side, the one who offered safety amid their fears that their country as they know it, and their place in it, is changing, and changing quickly. White straight married couples with children who go to church regularly are no longer the American mainstream. An entire way of life, one in which their values were dominant, could be headed for extinction. And Mr. Trump offered to restore them to power, as though they have not been in power all along."

Noam Chomsky wants you to vote for Joe Biden and then haunt his dreams. Sold.

U.S. Government Contractor Embedded Software in Apps to Track Phones. "A small U.S. company with ties to the U.S. defense and intelligence communities has embedded its software in numerous mobile apps, allowing it to track the movements of hundreds of millions of mobile phones world-wide, according to interviews and documents reviewed by The Wall Street Journal."

Postal Service warns 46 states their voters could be disenfranchised by delayed mail-in ballots. "Anticipating an avalanche of absentee ballots, the U.S. Postal Service recently sent detailed letters to 46 states and D.C. warning that it cannot guarantee all ballots cast by mail for the November election will arrive in time to be counted — adding another layer of uncertainty ahead of the high-stakes presidential contest."

Society and Culture

How a Cheese Goes Extinct. "There are countless ways for a cheese to disappear. Some, like Holbrook’s, die with their makers. Others fall out of favor because they’re simply not good: one extinct Suffolk cheese, “stony-hard” because it was made only with skimmed milk, was so notoriously bad that, in 1825, the Hampshire Chronicle reported that one ship’s cargo of grindstones was eaten by rats while the neighboring haul of Suffolk cheese escaped untouched."

The Global God Divide. I'm on Team Godless. But 44% of Americans say you need to believe in God to be moral.

Indian Matchmaking Exposes the Easy Acceptance of Caste. "The pervasiveness of caste in Indian communities, even beyond the ambit of arranged marriages, has dangerous consequences for those of us born into “lower” castes."

Lilly Wachowski finally confirms that, yes, The Matrix is an allegory for the trans experience. I think this is super-cool.

Lorenzo Wilson Milam, Guru of Community Radio, Is Dead at 86. What an inspiring human being.

Bat Boy Lives! An Oral History of Weekly World News. I used to delight in seeing Weekly World News headlines when I traveled to the US. This history was fascinating to me.

‘Bel-Air’: Drama Series Take On ‘The Fresh Prince Of Bel-Air’ From Morgan Cooper & Westbrook Studios Heats Up Streaming Marketplace. I cannot overstate how amazing this is.

To the future occupants of my office at the MIT Media Lab. "He was very happy to hear from the current resident of our office, and explained that it should be no problem to get the window up and running. I’d need to set up a dedicated Linux box and download some Python to control the climate logic, but it shouldn’t be that hard to debug. He was willing to help."

Dead plots. Charles Stross on plots no longer available to authors in 2020.

Living in Switzerland ruined me for America and its lousy work culture. I'm a Swiss citizen. Sometimes I think I just might make the jump ... But a lot of what's listed here are things I recognize from Scotland, too.

“This Plane Is Not Going to Land in Cairo”: Saudi Prince Sultan Boarded a Flight in Paris. Then, He Disappeared. Surreal, and evil.

Technology

Women Are Leading Latin America’s Fintech Revolution. "Including women entrepreneurs equally could boost the global economy by $5 trillion, and companies with women founders generate 2.5x more revenue for every dollar invested than male-led companies. They also have higher stock prices and a 35 percent higher return on investment."

TikTok and the Law: A Primer (In Case You Need to Explain Things to Your Teenager). Ageism aside, this is a pretty good primer on the legal issues behind the forced TikTok sale.

TikTok and Microsoft’s Clock. "If Microsoft is able to buy the service and users of just the countries listed, how are they going to separate them from the rest of TikTok? Understatement: this sounds extremely complicated. How long will it take to do that? Weeks? Months? Will it operate as-is until that’s completed?"

Ad Industry Launches New Organization, Will Push Google And Apple On Tracking. Pfffft. Good luck with that. Doc Searls, who I hugely respect, wrote a great post on the subject, too.

Can Killing Cookies Save Journalism? "Instead, the company found that ads served to users who opted out of cookies were bringing in as much or more money as ads served to users who opted in. The results were so strong that as of January 2020, NPO simply got rid of advertising cookies altogether. And rather than decline, its digital revenue is dramatically up, even after the economic shock of the coronavirus pandemic."

The Need for Speed, 23 Years Later. "The internet is faster, but websites aren't". Instead of embracing speed, we've layered our pages with more and more cruft.

The UX of LEGO Interface Panels. An exploration of UX ideas using LEGO as a cipher. Sure, why not. (It's delightful.)

Scientists rename human genes to stop Microsoft Excel from misreading them as dates. Oops.

Facebook Fired An Employee Who Collected Evidence Of Right-Wing Pages Getting Preferential Treatment. "Individuals that spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s “respectful communication policy.”" Inexcusable stuff.

Facebook algorithm found to 'actively promote' Holocaust denial. "Last Wednesday Facebook announced it was banning conspiracy theories about Jewish people “controlling the world”. However, it has been unwilling to categorise Holocaust denial as a form of hate speech, a stance that [the Institute for Strategic Dialogue] describe as a “conceptual blind spot”." Understating it somewhat, I would say.

To Head Off Regulators, Google Makes Certain Words Taboo. A surely losing battle to ensure that internal communications revealed during discovery don't suggest monopoly control.

Design Docs at Google. Heard here second-hand, but worth studying.

Judge Agrees to End Paramount Consent Decrees. Netflix and its cousins are now free to run movie theater chains.

Google's secret home security superpower: Your smart speaker with its always-on mics. Either super-cool or super-creepy, or maybe creepy-super-cool. Google Home has the ability to listen to your smoke alarm, or for broken glass, and then tell you about it.

tech brain. "what is tech brain? there are lots of things to point to, but if i had to come up with a thesis it would be that tech brain is a sort of constant willful reductionism: an addiction to easy answers combined with a wholesale cultural resistance to any kind of complexity."

Twitter launches new API as it tries to make amends with third-party developers. Once bitten ... but I really appreciate this new, non-advertising-centric direction.

RFC 8890: The Internet is for End Users. "As the Internet increasingly mediates essential functions in societies, it has unavoidably become profoundly political; it has helped people overthrow governments, revolutionize social orders, swing elections, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying that of others. All of this raises the question: For whom do we go through the pain of gathering rough consensus and writing running code?"

A Kenosha Militia Facebook Event Asking Attendees To Bring Weapons Was Reported 455 Times. Moderators Said It Didn’t Violate Any Rules. "In a companywide meeting on Thursday, Facebook CEO Mark Zuckerberg said that a militia page advocating for followers to bring weapons to an upcoming protest in Kenosha, Wisconsin, remained on the platform because of “an operational mistake.”" People are dead.


Boris Mann's Blog

Custom Bags and Shipping IP vs Products

I’ve just ordered myself a custom Timbuk2 messenger bag. Custom? Yes, custom: you pick and choose fabrics and colours and various other options. This is mine. I have had a great red/gray reversible messenger bag for many years that my sister Gaby gave me. First one of the inside clips broke so it needed to stay gray, now the outside closing clip broke. I still use it while walking, but op

I’ve just ordered myself a custom Timbuk2 messenger bag.

Custom? Yes, custom: you pick and choose fabrics and colours and various other options. This is mine.

I have had a great red/gray reversible messenger bag for many years that my sister Gaby gave me. First one of the inside clips broke so it needed to stay gray, now the outside closing clip broke.

I still use it while walking, but open bag flaps and biking don’t mix.

Asking the Internet about bags is hard, so I went to Wirecutter and they said Timbuk2.

When I was looking earlier the custom options weren’t as obvious, and I kind of wandered off. Did I want a bag the same as everybody else, especially in drab colours?

Which led me over to Freitag, which my current bag is sometimes confused for.

Colourful, unique, up-cycled bags? Yes! Well, except for two things:

1. Fashion is pricey — about $350CAD before shipping
2. Did I really want to ship up-cycled bags across the ocean?

For a well-made, relatively unique bag that I intend to keep for a decade, price isn’t the barrier.

But (2) got me thinking: can we ship IP rather than products?

Especially as the pandemic has people thinking about supply chains and supporting the local economy, what would it take to collaborate with someone locally in Vancouver and make a bag?

Vancouver has lots of apparel, outer wear, and other gear designers, so that’s a plus.

And in fact, when I asked around and shared the idea a bit, both of the people I talked to had a 1-degree connection to people who had made bags. And then I even found a 1-degree connection of my own who had made his own bag and was making more.

So let’s say I budget $400-$500 for a custom, one-of-a-kind bag. Could I find 10 or 12 other people locally who would be interested?

And once I did this, could I make the design (and sourcing of materials and manufacturing/sewing, etc) available for others to do in their local areas?

Yes, I could. And we might just have a little network of locally made goods. Never mind connected links of makers and supporters interested in this sort of thing.

Are you interested in the Vancouver custom bag experiment? Is there another custom thing you’d like to see created locally? Let me know!

More on co-op models and small business peers and shipping IP another time.

Monday, 31. August 2020

John Philpin : Lifestream

Caveat - not tried Superhuman BUT thought you all might want

Caveat - not tried Superhuman BUT thought you all might want to see what this new email solution is all about.

Caveat - not tried Superhuman BUT thought you all might want to see what this new email solution is all about.


Hands up if you think the .new domain is a good idea. Ok

Hands up if you think the .new domain is a good idea. Ok now leave your hand up if you trust google to run it. Thought so.

Hands up if you think the .new domain is a good idea.

Ok now leave your hand up if you trust google to run it.

Thought so.


Virtual Democracy

It’s time to eliminate patents in universities: Step up to Open.

“It is true that many people in science will scoff if you try to tell them about a scientific community in which ideas are treated as gifts. This has not been their experience at all. They will tell you a story about a stolen idea. So-and-so invented the famous such and such, but the man … Continue reading It’s time to eliminate patents in universities: Step up to Open.

Boris Mann's Blog

Bike Ride to Riley Park and Van Mural Fest at River District

Starting from the north end of Commercial Drive where we live, we did a grand loop and various adventures on the e-bike today. We started at Woodland and Venables, going up Woodland to the 10th Ave bike route and headed west. Federal Store, Quebec at 10th First stop was at The Federal Store, on 10th just before Ontario. Rachael’s tea latte cup had a lovely little stamp. My cappucci

Starting from the north end of Commercial Drive where we live, we did a grand loop and various adventures on the e-bike today.

We started at Woodland and Venables, going up Woodland to the 10th Ave bike route and headed west.

Federal Store, Quebec at 10th

First stop was at The Federal Store, on 10th just before Ontario.

Rachael’s tea latte cup had a lovely little stamp. My cappuccino just had “The Federal Store” stamp. Note: The Federal Store has Johnny’s Pops and is open most days until 6pm.

Then headed south on Ontario Street.

Main at 28th

Rachael went across the street to Jasmine Mediterranean Foods for fresh simit (a Turkish bagel) and picked up a few other things.

Riley Park (Ontario at 30th)

We lay in the grass at Riley Park. It was a gloriously sunny day, but with a bit of a breeze blowing. We had our coffee & tea drinks with simit and hung out for a bit.

Looking at the map, it seemed like a pretty straight forward route south on Ontario and then east on Kent to the River District, which is a new Van Mural Fest neighbourhood.

Rachael and I have never been down in that area at all, so seemed like a good adventure destination.

East Kent Avenue

We drove down Ontario until we hit Marine Drive. We passed by Coupland’s Infinite Tire and continued on a couple of blocks into an industrial area and a set of east - west train tracks. There’s a great bike path all along Kent Avenue.

It’s a bit confusing at times which side of the tracks you need to be on. Both are East Kent Avenue, labeled S or N. In some places there is a clearly marked and dedicated bike path, in others you’re going along the road.

Gladstone-Riverside Park

We stopped at Gladstone-Riverside Park – we made it to the Fraser River! Across the river is Richmond, so this is the southern edge of City of Vancouver.

There are a variety of both bike paths and walking paths, again on either a northern “road” path (which had bike paths, too) or mixed pedestrian and bike walkways through the parks and green space right next to the river. Several other bikers and e-scooter riders were stopping and looking at maps.

You can see in the map the dotted green paths along the river, as well as East Kent N and S. We stayed on the road (which had a separated two way bike path), as the park paths had a lot of pedestrians.

It was very interesting to see the mix of single family homes but really quite a lot of condos and townhouses either newly built or under construction. This is an area that I know nothing about. It was nice to have bike paths, but it seemed like where we were there was no transit at all, and no retail either. I guess all of that runs along Marine Way.

River District Murals

Here’s a custom Map label for roughly where the murals are (you can see it in the screenshot above on the right hand edge).

It’s in the middle of a ton of construction, and there are detours and fences that will guide you in a loop through the construction. If you’re following audio map directions, they will be very confused and insist that you u-turn :)

Rachael did a much better job capturing the murals and artists, so I’ll embed her Instagram here:


@bmann and I did an epic ride to the River District to see @vanmuralfest murals. A few are finished but most are still works in progress. They are all fantastic. It’s an area we’d never explored before along the Fraser River… . #getoutside #vancouvermuralfestival #vancouvermural #publicart #streetart #vancouverart #yvrart #vanmuralfest #explorevmf

A post shared by Rachael Ashe - Paper Artist (@rachael_ashe) on Aug 30, 2020 at 3:48pm PDT

The murals are great and many of the artists are at the end of their epic week or so of painting. Apparently they will be up in this temporary space for about a year or so – installed along fencing and the path that detours through the construction site.

The photos I captured below really were about me thinking about this construction and neighbourhood and some of the contrasts.

I asked this artist how they got here, and they said Uber. I continue to have questions about how we’re still building car oriented dense housing in Vancouver.

Caitlin Mcdonagh

Instagram/@northweststyles

This and the next murals were along an east-west path, fencing off a construction site with a few abandoned industrial buildings still remaining.

Rachael looks at an unfinished mural

The sketched out design looked super cute!

Instagram/@luke046_art

Overhead Power and Construction

You see old, original poles, skinny and weathered brown, and the new, larger, treated greenish poles. And this ridiculous jumble of overhead wires that we have all throughout Vancouver. Even in this new area, I guess they’re not burying lines and laying fibre for Internet?!?!?

Van Mural Fest River District Sign

Lists all the artists. More discarded building materials and some sort of hangar or industrial building in the fenced off area.

Looking west, Rachael looks at a mural

The fence to the north (right in this photo) blocks off rail tracks, then East Kent Ave N, with condos, townhouses going up on the north side of that avenue.

Blackberries, Barbed Wire, Power, and Construction

Kerr Avenue

After seeing all the murals, it was time to head home. There are a couple of different south / east routes, the most straightforward looked to be heading back along East Kent, and then going north on Kerr Avenue.

Well, it turns out Kerr is an incredibly steep climb from East Kent to Marine Drive, and then keeps going up with Fraserview Golf Course on one side and Everett Crowley Park on the other. The area at the top is called Champlain Heights :)

Anyway, got off the bike and walked it up a good chunk of this. Not enough power for getting both of us up here. There are a couple of epic hills in Vancouver, where a slightly more powerful motor would really help.

We stayed on Kerr until it turns into Rupert next to Killarney Park, then west on East 45th which is a bike route.

Then right and north on Earles Street, crossing Kingsway at the Purdy’s Chocolate factory.

Left and west on Vanness, which turns into BC Parkway path, and a little left along the edge of Slocan Park.

Right and north on Slocan, which is a long down slope. You’ll pass Banana Grove Market at 22nd, and keep going down and north until you hit South Grandview Highway.

Cross that, and you’ll hang a left and be heading west on North Grandview Highway, which is the Central Valley Greenway.

The CVG has been another common route for us, we’ll often head out to Burnaby Lake. But heading home and west we go to Lakewood, and then head right and south until we hit Adanac, and then turn left and west until we’re back at Commercial Drive and home.

We were at one bar of power by the time we made it home, so one of the longest trips we’ve made. Doing a rough map calculation shows about 30km. There are a couple of mega hills in there as well as just long continuous slopes that are rough with two of us on the bike.

Wesgroup

At home now, I’m doing a little research on this River District. This is a “planned community” being built by Wesgroup. Here’s one article:

The River District is located along the Fraser River, just off Marine Way, west of Boundary. It is a brand new, award-winning, master-planned community created by Wesgroup – a Vancouver based builder that has been building in the Lower Mainland for over 50 years. Wesgroup has spent the last decade carefully planning River District, the last waterfront development in the city. Spanning 130 acres, it is three times the size of Granville Island and will soon become a vibrant destination for living and shopping when complete in 2017…

It’s now 2020, and there are lots of new homes completed, but as you can see from my photos, lots of new buildings still going up.

Sunday, 30. August 2020

FACILELOGIN

Speedle+ for Authorization

Speedle+ is a general purpose authorization engine. It allows users to construct their policy model with user-friendly policy definition language and get authorization decision in milliseconds based on the policies. It is based on the Speedle open source project and maintained by previous Speedle maintainers. Speelde was born at Oracle couple of years back, but didn’t get much community adoption.

Speedle+ is a general purpose authorization engine. It allows users to construct their policy model with a user-friendly policy definition language and get authorization decisions in milliseconds based on the policies. It is based on the Speedle open source project and is maintained by previous Speedle maintainers. Speedle was born at Oracle a couple of years back, but didn't get much community adoption. Both Speedle and Speedle+ try to address a similar set of use cases to the Open Policy Agent (OPA).

In our 31st Silicon Valley IAM meetup, we invited William Cai, the Speedle project lead and the founder of Speedle+ to talk about Speedle+. It was a very informative session, followed by an insightful demo. Please find below the recording of the meetup.

Speedle+ for Authorization was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


John Philpin : Lifestream

Facebook’s Kenosha Guard Militia Event Was Reported 455 Time

Facebook’s Kenosha Guard Militia Event Was Reported 455 Times. Moderators Said It Was Fine. CEO Mark Zuckerberg said that the reason the militia page and an associated event remained online after a shooting that killed two people was due to “an operational mistake.”

Facebook’s Kenosha Guard Militia Event Was Reported 455 Times. Moderators Said It Was Fine.

CEO Mark Zuckerberg said that the reason the militia page and an associated event remained online after a shooting that killed two people was due to “an operational mistake.”


Ben Werdmüller

I like it a lot so far! ...

I like it a lot so far! But ask me again once I’ve actually written a whole book with it ;)

I like it a lot so far! But ask me again once I’ve actually written a whole book with it ;)

Saturday, 29. August 2020

Simon Willison

airtable-export

airtable-export I wrote a command-line utility for exporting data from Airtable and dumping it to disk as YAML, JSON or newline delimited JSON files. This means you can backup an Airtable database from a GitHub Action and get a commit history of changes made to your data.

airtable-export

I wrote a command-line utility for exporting data from Airtable and dumping it to disk as YAML, JSON or newline-delimited JSON files. This means you can back up an Airtable database from a GitHub Action and get a commit history of changes made to your data.


FACILELOGIN

Five Pillars of CIAM

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. Customer Identity and Access Management (CIAM) is a whole emerging area in the IAM, which is essentially an ingredient for digital customer experience. Today’s increasingl

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. Customer Identity and Access Management (CIAM) is an emerging area within IAM, and an essential ingredient of the digital customer experience.

Today’s increasingly sophisticated consumers now view digital interactions as the primary mechanism for interacting with brands and, consequently, expect deeper online relationships delivered simply and unobtrusively. CIAM turns customer data into Gold! Scalability, Security & Privacy, Usability, Extensibility, and APIs & Integration are the five pillars of CIAM. The following video talks about these five pillars in detail.

Five Pillars of CIAM was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


DustyCloud Brainstorms

If you can't tell people anything, can you show them?

The other day I made a sadpost on the fediverse that said: "simultaneously regularly feel like people don't take the directions I'm trying to push seriously enough and that I'm not worth taking seriously". (Similarly, I've also joked that "imposter syndrome and a Cassandra complex are a hell of a …

The other day I made a sadpost on the fediverse that said: "simultaneously regularly feel like people don't take the directions I'm trying to push seriously enough and that I'm not worth taking seriously". (Similarly, I've also joked that "imposter syndrome and a Cassandra complex are a hell of a combo" before.) I got a number of replies from people, both publicly and privately, and the general summary of most of them is, "We do care! The stuff you're working on seems really cool and valuable! I'll admit that I don't really know what it is you're talking about but it sounds important!" (Okay, and I just re-read, and it was only a portion of it that even said the latter part, but of course, what do I emphasize in my brain?) It was nice to hear that people care and are enthusiastic, and I did feel much better, but it did also kind of feel like confirmation that I'm not getting through to people completely either.

But then jfred made an interesting reply:

Yeah, that feels familiar. Impostor syndrome hits hard. You're definitely worth taking seriously though, and the projects you're working on are the most exciting ones I've been following.

As for people not taking the directions you're pushing seriously... I've felt the same at work, and I think part of it is that there's only so much one person can do. But also part of it is: http://habitatchronicles.com/2004/04/you-cant-tell-people-anything/

...it's hard to get ideas across to someone until they can interact with it themselves

So first of all, what a nice post! Second of all, it's kind of funny that jfred replied with this because out of everyone, jfred is one of the people who's picked up and understood what's happening in Spritely Goblins in particular the most, often running or creating demos of things on top of it using things I haven't even documented yet (so definitely not a person I would say isn't taking me seriously or getting what the work is doing).

But third, that link to Habitat Chronicles is right on point for a few reasons: first of all, Spritely is hugely influenced by the various generations of Habitat, from the original first-ever-graphical-virtual-worlds Habitat (premiering on the Commodore 64 in the mid 1980s, of all things!) to Electric Communities Habitat, especially because that's where the E programming language came from, which I think it's safe to say has had a bigger influence on Spritely Goblins than anything (except maybe this paper by Jonathan Rees, which is the first time I realized that "oh, object capability security is just normal programming flow"). But also, that blogpost in particular was so perfect about this subject: You can't tell people anything...!

In summary, the blogpost isn't saying that people are foolishly incapable of understanding things, but that people in general don't understand well by "being explained to". What helps people understand is experiences:

Eventually people can be educated, but what you have to do is find a way give them the experience, to put them in the situation. Sometimes this can only happen by making real the thing you are describing, but sometimes by dint of clever artifice you can simulate it.

This really congealed for me and helped me feel justified in an approach I've been taking in the Spritely project. In general, up until now I've spent most of my time between two states: coding the backend super-engineering stuff, and coding demos on top of it. You might in the meanwhile see me post technobabble onto my fediverse or birdsite accounts, but I'm not in general trying too hard to write about the structurally interesting things going on until it comes time to write documentation (whether it be for Goblins, or the immutable storage and mutable storage writeups). But in general, the way that I'm convinced people will get it is not by talk but by first, demonstration, and second, use.

Aside from the few people that have picked up and played with Goblins yet, I don't think I've hit a sufficient amount of "use" yet in Spritely. That's ok, I'm not at that stage yet, and when I am, it'll be fairly clear. (ETA: one year from now.) So let's talk about demonstration.

The first demo I wrote was the Golem demo, that showed roughly that distributed but encrypted storage could be applied to the fediverse. Cute and cool, and that turned the heads of a few fediverse implementers.

But let's face it, the best demo I've done yet was the Terminal Phase time travel demo. And it didn't hurt that it had a cool looking animated GIF to go with it:

Prior to this demo, people would ask me, "What's this Goblins thing?" And I'd try to say a number of things to them... "oh, its a distributed, transactional, quasi-functional distributed programming system safe to run in mutually suspicious networks that follows object capability security and the classic actor model in the style of the E programming language but written in Scheme!" And I'd watch as their eyes glaze over because why wouldn't their eyes glaze over after a statement like that, and then I'd try to explain the individual pieces but I could tell that the person would be losing interest by then and why wouldn't they lose interest but even realizing that I'd kind of feel despair settling in...

But when you show them a pew pew space lasers game and oh wow why is there time travel, how did you add time travel, is it using functional reactive programming or something? (Usually FRP systems are the only other ones where people have seen these kinds of time travel demos.) And I'd say nope! It doesn't require that. Mostly it looks like writing just straight-ahead code but you get this kind of thing for free. And the person would say, wow! Sounds really cool! How much work does it take to add the time travel into the game? And I just say: no extra work at all. I wrote the whole game without testing anything about time travel or even thinking about it, then later I just threw in a few extra lines to write the UI to expose the time travel part and it just worked. And that's when I see people's heads explode with wonder and the connections start to be made about what Goblins might be able to do.
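For readers wondering how that can possibly come for free, here is a tiny generic sketch of the underlying idea. It is not the Goblins API, just an illustration in Python with made-up names: when each turn of the program produces an immutable snapshot, rewinding is just a matter of indexing into the history.

```python
# Generic illustration (not the Goblins API): when every "turn" of the program
# produces a new immutable snapshot of the world, time travel is just keeping
# the old snapshots around and pointing the UI at one of them.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class World:
    tick: int
    player_x: int

def step(world: World, move: int) -> World:
    # Ordinary game logic: no knowledge of time travel here.
    return replace(world, tick=world.tick + 1, player_x=world.player_x + move)

# Play the game, keeping each snapshot.
history = [World(tick=0, player_x=0)]
for move in (1, 1, -1, 2):
    history.append(step(history[-1], move))

# The "time travel UI" is just a few extra lines that index into the history.
rewound = history[2]
print(rewound)  # World(tick=2, player_x=2)
```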

But of course, that's only a partial connection for two reasons. One is that the time travel demo above only shows off a small, minute part of the features of Goblins. And actually, the least interesting of them! It doesn't show off the distributed programming or asynchronous programming parts, it doesn't show off the cool object capability security that's safe to run in mutually suspicious networks. But still: it gave a taste that something cool is happening here. Maybe Chris hasn't just been blowing a bunch of time since finishing the ActivityPub standardization process about two and a half years ago. (Yikes, two and a half years ago!?!)

To complete the rest of that demonstration of the other things in the system requires a different kind of demo. Terminal Phase was a demo to show off the synchronous half of Goblins, but where Goblins really shines is in the asynchronous, distributed programming stuff. That's not ready to show off yet, but I'll give you the first taste of what's in progress:

(Actually a bit more has progressed since I recorded that GIF, multiple chatrooms and so on, but not really worth bothering to show off quite yet.)

Hmm, that's not really all that thrilling. A chatroom that looks about as featureful as IRC, maybe less? Well, it could be more exciting if you hear that the full chat protocol implementation is only about 250 lines of code, including authenticating users and posts by users. That's smaller even than its corresponding GUI code, which is less than 300 lines of code. So the exciting thing there is how much heavy lifting Goblins takes care of for you.

But that's hardly razzle-dazzle exciting. In order for me to hint at the rest of what's happening here, we need to put out an asynchronous programming demo that's as or more interesting than the time travel demo. And I expect to do that. I hope soon enough to show off stuff that will make people go, "Oh, what's going on here?"

But even that doesn't complete the connection for people, because showing is one thing but to complete the loop, we need people to use things. We need to get this stuff in the hands of users to play with and experiment themselves. I have plans to do that... and not only that, make this stuff not intimidating for newcomers. When Spritely guides everyday people towards extending Spritely from inside of Spritely as it runs, that's when it'll really click.

And once it clicks sufficiently, it'll no longer seem exciting, because people will just come to expect it. A good example of that comes from the aforementioned You can't tell people anything article:

Years ago, before Lucasfilm, I worked for Project Xanadu (the original hypertext project, way before this newfangled World Wide Web thing). One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?” or “why would anyone want to put documents online?” Alas, many things really must be experienced to be understood. We didn’t have much of an experience to deliver to them though — after all, the whole point of all this evangelizing was to get people to give us money to pay for developing the software in the first place! But someone who’s spent even 10 minutes using the Web would never think to ask some of the questions we got asked.

Eventually, if we succeed, the ideas in Spritely will no longer seem exciting... because people will have internalized and come to expect them. Just like hyperlinks on the web today.

But to get there, in the meanwhile, we have to get people interested. To become so successful as to be mundane, we have to first be razzle-dazzle exciting. And to that end, that's why I take the demo approach to Spritely. Because it's hard to tell someone something... but showing them, that's another matter.

PS: It's also not true that people don't get what I'm doing, and that's even been reflected materially. I've been lucky to be supported over the last few years by a combination of a grant from Samsung's Stack Zero and one from NLNet, not to mention quite a few donors on Patreon. I do recognize and appreciate that people are supporting me. In some ways receiving this support makes me feel even more serious about the need to demonstrate and prove that what I'm doing is real. I hope I am doing and will continue to do a sufficient job, and hope that the upcoming demos contribute to that more materially!

PPS: If, in the meanwhile, you're already excited, check out the Goblins documentation. The most exciting stuff is coming in the next major release (which will be out soon), which is when the distributed programming tools will be made available to users of the system for the first time. But if you want to get a head start, the code you'll be writing will mostly work the same between the distributed and non-distributed (as in, distributed across computers/processes) asynchronous stuff, so if you start reading the docs today, most of your code will already just work on the new stuff once released. And if you do start playing around, maybe drop by the #spritely channel on freenode and say hello!


FACILELOGIN

The Integrated Supply Chain for CIAM

The following is a summary of the CIAM maturity model, which I talk about in detail in this blog. Now the question is how we get from level-0 to level-4, or from nonexistent to optimized. That’s where we see the need for a carefully designed integrated supply chain for CIAM.

In general, a supply chain is a system of organizations, people, activities, information, and other resources involved in supplying a product or service to a consumer, from inception to delivery. In the industrial supply chain, we see five main phases.

Source: https://thenewstack.io/a-successful-api-strategy-needs-a-digital-supply-chain-and-a-thriving-ecosystem/

Under sourcing, you find the raw materials, machinery and labour that you need to build your product. In doing that, you will also find out the suppliers that you need to work with. One of the McKinsey reports claims that, on average, an auto manufacturer has around 250 tier-one suppliers.

Then, during the manufacturing phase, you build the product; you then distribute it, sell it, and finally the consumers start using it.

We can build a similar analogy in the digital supply chain.

If you are building a CIAM solution, then in the discovery phase, you need to figure out what you need to buy and what you need to build. You need not build everything from scratch.

Uber, for example, uses Google Maps for navigation. It’s one of the most critical parts of Uber for building a smooth experience for its riders. From 2016 to 2018, Uber paid 58M USD to Google for using Google Maps. But then again, that’s peanuts when you compare it with Uber’s 2019 revenue of 14.15 billion USD (well under half a percent of it). So, you need to make the right decision, and the discovery phase is critical to finding out what’s best for you.

In terms of CIAM, during the discovery phase, you need to find out what you want for your IAM system, CRM system, marketing platform, e-commerce platform, fraud detection system, risk engine, CMS, data management platform and so on. For each of these systems, you would need to pick a supplier or vendor. Once again, one of the McKinsey reports claims that technology companies have an average of 125 suppliers in their tier-one group.

Then again, you need not pick everything at once. You can go for a phased approach.

In the development phase, you start building your CIAM solution by integrating multiple systems together, which should finally result in the right level of user experience, one that helps you drive revenue growth by leveraging identity data to acquire and retain customers.

Then, during the deployment phase, you need to come up with a model to address your non-functional requirements, such as scalability, security and so on. Once the system is up and running, you start on-boarding customers. Next, you need to start monitoring the customer experience: how the customers use your product, their pain points, and so on. And then the digital supply chain continues.

Then, you go to the next phase, do the discovery based on the services you need, and keep going.

The Integrated Supply Chain for CIAM was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mike Jones: self-issued

Concise Binary Object Representation (CBOR) Tags for Date progressed to IESG Evaluation

The “Concise Binary Object Representation (CBOR) Tags for Date” specification has completed IETF last call and advanced to evaluation by the Internet Engineering Steering Group (IESG). This is the specification that defines the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices for personal identification” working group.

The specification is available at:

https://tools.ietf.org/html/draft-ietf-cbor-date-tag-06

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-cbor-date-tag-06.html
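To give a rough sense of what these tags look like in practice, here is a minimal sketch using the third-party cbor2 Python library. The tag numbers used (100 for days since 1970-01-01 and 1004 for an RFC 3339 full-date string) reflect my reading of the draft and should be verified against the published text.

```python
# Sketch: encoding a calendar date with the CBOR date tags described in the
# draft, using the third-party cbor2 package. The tag numbers (100 = days
# since 1970-01-01, 1004 = RFC 3339 full-date string) are assumptions to be
# checked against the final specification.
from datetime import date
import cbor2

birth_date = date(1985, 4, 12)

# Tag 1004: full-date as an RFC 3339 text string, e.g. "1985-04-12"
full_date_item = cbor2.CBORTag(1004, birth_date.isoformat())

# Tag 100: number of days since the epoch date 1970-01-01
days_item = cbor2.CBORTag(100, (birth_date - date(1970, 1, 1)).days)

encoded = cbor2.dumps({"full_date": full_date_item, "days": days_item})
print(encoded.hex())
```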

Friday, 28. August 2020

Simon Willison

Mexican bean shakshuka

The first time I ever tried a shakshuka I cooked it from this recipe by J. Kenji López-Alt. It quickly became my favourite breakfast.

The only good thing about this pandemic is that I've got to spend more time cooking. I've developed a variant on Kenji's shakshuka that I'm calling a Mexican bean shakshuka. It's delicious. Here's how to make it.

Ingredients

Red bell pepper, chopped small
Medium-sized red onion, chopped small
Olive oil (3 tablespoons)
4 eggs
3 cloves garlic, minced
2 cans tomatoes (I use Muir Glen fire roasted diced tomatoes)
Half a can of black beans
2 teaspoons cumin
1.5 tablespoons paprika
Some salt
A chipotle pepper in adobo sauce (or half a pepper depending on your spiciness tolerance) - finely chopped

The crucial ingredient here is the chipotle pepper in adobo sauce (I use this La Morena one). If you've never encountered these before they are absolute magic: a total explosion of flavour, and I keep on finding new uses for them (most recently spaghetti bolognese). The can contains 6-8 peppers and I only use one for this recipe (finely chopped) - thankfully once opened it lasts in the fridge for a couple of months.

Directions

There are three phases: charring the peppers and onions, cooking the sauce and poaching the eggs. It takes 25-30 minutes total.

Heat 3 tablespoons olive oil in a large frying pan (or cast iron pan)
Add the chopped bell pepper and onion
Cook without stirring for 6 minutes - the idea is to get the ingredients to char a little
Stir once, then cook for another 4 minutes
Stir in the minced garlic and cook for 30 seconds
Stir in the cumin and paprika
Add the two cans of diced tomatoes, half can of black beans and the finely chopped chipotle-in-adobo. Mix well.
Simmer for ten minutes, stirring occasionally.
Make four equally spaced indentations in the sauce using a large spoon, place an egg in each one, then sprinkle the yolk with some salt
Cover with a lid and allow the eggs to poach for six minutes.
Serve!

Kenji's recipe includes photos illustrating how the egg poaching step works.

The great thing about a shakshuka is that it's an incredibly flexible dish. Once you've mastered the basic form it's easy to come up with new tasty variants based around the key idea of poaching eggs directly in the sauce. I sometimes make breakfast "shakshukas" out of leftovers from the night before - bean chiles and Indian curries have worked particularly well for that.


Weeknotes: California Protected Areas in Datasette

This week I built a geospatial search engine for protected areas in California, shipped datasette-graphql 1.0 and started working towards the next milestone for Datasette Cloud.

California Protected Areas in Datasette

This weekend I learned about CPAD - the California Protected Areas Database. It's a remarkable GIS dataset maintained by GreenInfo Network, an Oakland non-profit and released under a Creative Commons Attribution license.

CPAD is released twice annually as a shapefile. Back in February I built a tool called shapefile-to-sqlite that imports shapefiles into a SQLite or SpatiaLite database, so CPAD represented a great opportunity to put that tool to use.

Here's the result: calands.datasettes.com

It provides faceted search over the records from CPAD, and uses my datasette-leaflet-geojson plugin to render the resulting geometry records on embedded maps.

I'm building and deploying the site using this GitHub Actions workflow. It uses conditional-get (see here) combined with the GitHub Actions cache to download the shapefiles as part of the workflow run only if the downloadable file has changed.

This project inspired some improvements to the underlying tools:

datasette-leaflet-geojson now handles larger polygons and is smarter about knowing when to load additional JavaScript and CSS
shapefile-to-sqlite can now create spatial indexes and has a new -c option (inspired by csvs-to-sqlite) for extracting specified columns into separate lookup tables

datasette-graphql 1.0

I'm trying to get better at releasing 1.0 versions of my software.

For me, the most significant thing about a 1.0 is that it represents a promise to avoid making backwards incompatible releases until a 2.0. And ideally I'd like to avoid ever releasing 2.0s - my perfect project would keep incrementing 1.x dot-releases forever.

Datasette is currently at version 0.48, nearly three years after its first release. I'm actively working towards the 1.0 milestone for it but it may be a while before I get there.

datasette-graphql is less than a month old, but I've decided to break my habits and have some conviction in where I've got to. I shipped datasette-graphql 1.0 a few days ago, closely followed by a 1.0.1 release with improved documentation.

I'm actually pretty confident that the functionality baked into 1.0 is stable enough to make a commitment to supporting it. It's a relatively tight feature set which directly maps database tables, filter operations and individual rows to GraphQL. If you want to quickly start trying out GraphQL against data that you can represent in SQLite I think it's a very compelling option.
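For a sense of what that looks like, here is a hedged sketch of querying such an endpoint over HTTP from Python. The URL, table name ("repos") and columns are hypothetical; the overall shape (a nodes list plus totalCount, with arguments like first: and search:) follows the plugin's documented pattern, but check the documentation for what your own schema exposes.

```python
# Sketch: querying a Datasette instance that has datasette-graphql installed.
# The endpoint URL, table name ("repos") and columns are hypothetical.
import json
import urllib.request

query = """
{
  repos(first: 5, search: "datasette") {
    totalCount
    nodes {
      id
      full_name
    }
  }
}
"""

request = urllib.request.Request(
    "https://example.com/graphql",  # hypothetical endpoint
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["data"]["repos"]["totalCount"])
```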

New datasette-graphql features this week:

Support for multiple reverse foreign key relationships to a single table, e.g. an article table that has created_by and updated_by columns that both reference users. Example. #32
The {% set data = graphql(...) %} template function now accepts an optional variables= parameter. #54
The search: argument is now available for tables that are configured using Datasette's fts_table mechanism. #56
New example demonstrating GraphQL fragments. #57
Added GraphQL execution limits, controlled by the time_limit_ms and num_queries_limit plugin configuration settings. These default to 1000ms total execution time and 100 total SQL queries per GraphQL execution. Limits documentation. #33

Improvements to my TILs

My til.simonwillison.net site provides a search engine and browse engine over the TIL notes I've been accumulating in simonw/til on GitHub.

The site used to link directly to rendered Markdown in GitHub, but that has some disadvantages: most notably, I can't control the <title> tag on that page so it has poor implications for SEO.

This week I switched it over to hosting each TIL as a page directly on the site itself.

The tricky thing to solve here was Markdown rendering. GitHub's Markdown flavour incorporates a bunch of useful extensions for things like embedded tables and code syntax highlighting, and my attempts at recreating the same exact rendering flow using Python's Markdown libraries fell a bit short.

Then I realized that GitHub provide an API for rendering Markdown using the same pipeline they use on their own site.

So now the build script for the SQLite database that powers my TILs site runs each document through that API, but only if it has changed since the last time the site was built.

I wrote some notes on using their Markdown API in this TIL: Rendering Markdown with the GitHub Markdown API.
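The core of that approach is a single POST to the API. Here is a minimal sketch; the payload values are illustrative, and an auth token is optional but raises the rate limit.

```python
# Sketch: rendering GitHub Flavored Markdown to HTML via GitHub's Markdown
# API. The text and context values here are illustrative.
import json
import urllib.request

payload = {
    "text": "# Hello\n\n| a | b |\n|---|---|\n| 1 | 2 |",
    "mode": "gfm",            # GitHub Flavored Markdown
    "context": "simonw/til",  # repository context for relative links/issues
}
request = urllib.request.Request(
    "https://api.github.com/markdown",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Accept": "application/vnd.github.v3+json",
        # "Authorization": "token YOUR_GITHUB_TOKEN",  # optional
    },
)
with urllib.request.urlopen(request) as response:
    html = response.read().decode("utf-8")

print(html)
```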

Storing the rendered HTML in my database also meant I could finally fix a bug with the Atom feed for that site, where advanced Markdown syntax wasn't being correctly rendered in the feed.

The datasette-atom plugin I use to generate the feed applies Mozilla's Bleach HTML sanitization library to avoid dynamically generated feeds accidentally becoming a vector for XSS. To support the full range of GitHub's Markdown in my feeds I released version 0.7 of the plugin with a deliberately verbose allow_unsafe_html_in_canned_queries plugin setting which can opt canned queries out of the escaping - which should be safe because a canned query running against trusted data gives the site author total control over what might make it into the feed.

Datasette Cloud

I'm spinning up work on Datasette Cloud again, after several months running it as a private alpha. My next key milestone is to be able to charge subscribers money - I know from experience that until you're charging people actual money it's very difficult to be confident that you're working on the right things.

TIL this week

Working around the size limit for nodeValue in the DOM
Outputting JSON with reduced floating point precision
Dynamically loading multiple assets with a callback
Providing a "subscribe in Google Calendar" link for an ics feed
Creating a dynamic line chart with SVG
Skipping a GitHub Actions step without failing
Rendering Markdown with the GitHub Markdown API
Piping echo to a file owned by root using sudo and tee
Browsing your local git checkout of homebrew-core

Releases this week

asgi-csrf 0.7.1 - 2020-08-27
datasette-graphql 1.0.1 - 2020-08-24
datasette-graphql 1.0 - 2020-08-23
datasette-graphql 0.15 - 2020-08-23
datasette-render-images 0.3.2 - 2020-08-23
datasette-atom 0.7 - 2020-08-23
shapefile-to-sqlite 0.4.1 - 2020-08-23
shapefile-to-sqlite 0.4 - 2020-08-23
datasette-auth-passwords 0.3.2 - 2020-08-22
shapefile-to-sqlite 0.3 - 2020-08-22
datasette-leaflet-geojson 0.6 - 2020-08-21
datasette-leaflet-geojson 0.5 - 2020-08-21
sqlite-utils 2.16 - 2020-08-21

Thursday, 27. August 2020

Vishal Gupta

Money might be obsolete in 40 years

You can look at money in three ways:

A global accounting system — to settle value of transactions between people.

A behavior control or gamification — to make people get out of bed and do things.

A way to channelize resources efficiently — balance scarcity with demand.

However, it is a tool that really belongs to the Stone Age. Why?

It fails at global accounting, as it operates in silos and remains two-dimensional, requiring manual reconciliation, transaction reporting, and taxation.

It fails at behavior control or gamification, as it does not differentiate between good and illegal behavior. It creates perverse incentives and does not prevent money laundering.

It fails at channelizing resources, as it does not account for things like environmental costs or family integrity, and it worsens wealth inequality.

What will replace it?

There will be at least 2 exponential trends.

Robots and AI will take over jobs and incomes will decrease. Wealth asymmetry will keep increasing.

As autonomous vehicles become more common, the availability of and access to shared resources changes. They become a platform to deliver on-demand physical goods and services shared in communities.

As computing power keeps increasing, humanity will adopt new forms of decentralized surveillance and compliance. Reputation will drive human behavior in getting access to community-driven public utilities and services.

The concept of ownership may evaporate as the cost of disposal and environmental damage may be way more than the cost of access to the shared resource.

Most people will be able to live purely on the entitlements they receive for being good citizens, voting, crowdsourcing the right policies, and helping AI to serve better.

Governments will launch more and more robotic services on top of autonomous vehicles as public services.


Boris Mann's Blog

Went for a tour of the in-progress @vanmuralfest murals and met animalitoland at 7th and Ontario.

Went for a tour of the in-progress @vanmuralfest murals and met animalitoland at 7th and Ontario.

Wednesday, 26. August 2020

Identity Praxis, Inc.

IAPP Releases “US State Comprehensive Privacy Law Comparison.” Why this is relevant to The Identity Nexus

I’ve been spending a lot of time thinking about and drilling into the nature of personal information. I’ve been thinking about its role within society, and how we can find a risk-reward equilibrium (i.e., The Identity Nexus®) when it comes to the exchange of personal information between public institutions, private organizations, and people (i.e., data subjects).

To make this exercise manageable, I landed on three critical cornerstones that we should all consider to equitably manage and exchange personal information, namely privacy, security, and compliance.

Personal Information Management Cornerstones

The three cornerstones of personal information management are:

Privacy: the ability for a data subject to have agency over their phygital self, that is, the ability to control the who, what, when, where, for how long, and for what purpose any entity can access and use their physical or digital self (aka personal information) or capital assets (i.e. the physical asset or any data produced by those assets, such as a connected car, house, clothes, phone, pet, etc.).

Security: the technologies and processes that ensure only authorized people, systems, services, etc. can access an individual’s physical and digital self or capital assets.

Compliance: the rules, policies, rights, and governance models that we all technologically and socially agree to follow when it comes to personal information management (e.g. collection, storage, aggregation, refinement, dissemination, exchange, valuation, etc.), and the punishment if any of these rules, policies, rights, and governance models are not followed or are breached.

I’ll write more on each of these cornerstones later, but I wanted to take a moment to talk about compliance briefly.

Personal Information Management Compliance

Without effective personal information management compliance, there will be little chance of achieving The Identity Nexus. People and their data will be “legally” abused (more than they are now), criminals will run amuck, and there will be a total loss of trust. We’ll be facing chaos, anarchy.

Regulations are an important and key element of personal information management compliance. Personal information management regulations are the society and industry laws and policies we all agree to and/or are subjected to in order to give people rights to their data, along with the guardrails we put on organizations to ensure that they properly steward people’s data.

For those whose job it is to write and manage regulations, wrapping one’s head around the nuances of all the personal information management regulations can be quite a challenge, and for the layperson, it can be nearly impossible. Luckily, we have organizations like the International Association of Privacy Professionals that do yeoman’s work in helping their members and the industry at large navigate the personal information regulatory landscape.

IAPP US State Comprehensive Privacy Law Comparison

On July 7, 2020, the IAPP released an update to its US state comprehensive privacy law comparison, a matrix that evaluates the personal information management regulations in the works across the states. The matrix lists the stage each relevant privacy legislative effort is in within a given state, and the eight most common rights each law affords individuals and the eight obligations that organizations must adhere to (see the image below).

People’s personal information management rights

Right of access
Right of rectification
Right of deletion (aka right to be forgotten)
Right of restrictions
Right of portability
Right of opt-out
Right against automated decision making
Private right of action

Organizations’ obligations toward personal information stewardship

Strict age opt-in or opt-out or the prohibition on the sale of information
Notice/transparency requirement
Data breach notification
Risk assessments
Prohibition on discrimination when exercising rights
Purpose limitations
Processing limitations
Fiduciary duty

It is essential for every one of us to study and understand how these laws can benefit us personally and how they may affect our businesses, now and in the future. With this understanding, we can then strive to find the equilibrium state for the flows of personal information within society: we can find The Identity Nexus. That is, we can ensure that value exchange is kept in balance with risk for everyone participating in the personal information management economy.

The Identity Nexus is a registered trademark of Identity Praxis, Inc.

The post IAPP Releases “US State Comprehensive Privacy Law Comparison.” Why this is relevant to The Identity Nexus appeared first on Identity Praxis, Inc..


Phil Windley's Technometria

What is SSI?

Summary: If your identity system doesn't use DIDs and verifiable credentials in a way that gives participants autonomy and freedom from intervening administrative authorities, then it's not SSI.

A few days ago I was in a conversation with a couple of my identerati friends. When one used the term "SSI", the other asked him to define it since there were so many systems that were claiming to be SSI and yet were seemingly different. That's a fair question. So I thought I'd write down my definition in hopes of stimulating some conversation around the topic.

I think we've arrived at a place where it's possible to define SSI and get broad consensus about it. SSI stands for self-sovereign identity, but that's not really helpful since people have different ideas about what "sovereign" means and what "identity" means. So, rather than try to go down those rabbit holes, let's just stick with "SSI."[1]

SSI has the following properties:

SSI systems use decentralized identifiers (DIDs) to identify people, organizations, and things. Decentralized identifiers provide a cryptographic basis for the system and can be employed so that they don't require a central administrative system to manage and control the identifiers. Exchanging DIDs is how participants in SSI create relationships, a critical feature.

SSI participants use verifiable credential exchange to share information (attributes) with each other to strengthen or enrich relationships. The system provides the means of establishing credential fidelity.

SSI supports autonomy for participants. The real value proposition of SSI is autonomy: not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy implies that participants interact as peers in the system. You can build systems that use DIDs and verifiable credentials without giving participants autonomy.

Beyond these there are lots of choices system architects are making. Debates rage about how specifically credential exchange should work, whether distributed ledgers are necessary, and, if so, how should they be employed. But if you don't use DIDs and verifiable credentials in a way that gives participants autonomy and freedom from intervening administrative authorities, then you're not doing SSI.

As a consequence of these properties, participants in SSI systems use some kind of software agent (typically called a wallet for individuals) to create relationships and exchange credentials. They don't typically see or manage keys or passwords. And there's no artifact called an "identity." The primary artifacts are relationships and credentials. The user experience involves managing these artifacts to share attributes within relationships via credential exchange. This user experience should be common to all SSI systems, although the user interface and what happens under the covers might be different between SSI systems or vendors on those systems.
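To make those artifacts a little more concrete, here is an illustrative sketch (as Python dictionaries, with made-up identifiers and placeholder key and signature values) of roughly what a DID document and a verifiable credential look like under the W3C data models. Field names are indicative only; real systems layer proof suites, revocation, schemas, and exchange protocols on top.

```python
# Illustrative only: rough shapes of a DID document and a verifiable
# credential, with made-up identifiers and placeholder key/signature values.
import json

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:alice123",
    "verificationMethod": [{
        "id": "did:example:alice123#key-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:alice123",
        "publicKeyBase58": "...",  # placeholder key material
    }],
    "authentication": ["did:example:alice123#key-1"],
}

verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ProofOfEmploymentCredential"],
    "issuer": "did:example:acme-corp",
    "issuanceDate": "2020-08-26T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice123",  # the DID the claims are about
        "role": "Engineer",
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "verificationMethod": "did:example:acme-corp#key-1",
        "jws": "...",  # placeholder signature
    },
}

print(json.dumps(did_document, indent=2))
print(json.dumps(verifiable_credential, indent=2))
```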

I'm hopeful that, as we work more on interoperability, the implementation differences will fade away so that we have a single identity metasystem where participants have choice about tools and vendors. An identity metasystem is flexible enough to support the various ad hoc scenarios that the world presents us and will support digital interactions that are life-like.

Notes

[1] This is not to say I don't have opinions on what those words mean in this context. I've written about "sovereign" in Cogito, Ergo Sum, On Sovereignty, and Self Sovereign is Not Self Asserted.

Photo Credit: Girl On A Bicycle from alantankenghoe (CC BY 2.0)

Tags: identity ssi self-sovereign credentials sovrin

Monday, 24. August 2020

Kim Cameron's Identity Weblog

Technical naïveté: UK’s Matt Hancock sticks an ignorant finger in the COVID dike

The following letter from a group of UK parliamentarians rings alarm bells that should awaken all of us – I suspect similar things are happening in the shadows well beyond the borders of the United Kingdom…

The letter recounts the sad story of one more politician with no need for science or expertise – for him, rigorous attention to what systems do to data protection and privacy can simply be dismissed as “bureaucracy”.  Here we see a man in over his head – evidently unaware that failure to follow operational procedures protecting security and privacy introduces great risk and undermines both public trust and national security.  I sincerely hope Mr. Hancock brings in some advisors who have paid their dues and know how this type of shortcut wastes precious time and introduces weakness into our technical infrastructure at a time when cyberattack by organized crime and nation states should get politicians to sober up and get on the case.

Elizabeth Denham CBE, UK Information Commissioner
Information Commissioner’s Office
Wycliffe House
Water Lane
Wilmslow
Cheshire SK9 5AF

Dear Elizabeth Denham,

We are writing to you about the Government’s approach to data protection and privacy during the COVID-19 pandemic, and also the ICO’s approach to ensuring the Government is held to account.
During the crisis, the Government has paid scant regard to both privacy concerns and data protection duties. It has engaged private contractors with problematic reputations to process personal data, as highlighted by Open Democracy and Foxglove. It has built a data store of unproven benefit. It chose to build a contact tracing proximity App that centralised and stored more data than was necessary, without sufficient safeguards, as highlighted by the Human Rights Committee. On releasing the App for trial, it failed to notify yourselves in advance of its Data Protection Impact Assessment – a fact you highlighted to the Human Rights Committee.

Most recently, the Government has admitted breaching their data protection obligations by failing to conduct an impact assessment prior to the launch of their Test and Trace programme. They have only acknowledged this failing in the face of a threat of legal action by Open Rights Group. The Government have highlighted your role at every turn, citing you as an advisor looking at the detail of their work, and using you to justify their actions.

On Monday 20 July, Matt Hancock indicated his disregard for data protection safeguards, saying to Parliament that “I will not be held back by bureaucracy” and claiming, against the stated position of the Government’s own legal service, that three DPIAs covered “all of the necessary”.

In this context, Parliamentarians and the public need to be able to rely on the Regulator. However, the Government not only appears unwilling to understand its legal duties, it also seems to lack any sense that it needs your advice, except as a shield against criticism.
Regarding Test and Trace, it is imperative that you take action to establish public confidence – a trusted system is critical to protecting public health. The ICO has powers to compel documents to understand data processing, contractual relations and the like (Information Notices). The ICO has powers to assess what needs to change (Assessment Notices). The ICO can demand particular changes are made (Enforcement notices).  Ultimately the ICO has powers to fine Government, if it fails to adhere to the standards which the ICO is responsible for upholding.

ICO action is urgently required for Parliament and the public to have confidence that their data is being treated safely and legally, in the current COVID-19 pandemic and beyond.

Signed,
Apsana Begum MP
Steven Bonnar MP
Alan Brown MP
Daisy Cooper MP
Sir Edward Davey MP
Marion Fellows MP
Patricia Gibson MP
Drew Hendry MP
Clive Lewis MP
Caroline Lucas MP
Kenny MacAskill MP
John McDonnell MP
Layla Moran MP
Grahame Morris MP
John Nicholson MP
Sarah Olney MP
Bell Ribeiro-Addy MP
Tommy Sheppard MP
Christopher Stephens MP
Owen Thompson MP
Richard Thomson MP
Philippa Whitford MP

 

[Thanks to Patrick McKenna for keeping me in the loop]


Rebecca Rachmany

Anonymity and Distributed Governance: A Bad Idea

I host a weekly call on distributed governance. This blog provides my personal opinions, but by all means view the entire discussion here.

One of the big debates in the Genesis DAO started by DAOstack was the question of anonymity. Should people be able to make proposals and ask for budgets without providing a real identity?

Part of the problem was a structural problem with DAOstack at the time: there was no escrow system. You could allocate funds to a project, but you could not hold the funds until the project was complete. In other words, everyone was paid up front for their project as soon as the group approved it.

Another aspect of the problem was human: we all feel a little weird chatting with someone faceless. On the discussion boards, one person could potentially have multiple pseudonyms. If we were discussing something controversial, it would be fairly easy for someone to pretend to be multiple people arguing for or against the proposal. It was also fairly easy to troll the system anonymously. It wasn’t as easy to game the voting, though it was certainly possible.

It Ain’t Real

The example of the Genesis DAO was somewhat trivial, because it was a small number of people who actually did know one another. None of the anonymous people seriously asked for budget (though there was an anonymous troll), the amounts of money in question weren’t huge, and it was a small enough community that everyone pretty much knew one another.

In real life, identity is fundamental to democracy. It amazes me how many people cherish their anonymity so much that this is under debate. Our weekly chat about anonymity was wide-ranging, and as usual, we came to the conclusion that “it depends.”

Does It Really Depend?

Personally, I think it doesn’t depend on the situation at all. At almost all stages of governance, you need to know some information about the person. You almost never need to know their actual name, but you almost always need to know, at a minimum, whether they have the right to influence a particular situation.

My minimum viable values statement about governance is: If the decision will impact you, you should have the ability to influence the decision. How much influence you should have is a different question. For example, if you are not an expert on hydroelectric dams, maybe you don’t get to decide where to build the dam, but if you live near the river, your perspective should be taken into account.

This isn’t the way democracy runs today. Corporations make decisions that impact their workers, customers and the environment with no regard for their opinions. Governments determine foreign policy without having any responsibility for the citizens of foreign countries who will be impacted by those policies. Lawmakers in one state make laws that influence neighboring states. We call that democracy. I digress but it’s important. We are completely normalized to a situation in which, as long as we feel fairly treated inside our organization, the external people’s feelings are irrelevant to what we call fair process.

What’s in a Name?

Throughout most of the decision-making process, therefore, full anonymity is not appropriate. Knowing people’s name isn’t particularly important, but in each stage of the process, some identifying information is helpful to democracy.

In discussions and sentiment signalling, you need to know a person's affiliation and expertise. Are they a resident? Do they work for the solar panel company? Are they an expert in urban planning? Is the electric company going to make a bid to buy up their land if this project is approved? Did they educate themselves and read multiple perspectives on the issue at hand? In the best of cases, you would also show an indicator of their reputation in the domains being discussed.

In problem definition, you need to know the person's sentiment and perspective on the issue as well as something about their cognitive abilities. Are they good at detail or systems-wide analysis? Can they integrate multiple perspectives and listen well to others? Does the makeup of the problem-definition team appropriately represent enough different perspectives on the problem? Are they good at asking deeper questions, or do you use a highly-facilitated problem-definition process?

For proposal-making, while it is often optimal to let everyone propose ideas, it's equally important to have the right experts in the room. Is this person an electrician or architect? Have they done other successful projects in this specific area? Again, the best ideas might come from someone who doesn't have the proper background, but the process of solidifying the proposal needs to be grounded in reality.

For voting, you need to know that this is the individual they said they were, and that they are voting in accordance with the rules of the voting system.

For execution of the decision, you need to know the qualifications of the people carrying out the work.

For accountability, oversight and assessment of the process, you need to know the qualifications and the vested interests of the people.

Finally, for whistle-blowing, you need some level of anonymity but you may also need verifiable evidence. Throughout the entire process, there needs to be some mechanism for people to give feedback safely when their own self-interest might be endangered. Scientists at a chemical company are the best qualified to expose if there might be unpublished side-effects to some new product. If there are good enough privacy and anonymity controls, such information could be leaked more transparently while verifying the reliability of the sources.

Collapsing reality and desire

One of the reasons people clamor for anonymity is that the tech collapses our identity, our name, and our private data into one thing. Identity isn't just your name. Not all of your data has to be identified in every transaction; collapsing these concepts is sloppy and leads people to think there are only two possibilities: complete anonymity and complete exposure.

Identifying yourself by your name is the easiest way for people to eliminate anonymity, but it exposes more information than is necessary. I think that most of the debate over anonymity is due to the fact that we haven’t found creative ways to de-couple someone’s actual name from other attributes about them.

Technically, it would be possible to create a system where someone has a different name to different people. I would look at all of Alice’s posts and develop an opinion of her. You might be looking at Andrea’s posts — not knowing that Andrea and Alice were actually the same person, but anonymized so that when we met this person in real life, let’s say it’s really Alexa, we wouldn’t be able to attribute that information to her unless she wanted us to. She might appear differently on different forums, because she’s more of an expert in oceanography than in architecture. That’s an extreme implementation, but it’s just one example of how technology can be used to provide us the information we need to form opinions online without compromising someone’s identity.
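As a toy illustration of how such context-specific pseudonyms could work in practice, here is a minimal sketch of one possible construction (not a description of any existing system): a keyed hash of a personal secret and the forum name yields a handle that is stable within one context but unlinkable across contexts.

```python
# Toy sketch: deriving a stable, per-context pseudonym from a personal secret.
# The same person gets a consistent handle within one forum, but handles from
# different forums cannot be linked without the secret. Illustrative only; a
# real deployment would need key management, revocation and abuse controls.
import hashlib
import hmac

ADJECTIVES = ["amber", "brisk", "coral", "dusky", "eager", "fabled"]
ANIMALS = ["otter", "heron", "lynx", "ibis", "marten", "plover"]

def pseudonym(personal_secret: bytes, context: str) -> str:
    digest = hmac.new(personal_secret, context.encode("utf-8"), hashlib.sha256).digest()
    adjective = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    animal = ANIMALS[digest[1] % len(ANIMALS)]
    return f"{adjective}-{animal}-{digest[2:4].hex()}"

secret = b"alexa-private-key-material"
print(pseudonym(secret, "oceanography-forum"))   # stable handle in this forum
print(pseudonym(secret, "architecture-forum"))   # different, unlinkable handle
```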

As soon as we recognize that we can develop solutions for allowing different levels of participation and providing the data we need without exposing something sensitive, we can start to have a conversation about the need for anonymity in specific situations.

Enjoy our talk about anonymity and democracy. Most people don’t agree with me on the call! Feel free to join us. We have new topics every week, and the call is open to all.


Boris Mann's Blog

Applying free shipping as well as a 100% discount in Shopify

There are 20 pages of requests for multiple discounts for Shopify to be able to apply free shipping.

The way to do it without a plugin, is to add a new rate, label it “Free Shipping”, and set the conditions to only apply when the min and max are both $0.

Any other paid shipping options will still display – and be selectable by the customer! – but obviously they can just pick free shipping and there won’t be a charge.

So to give a “free” item to someone, make a dollar value discount code of the price of the item, so that the cost is zero, and then this free shipping rate will appear. Note: if you are dealing with multiple currencies, sometimes the conversion means that your dollar value discount code makes it not quite $0. You’ll need to experiment and set a discount $ value appropriately.

Monday, 24. August 2020

Boris Mann's Blog

Sean @coates wrote up how he checked Canada’s COVID Alert app and submitted a fix. Thanks Sean, & thanks to the Canadian Digital Service for source code availability & responsiveness!

Sean @coates wrote up how he checked Canada’s COVID Alert app and submitted a fix.

Thanks Sean, & thanks to the Canadian Digital Service for source code availability & responsiveness!


Kim Cameron's Identity Weblog

Identity Blog Active Again

Many readers will already know that I retired from Microsoft after twenty years working as Chief Architect of Identity and other related roles. I had a great time there, and Microsoft adopted the Laws of Identity in 2005 at a time when most tech companies were still under dark influence of “Privacy is Dead”, building systems destined to crash at endless cost into a privacy-enabled future. Microsoft is a big complicated place, but Bill Gates and Satya Nadella were as passionate as me about moving Microsoft and the industry towards digital identity respectful of the rights of individuals and able to empower both individuals and organizations. I thank them and all my wonderful colleagues and friends for a really great ride.

In the last years I led Microsoft to support Decentralized Identity as the best way to recognize the needs and rights of individual people, as well as the way to move enterprises and governments past the security problems, privacy roadblocks and dead ends that had resulted from the backend systems of the last century. Truly exciting, but I needed more time for my personal life.

I love being completely in control of my time, but my interest in digital identity is as keen as ever. So besides working with a small startup in Toronto called Convergence Tech on exciting innovation around Verifiable Credentials and Decentralized Identity, I’ve decided to start blogging again. I will, as always, attempt to dissuade those responsible for the most egregious assaults on the Laws of Identity. Beyond that, I’ll share my thoughts on developments in the world of Decentralized Identity and technology that enfranchises the individual person, so each of us can play our role in a democratic and secure digital future.


Matt Flynn: InfoSec | IAM

Addressing the Cloud Security Readiness Gap

Cloud security is about much more than security functionality. The top cloud providers all seem to have a capable suite of security features and most surveyed organizations report that they see all the top cloud platforms as generally secure. So, why do 92% of surveyed organizations still report a cloud security readiness gap? They’re not comfortable with the security implications of moving workloads to cloud even if they believe it’s a secure environment and even if the platform offers a robust set of security features. 

Two contributing factors to that gap include:

78% reported that cloud requires different security than on-prem. With security skills in short supply, the need to quickly ramp up on a new architecture and a new set of security capabilities can certainly slow progress.
Only 8% of respondents claimed to fully understand the cloud security shared responsibilities model; they don't even know what they're responsible for, never mind how to implement the right policies and procedures, hire the right people, or find the right security technologies.

I recently posted about how Oracle is addressing the gap on the Oracle Cloud Security blog. There's a link in the post to a new whitepaper from Dao Research that evaluates the cloud security capabilities offered by Amazon AWS, Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure.

Oracle took some criticism for arriving late to the game with our cloud infrastructure offering. But, several years of significant investments are paying off. Dao's research concludes that “Oracle has an edge over Amazon, Microsoft, and Google, as it provides a more centralized security configuration and posture management, as well as more automated enforcement of security practices at no additional cost. This allows OCI customers to enhance overall security without requiring additional manual effort, as is the case with AWS, Azure, and GCP.”

A key take-away for me is that sometimes the competitive edge in security is delivered through simplicity and ease of use. We've heard over and over for several years that complexity is the enemy of security. If we can remove human error, bake in security by default, and automate security wherever possible, then the system will be more secure than if we're relying on human effort to properly configure and maintain the system and its security.

Click here to check out the post and the Dao Research whitepaper.


Identity Praxis, Inc.

The Data Confidence Index, 2019: How confident people around the world are that they can protect their privacy

Back in 2019, Datum Future and GlobalWebIndex released a study, The Data Confidence Index. This study is as relevant today as when it was released. According to Catherine Mayer of Datum Future, with this study they set out to explore the differences between people’s privacy attitudes and their privacy protection behaviors.

The Data Confidence Index is a measure of expressed privacy concerns against online privacy behaviours

When calculating the data confidence index, the researchers assessed people’s attitudes toward data usage and safety across three privacy concerns: 1) concerns related to how companies use data, 2) concerns about the Internet eroding privacy, and 3) the desire for anonymity online. The authors then indexed these privacy concerns against five privacy protection behaviors: 1) deleting cookies, 2) using a private browser, 3) using an ad-blocker, 4) using a VPN to stay anonymous online, and 5) using a VPN to hide browser activity.

The Findings

The following figure shows the index across 41 countries, with 0 representing the people with the most confidence in their ability to engage in privacy-protecting behaviors that are in line with their privacy concerns, and 2.0 representing those with the least confidence to do so; the study suggests that the Taiwanese are the least confident and the Swedes are the most confident about their data.

Discussion

This is a valuable study. It helps us better understand people’s privacy concerns in relation to their behaviors, and it can be useful for understanding market personas and for targeting specific countries with privacy-enhancing technologies.

Study limitations

The study, however, does have its limitations. The authors point out that individual cultures, demographics, and experiences influence people’s privacy attitudes and behaviors. Moreover, attitudes towards privacy and related behaviors are impacted by a country’s collectivist or individualist orientation, with individualist cultures expecting individuals to take more responsibility for themselves, while collectivist cultures look to society, e.g. government, to do so. In addition, the level of Internet penetration in a country and the extent to which digital behaviors are more tightly aligned to social and media behaviors vs. lifestyle activities, like communications and paying the bills, are important to consider. All these factors may play a role in privacy attitudes and behaviors but are not taken into consideration by the researchers in this study.

Other privacy protection behaviors to consider

In addition to the limitations above, I’d like to add one of my own. The study only looks at five privacy protection behaviors; however, there are many other considerations and behaviors that users can and should engage in if they truly want to take control of their data and privacy.

A summary of just a few of the leading privacy-protecting behaviors, including those above, follows:

Using an independent cross-platform password manager, and most importantly not reusing usernames and passwords across accounts, or sharing passwords
Sharing an alias name, or providing false data, when appropriate
Limiting information sharing on social media and in other apps
Employing identity aliasing tools, such as email, credit card, and phone number aliasing solutions
Installing a tracker manager (aka ad-blocker)
Turning on and using MFA (multi-factor authentication) on accounts
Activating and keeping current anti-virus on computers and mobile devices, to block viruses, malware, ransomware, and phishing
Installing and using anti-keylogger software on your computer
Installing and using webcam blocker software, or taking the easier route and using a webcam cover
Using a private, “safe” browser (NOT incognito mode)
Enabling mobile SIM card protection with a PIN code
Installing and keeping current an Internet network monitoring solution
Using a VPN whenever connected to a public WiFi network
Monitoring credit scores and accounts
Subscribing to a darknet monitoring service
Using services like Apple Pay and Google Pay
Communicating through encrypted messaging or email apps
Having identity insurance and access to fraud restoration services
Exercising regulatory rights and asking companies for a copy of their data
Being wary of phishing and other common fraud, and never giving out sensitive information
Reading terms & conditions and privacy policies before signing up for a service or downloading and installing apps
Using anonymous payment methods
Deleting or cancelling unused social media and other accounts
Cancelling or not going through with a transaction if the site or vendor does not “feel” trustworthy
Changing default privacy settings on a device or application
Validating tax information on file at www.ssa.gov
Keeping an eye out for the emergence of personal information management solutions (more on this later in future articles)

These are just a handful of the many behaviors that people can engage in to safely and securely navigate the Internet and build their data confidence and peace-of-mind.

In summary, there is so much more we can do. Looking at the data confidence index, only 2 of 41 countries are generally confident; in the vast majority of countries, people have moderate to high concerns.

Study Methodology

The study is based on GlobalWebIndex’s data from the Q1 – Q4 2018 research waves among a sample of 391,130 internet users aged 16 – 64 across 41 markets in which GlobalWebIndex tracks online privacy behaviors.

References

The Data Confidence Index Report (pp. 1–35). (2019). Datum Future. https://www.datumfuture.org/wp-content/uploads/2019/09/Data-Confidence-Index-Datum-Future-and-GWI-2019.pdf

Sunday, 23. August 2020

Doc Searls Weblog

Bet on obsolescence

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every is a was. And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for itself in an advertising business that depends on all of that—while also trying to correct, while they are doing what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.


Boris Mann's Blog

Montecristo Magazine talks about East Van alleys filled with oregano and other Mediterranean herbs. And yes, 6ft tall rosemary bushes are definitely common too.


The Tyee’s story of how the Himalayan blackberry came to North America is really interesting. Attached are pictures of three different kinds I took on Bowen.


I’m a fan of the Ghost blog in part because you can easily run it with one-click “Deploy to Heroku”.

Mike Haynes documented how to set Ghost up to support microblogging with title-less asides.


Made my first salt-baked salmon tonight. Full step-by-step photos on @ATBRecipes. Definitely want to try more salt-baked things ;)

Saturday, 22. August 2020

Boris Mann's Blog

Zettlr, an #opensource desktop markdown editor. Zettelkasten support, tags, and more.

via @brianwisti

Friday, 21. August 2020

Simon Willison

California Protected Areas Database in Datasette

California Protected Areas Database in Datasette

I built this yesterday: it's a Datasette interface on top of the CPAD 2020 GIS database of protected areas in California maintained by GreenInfo Network. This was a useful excuse to build a GitHub Actions flow that builds a SpatiaLite database using my shapefile-to-sqlite tool, and I fixed a few bugs in my datasette-leaflet-geojson plugin as well.

Via calands-datasette on GitHub


Quoting Adrienne Lowe

Why weekly? You want to keep your finger on the pulse of what’s really going on. When 1:1s are scheduled bi-weekly, and either of you have to cancel, you’ll likely be going a month between conversations and that is far too long to go without having a 1:1 with your direct report. Think of how much happens in a month. You don’t want to be that far behind!

Adrienne Lowe


Mike Jones: self-issued

OAuth 2.0 JWT Secured Authorization Request (JAR) sent to the RFC Editor

Congratulations to Nat Sakimura and John Bradley for progressing the OAuth 2.0 JWT Secured Authorization Request (JAR) specification from the working group through the IESG to the RFC Editor. This specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and intentionally does so without introducing breaking changes.
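To make the mechanics concrete, here is a minimal, illustrative sketch of a JWT-secured authorization request built with the PyJWT library. It is not taken from the specification's examples; the client_id, URLs, and key file are invented, and a real deployment would use whatever algorithms and metadata the client registered with its authorization server.

    # Illustrative sketch only: building a signed "request object" JWT whose
    # claims carry the authorization request parameters, then passing it to
    # the authorization endpoint via the `request` parameter.
    # All identifiers, URLs, and the key file below are made up.
    import jwt  # PyJWT

    with open("client_private_key.pem") as f:  # hypothetical signing key
        private_key = f.read()

    request_object = jwt.encode(
        {
            "iss": "s6BhdRkqt3",                  # the client is the issuer
            "aud": "https://server.example.com",  # the authorization server
            "response_type": "code",
            "client_id": "s6BhdRkqt3",
            "redirect_uri": "https://client.example.org/cb",
            "scope": "openid profile",
            "state": "af0ifjsldkj",
        },
        private_key,
        algorithm="RS256",
    )

    # The signed JWT travels as the `request` parameter (or is hosted and
    # referenced with `request_uri`).
    authorize_url = (
        "https://server.example.com/authorize"
        "?client_id=s6BhdRkqt3&response_type=code&request=" + request_object
    )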

This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-28

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-28.html

Again, congratulations to Nat and John and the OAuth Working Group for this achievement!


Simon Willison

Weeknotes: Rocky Beaches, Datasette 0.48, a commit history of my database

This week I helped Natalie launch Rocky Beaches, shipped Datasette 0.48 and several releases of datasette-graphql, upgraded the CSRF protection for datasette-upload-csvs and figured out how to get a commit log of changes to my blog by backing up its database to a GitHub repository.

Rocky Beaches

Natalie released the first version of rockybeaches.com this week. It's a site that helps you find places to go tidepooling (known as rockpooling in the UK) and figure out the best times to go based on low tide times.

I helped out with the backend for the site, mainly as an excuse to further explore the idea of using Datasette to power full websites (previously explored with Niche Museums and my TILs).

The site uses a pattern I've been really enjoying: it's essentially a static dynamic site. Pages are dynamically rendered by Datasette using Jinja templates and a SQLite database, but the database itself is treated as a static asset: it's built at deploy time by this GitHub Actions workflow and deployed (currently to Vercel) as a binary asset along with the code.

The build script uses yaml-to-sqlite to load two YAML files - places.yml and stations.yml - and create the stations and places database tables.
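For readers curious what that step looks like, here is a rough equivalent in Python using PyYAML and the sqlite-utils library directly rather than the yaml-to-sqlite CLI; the file and table names come from the post, while the database filename, primary key, and record shape are assumptions for illustration.

    # Rough equivalent of the yaml-to-sqlite build step, written directly
    # against PyYAML and sqlite-utils. Assumes each YAML file contains a
    # list of dicts with an "id" key; adjust pk/columns to the real data.
    import yaml
    import sqlite_utils

    db = sqlite_utils.Database("data.db")  # assumed database filename

    for filename, table in (("places.yml", "places"), ("stations.yml", "stations")):
        with open(filename) as f:
            records = yaml.safe_load(f)
        db[table].insert_all(records, pk="id", replace=True)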

It then runs two custom Python scripts to fetch relevant data for those places from iNaturalist and the NOAA Tides & Currents API.

The data all ends up in the Datasette instance that powers the site - you can browse it at www.rockybeaches.com/data or interact with it using GraphQL API at www.rockybeaches.com/graphql

The code is a little convoluted at the moment - I'm still iterating towards the best patterns for building websites like this using Datasette - but I'm very pleased with the productivity and performance that this approach produced.

Datasette 0.48

Highlights from Datasette 0.48 release notes:

Datasette documentation now lives at docs.datasette.io
The extra_template_vars, extra_css_urls, extra_js_urls and extra_body_script plugin hooks now all accept the same arguments. See extra_template_vars(template, database, table, columns, view_name, request, datasette) for details. (#939)
Those hooks now accept a new columns argument detailing the table columns that will be rendered on that page. (#938)

I released a new version of datasette-cluster-map that takes advantage of the new columns argument to only inject Leaflet maps JavaScript onto the page if the table being rendered includes latitude and longitude columns - previously the plugin would load extra code on pages that weren't going to render a map at all. That's now running on https://global-power-plants.datasettes.com/.
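As a sketch of what that looks like from a plugin author's point of view, here is a minimal hook implementation in the spirit of the datasette-cluster-map change, using the new columns argument; the plugin name and JavaScript path are placeholders, not the real plugin's assets.

    # Minimal sketch of a Datasette plugin hook that uses the new `columns`
    # argument (Datasette 0.48+) to decide whether to load extra JavaScript.
    # The static file path is a placeholder for illustration.
    from datasette import hookimpl


    @hookimpl
    def extra_js_urls(template, database, table, columns, view_name, request, datasette):
        # Only inject the map JavaScript when the page will render a table
        # that actually has latitude and longitude columns.
        if columns and {"latitude", "longitude"}.issubset(columns):
            return ["/-/static-plugins/example_map_plugin/map.js"]
        return []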

datasette-graphql

Using datasette-graphql for Rocky Beaches inspired me to add two new features:

A new graphql() Jinja custom template function that lets you execute custom GraphQL queries inside a Datasette template page - which turns out to be a pretty elegant way for the template to load exactly the data that it needs in order to render the page. Here's how Rocky Beaches uses that. Issue 50.
Some of the iNaturalist data that Rocky Beaches uses is stored as JSON data in text columns in SQLite - mainly because I was too lazy to model it out as tables. This was coming out of the GraphQL API as strings-containing-JSON, so I added a json_columns plugin configuration mechanism for turning those into Graphene GenericScalar fields - see issue 53 for details.

I also landed a big performance improvement. The plugin works by introspecting the database and generating a GraphQL schema that represents those tables, columns and views. For databases with a lot of tables this can get expensive, and the introspection was being run on every request.

I didn't want to require a server restart any time the schema changed, so I didn't want to cache the schema in-memory. Ideally it would be cached but the cache would become invalid any time the schema itself changed.

It turns out SQLite has a mechanism for this: the PRAGMA schema_version statement, which returns an integer version number that changes any time the underlying schema is changed (e.g. a table is added or modified).

I built a quick datasette-schema-versions plugin to try this feature out (in less than twenty minutes thanks to my datasette-plugin cookiecutter template) and prove to myself that it works. Then I built a caching mechanism for datasette-graphql that uses the current schema_version as the cache key. See issue 51 for details.
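Here is a small standalone sketch of that caching idea using Python's sqlite3 module; it is not the plugin's actual code, just the pattern of keying a cache on PRAGMA schema_version so that expensive introspection only reruns when the schema changes.

    # Standalone sketch of the schema_version caching pattern described above.
    import sqlite3

    _cache = {}  # db_path -> (schema_version, expensively_built_object)


    def get_cached_schema(db_path, build_schema):
        conn = sqlite3.connect(db_path)
        try:
            version = conn.execute("PRAGMA schema_version").fetchone()[0]
            cached = _cache.get(db_path)
            if cached is None or cached[0] != version:
                # Schema changed (or first run): rebuild and cache it.
                cached = (version, build_schema(conn))
                _cache[db_path] = cached
            return cached[1]
        finally:
            conn.close()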

asgi-csrf and datasette-upload-csvs

datasette-upload-csvs is a Datasette plugin that adds a form for uploading CSV files and converting them to SQLite tables.

Datasette 0.44 added CSRF protection, which broke the plugin. I fixed that this week, but it took some extra work because file uploads use the multipart/form-data HTTP mechanism and my asgi-csrf library didn't support that.

I fixed that this week, but the code was quite complicated. Since asgi-csrf is a security library I decided to aim for 100% code coverage, the first time I've done that for one of my projects.

I got there with the help of codecov.io and pytest-cov. I wrote up what I learned about those tools in a TIL.
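For reference, one way to enforce that 100% goal locally looks roughly like this; it is an illustration of pytest-cov usage rather than the project's actual CI configuration.

    # Illustrative only: run the test suite under coverage and fail the run
    # if coverage for the asgi_csrf package drops below 100%.
    import sys
    import pytest

    sys.exit(pytest.main([
        "--cov=asgi_csrf",            # measure coverage of the library
        "--cov-report=term-missing",  # list any uncovered lines
        "--cov-fail-under=100",       # enforce the 100% target
    ]))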

Backing up my blog database to a GitHub repository

I really like keeping content in a git repository (see Rocky Beaches and Niche Museums). Every content management system I've ever worked on has eventually desired revision control, and modeling that in a database and adding it to an existing project is always a huge pain.

I have 18 years of content on this blog. I want that backed up to git - and this week I realized I have the tools to do that already.

db-to-sqlite is my tool for taking any SQL Alchemy supported database (so far tested with MySQL and PostgreSQL) and exporting it into a SQLite database.

sqlite-diffable is a very early stage tool I built last year. The idea is to dump a SQLite database out to disk in a way that is designed to work well with git diffs. Each table is dumped out as newline-delimited JSON, one row per line.
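To show the general shape of that output, here is a toy Python sketch that writes one JSON object per row; the real sqlite-diffable tool's exact file layout and metadata may differ, and the table and file names here are made up.

    # Toy sketch of the "one JSON row per line" idea behind sqlite-diffable.
    import json
    import sqlite3


    def dump_table(db_path, table, out_path):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row  # rows behave like dicts
        with open(out_path, "w") as f:
            for row in conn.execute(f"select * from [{table}]"):
                f.write(json.dumps(dict(row), default=str) + "\n")
        conn.close()


    dump_table("blog.db", "blog_entry", "blog_entry.ndjson")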

So... how about converting my blog's PostgreSQL database to SQLite, then dumping it to disk with sqlite-diffable and committing the result to a git repository? And then running that in a GitHub Action?

Here's the workflow. It does exactly that, with a few extra steps: it only grabs a subset of my tables, and it redacts the password column from my auth_user table so that my hashed password isn't exposed in the backup.

I now have a commit log of changes to my blog's database!

I've set it to run nightly, but I can trigger it manually by clicking a button too.

TIL this week
Pointing a custom subdomain at Read the Docs
Code coverage using pytest and codecov.io
Read the Docs Search API
Programatically accessing Heroku PostgreSQL from GitHub Actions
Finding the largest SQLite files on a Mac
Using grep to write tests in CI

Releases this week
datasette-graphql 0.14 - 2020-08-20
datasette-graphql 0.13 - 2020-08-19
datasette-schema-versions 0.1 - 2020-08-19
datasette-graphql 0.12.3 - 2020-08-19
github-to-sqlite 2.5 - 2020-08-18
datasette-publish-vercel 0.8 - 2020-08-17
datasette-cluster-map 0.12 - 2020-08-16
datasette 0.48 - 2020-08-16
datasette-graphql 0.12.2 - 2020-08-16
datasette-saved-queries 0.2.1 - 2020-08-15
datasette 0.47.3 - 2020-08-15
datasette-upload-csvs 0.5 - 2020-08-15
asgi-csrf 0.7 - 2020-08-15
asgi-csrf 0.7a0 - 2020-08-15
asgi-csrf 0.7a0 - 2020-08-15
datasette-cluster-map 0.11.1 - 2020-08-14
datasette-cluster-map 0.11 - 2020-08-14
datasette-graphql 0.12.1 - 2020-08-13

Thursday, 20. August 2020

Virtual Democracy

Open science badges are coming

Badges give your cultural norms footholds for members to learn and practice “A ‘badge’ is a symbol or indicator of an accomplishment, skill, quality or interest. From the Boy and Girl Scouts, to PADI diving instruction, to the more recently popular geo-location game, Foursquare, badges have been successfully used to set goals, motivate behaviors, represent achievements … Continue reading Open science badges are coming

Boris Mann's Blog

Made it into Columbus Meats for the first time in a long time. Their prepared roasts and other items are SO GOOD — although I didn’t buy any this time.


Identity Praxis, Inc.

Act now, Aug. 21, 2020 deadline! Opportunity to generate international awareness for your SSI company and the industry category

The International Association of Privacy Professionals (IAPP) has an open call for submissions for their IAPP Privacy Tech Vendor Report. The deadline for submission for the next report is Friday, August 21, 2020. You have 2 days left. Here are the guidelines and the submission form for you to submit your entry for the report.

IAPP has yet to recognize self-sovereign identity (SSI) as an independent category for privacy management.

Don’t worry; the submission process is easy; you just have to fill out a few fields. It should only take a few minutes.

Company name
Company website
New or Existing Vendor
Location
Number of employees
Founded Year
Leadership (top executives, with full names and titles)
How is your organization funded?
Privately held or Publicly traded
Best fit category (Activity monitoring, Assessment manager, Consent management, Data discovery, Data mapping, Data Subject Requests, De-identification, Enterprise communications, Incident response, Information privacy manager, or Website scanning)
100-word description
Logo and screenshot of tech solution
Contact name
Contact email address

About the IAPP

According to the IAPP “The International Association of Privacy Professionals (IAPP) is a resource for professionals who want to develop and advance their careers by helping their organizations successfully manage these risks and protect their data. In fact, we’re the world’s largest and most comprehensive global information privacy community.”

Why SSI Players Should Submit

The surveillance economy has gotten out of hand.

It is time to help privacy professionals and the establishment recognize that they can help their organizations successfully manage risks and protect data by putting identity and personal information and controls in the hands of the people, the data subjects, from which the data (inc. attribute data, labor data, and capital data) are derived.

The risks of identity and personal data mismanagement for organizations and individuals alike are no longer just a simple annoyance; the mismanagement and ineffective stewardship of identity and personal data can lead to meaningful lost opportunity and lasting emotional, financial, social, professional, reputational, and physical harm for everyone.

SSI solutions (in all their glory) are part of the answer: SSI players have the power to help organizations reduce risk and to help people securely and safely collect, manage, gain insight from, and exchange their identities and personal information.

Empowering people with privacy is important. Privacy is the ability for an individual to control the who, what, when, where, for how long, and for what purpose an entity can access and use their identity or personal information.

Most privacy tech does not help with this, SSI can.

Individuals should have agency over their identity and personal information.

Let’s get recognized by the IAPP and the industry at-large

Since the IAPP does not have an SSI category, I’d recommend that SSI players use the “Consent Management” and any other category they deem relevant to their product when making their submission. Remember, in your description, be sure to refer to your solution as being SSI related so as to help the IAPP connect the dots.

I have reached out to the IAPP and will be working with them to help them understand the importance of SSI “privacy technologies” and to recognize SSI as an independent category in future reports.

Please, be sure to submit your solution to this year’s report. If we get enough submissions, there is a chance that we can persuade the IAPP to carve out a specific SSI section in the next or future reports.

Wednesday, 19. August 2020

Simon Willison

Announcing the Consortium for Python Data API Standards

Announcing the Consortium for Python Data API Standards

Interesting effort to unify the fragmented DataFrame API ecosystem, where increasing numbers of libraries offer APIs inspired by Pandas that imitate each other but aren't 100% compatible. The announcement includes some very clever code to support the effort: custom tooling to compare the existing APIs, and an ingenious GitHub Actions setup to run traces (via sys.settrace), derive type signatures and commit those generated signatures back to a repository.

Via @ralfgommers
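As a toy illustration of the sys.settrace approach mentioned above (and not the Consortium's actual tooling), the sketch below records the argument types each function is called with, which is the raw material for deriving rough signatures.

    # Toy sketch: use sys.settrace to observe the argument types functions
    # are actually called with at runtime.
    import sys
    from collections import defaultdict

    observed = defaultdict(set)  # function name -> set of argument-type tuples


    def tracer(frame, event, arg):
        if event == "call":
            code = frame.f_code
            names = code.co_varnames[: code.co_argcount]
            types = tuple(type(frame.f_locals[n]).__name__ for n in names)
            observed[code.co_name].add(types)
        return None  # no per-line tracing needed


    def add(a, b):
        return a + b


    sys.settrace(tracer)
    add(1, 2)
    add("x", "y")
    sys.settrace(None)

    print(dict(observed))  # {'add': {('int', 'int'), ('str', 'str')}}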

Tuesday, 18. August 2020

Discovering Identity

test post


Monday, 17. August 2020

Doc Searls Weblog

A side view of the Ranch 2 Fire

What you see there is a flammagenitus cloud rising to the north above Ranch 2, a wildfire about fifteen miles east of here in the San Gabriel Mountains, just north of Azusa (one of too many towns to remember, in greater Los Angeles). If the video works, you’ll see how the clouds give shape to the heat from the fire, even as smoke (darker and with a gray/magenta color), stopping at a lower elevation, spreads northward in the same direction.

The fire was caused by arson, says the guy who confessed to starting it.

It’s interesting to see how much reporting on fires has changed in the time I’ve been following stories like this in Southern California. Inciweb, the canonical close-to-live running catalog of wildfires in the U.S., has moved from the .org where it started to NWCG.gov, the National Wildfire Coordinating Group. When I first wrote about Inciweb, back in ’06, I didn’t mention that it was entirely the heroic work of one Linux hacker at the Forest Service who didn’t wish to be identified.

Anyway, if you want to catch up on the Ranch 2 Fire, one of too many wildfires clouding the western skies right now, here’s the Twitter search for the latest.

 


FACILELOGIN

A Maturity Model for Customer IAM

The main objective of Customer IAM (CIAM) is to drive the revenue growth by leveraging identity data to acquire and retain customers. It…

Continue reading on FACILELOGIN »


Identity Praxis, Inc.

There is “unity” in humanity — introducing Humanity Power

It has been quite some time since I’ve been inspired to write. This is my first Medium story. What finally broke through the noise and my inertia was the launch of Humanity Power, an initiative released today by my friend Charisse.

The Humanity Power message:
There is “unity” within humanity.

Understanding the Humanity Power Message

What the message of Humanity Power teaches me is that through an understanding and respect for our humanity we can all come together regardless of ethnicity, age, skin color, religion, ability, sexual orientation, gender, class, social status, or national origin, to achieve greatness. In other words, it is through unity, and the mindful eradication of the “isms” in our lives — racism, sexism, ableism, ageism, colorism, classism, etc. — that we can be of service to each other, to ourselves, and strive to achieve and maintain a world and a life filled with achievement, rich experiences and lessons, and most importantly gratitude and joy.

The concept of Humanity Power is hugely relevant to my passions and area of commercial and academic focus — the empowerment of people so that they can achieve digital sovereignty; so that they can achieve agency of their phygital self.

The message of Humanity Power is essential; understanding and embracing it is an important step toward our achieving digital sovereignty.

I invite you all to join me in spreading the Humanity Power message. I encourage you to think about the concept. Share it. Write about it. And buy Humanity Power t-shirts for yourself or as a gift for your family members, friends, and colleagues.

Humanity Power t-shirts are available at www.humanitypower.co for $25.00 + shipping. They are super comfy black tees with “unity” depicted in white, teal, orange, or rainbow text.

Let’s all work together to spread the message of unity through our humanity.

Thanks!


Rebecca Rachmany

And the fact that even if you ask for a ballot early, some states won’t send it out until…

And the fact that even if you ask for a ballot early, some states won’t send it out until mid-September for overseas voters.

Sunday, 16. August 2020

Boris Mann's Blog

Biked around Stanley Park, using the new bike lanes on the road. Relaxing and an awesome ride.

Photos are bike & chestnut tree at Brockton Point, and view to North Shore.


Success with the lactic fermented tomato 🍅 #pickling experiment! Less garlic next time.

Saturday, 15. August 2020

Timothy Ruff

Introducing Self-Sovereign Student ID (Part 2 of 2)

Introducing Self-Sovereign Student ID
Part 2 of 2: ID Is Only the Beginning.
Achievements, Skills, & Competencies

For many working with SSI and VCs, exchanging achievements is the top-of-mind use case. By achievements, I mean any kind: diplomas, degrees, certificates, skills, skill shapes, competencies, badges, milestones, grades, awards, commendations, micro-credentials, incremental achievements, and others.

Students will eventually share their achievements in pursuit of a job, but they also may want to transfer between schools, or reverse transfer credits from their current school to a former one. SSI and VCs are the ideal means of receiving achievements in a form where they can be readily shared again, and instantly trustable without a manual verification process.

But unlike student ID, broadly useful achievements exchange among schools and employers not only requires them to become capable of issuing, holding, and verifying VCs, it also requires them to come to agreement about how the data payload should be arranged. This will happen, but it’s gonna take awhile. Thankfully, there is significant and growing momentum toward precisely that.

For example, serious efforts are underway at the T3 Innovation Network, within the U.S. Chamber of Commerce, in developing Learning and Employment Records, or LERs. LERs are powered by the same VC standards and technologies that enable self-sovereign student ID, with the same issue, hold, verify model to/from an SSI wallet, which they call a “learner wallet” for simplicity. A learner wallet is the same as an SSI wallet, with one important addition: a learner wallet includes in its scope the capability for a student to store some VCs in a cloud-based container with a custodian, in place of or in addition to a personally held wallet, and retain self-sovereign control over them. This is useful with large data sets, infrequently used credentials, and as a backup, and is offered by the likes of ASU’s Trusted Learner Network and the Velocity Network Foundation.

An impressive piece was recently released that everyone interested in interoperable achievements should read, whether in the U.S. or abroad: Applying Self-sovereign Identity Principles To Interoperable Learning Records. The lead author of that piece, Kim Hamilton Duffy, also leads a group called the Digital Credentials Consortium (DCC). DCC includes 14 intrepid schools, including the likes of MIT and Harvard, developing interoperability of achievements that are literally carried by the achievers. They also see VCs as the basis for this interoperability, and are making exciting progress.

My conclusion: VCs are where “the puck is headed” for broad, even global academic interoperability; they are the containers referred to in these documents that can securely transport the achievement or LER “payload” between issuers and verifiers, via the achiever herself.

By using this same VC technology for student ID, a school does three critical things to lay the foundation for later exchanging achievements:

It puts the tools necessary for exchanging achievements into schools’ and students’ hands.
It gets schools familiar with working with VCs: issuing, verifying, and managing.
It gets students familiar with using an SSI wallet: making connections, receiving VCs, sharing VCs, communicating, giving consent, etc.

After self-sovereign student ID is in place, issuing an achievement or LER to a student means simply clicking a different button (or two).

The “Digital Experience”

Education is increasingly engaged in digital transformation, from enrollment to instruction to achievement and employment. Through all the schools, programs and other experiences you might have, there is one thing that’s constant: you. You are the ideal courier of your own data whenever it’s useful to prove or qualify for something, if only you could receive your data, have a way to hold it and present it, and it was verifiable when presented. That is precisely what this technology does.

When you realize that self-sovereign student ID is simply a school-issued digital VC held inside a secure wallet capable of also storing verifiable achievements, and that wallet ideally belongs to the student and not the school, it becomes clear how it can become foundational to a lifetime digital learning experience for that learner. In this context, the “Digital Student ID” becomes a part of the digital experience rather than the whole of it.

This also ties into the future of work, where lifelong achievements can be accumulated by the student and later used to prove skills and competencies to prospective employers at a granular level, with the power of selective disclosure enabling strong privacy to avoid oversharing.

Taken together, this is direct application of the “Internet of Education” from the Learning Economy Foundation, a vision that is now feasible and with which self-sovereign student ID is aligned.

Privileges, Perks, & Freebies

Unlike typical digital credentials or even digital student ID, with self-sovereign student ID students can prove their ID and status anywhere, not just in school-approved or school-connected systems. This independence opens up the entire internet and the world itself, to embrace your school’s students as verifiably what they claim to be, and give them whatever benefits and privileges that status might conceivably afford. This could mean, at any non-school organization, online or off:

Formless onboarding & passwordless authentication at websites
Freebies, discounts and special deals anywhere
Access to students-only facilities and events from multiple schools
Access to special loans, grants, scholarships and more

Intuitively, the more benefits you can arrange for your students, the more they will want to become your students; with self-sovereign student ID, you can unlock more benefits than ever before.

Communication & Interaction

This category of capabilities is often overlooked in SSI, but I believe it could become the most used and beneficial class of capabilities that self-sovereign student ID enables. If you think about how much time we spend communicating versus how much time we spend authenticating, you’ll get where I’m coming from.

Before issuing a VC to a student, a direct connection must be established between the student’s chosen wallet and the school. This connection is unlike other connections the school may have with the student, and unlike the connections people have with each other; it is peer-to-peer, private, and encrypted end-to-end.

This connection between school and student isn’t ephemeral; it persists until either side breaks it, even for alumni who’ve long since left the school (useful for helping keep track of grads for annual IPEDS and other accreditation reporting). It is a new, private, digital relationship between school and student that enables interactions of many forms: messages, phone calls, video calls, file exchange, playing games, taking polls, voting, gathering permission or consent (digitally signed by the student), granting the school’s consent (digitally signed by the school), and more.

A bit like email, both your school and the student can use different services to help with your/their end of the connection. And these services are substitutable; there is no single vendor in the middle that is connecting both sides, as there is with popular messaging services today. If there were, then self-sovereign independence is lost and most of the benefits listed in this article along with it, replaced with dependence on that intermediary.

Using this capability, schools could do away with proprietary messaging systems they’ve installed to ensure FERPA-protected data, for example, is not shared incorrectly, and instead use a standards-based system that comes for free with self-sovereign student ID.

This communication channel must be respected and not overused, because either side can choose to break it; it’s not like email or a phone number where the other party can simply resume sending messages from another address or device. Reconnection can happen at any time, but both parties must opt in. I particularly love this aspect of SSI, because it is the beginning of the end to spam and phishing, and encourages considerate communications behavior on all sides.

Preventing Fraud & Phishing

Once issued to the student by the school, self-sovereign student ID helps prevent student-related fraud, including with student aid programs the student may apply for with outside organizations, such as government, scholarship programs, and others. Once these organizations realize they can receive cryptographic proof directly from the student, they can lessen their reliance on passwords, social security numbers, and other personal information, bringing us closer to a world devoid of identity theft, where having someone’s personal information — even their passwords — is no longer sufficient to impersonate them.

When a student applies and presents their VCs for verification, the benefits offeror, such as FAFSA in the U.S., can instantly and digitally verify, either remotely or in person, the student’s ID and status as a student, even when the organization isn’t connected to the school’s systems. Eventually, as VCs become more prevalent and the student acquires more VCs as they progress in their learner journey, they’ll be able to prove things like their citizenship or visitor status, high school diploma, GED, academic progress, and more, further preventing fraud and accelerating the process of applying for student aid.

Of course this use case requires the benefits offeror to gain the ability to verify VCs, which they could do tomorrow, but in reality may take awhile.

Phishing attempts to impersonate the school in communications with the student can also be detected and prevented, by sending school communications through the private SSI connection or by using it to validate communications sent via other means. And the school isn’t the only one fraudsters may try to impersonate: faculty, staff, tutors, proctors, authorized partners, service providers and more can be strongly and mutually authenticated by using this same capability.

Why Not Embed An SSI Wallet Into Your School’s Existing App?

We hear questions about “embedded wallets” a lot, and for good reason: your school has worked hard to get your official app into as many hands as possible, so adding functionality to it makes sense, whereas asking students to get another ‘app’ — even though an SSI wallet isn’t really an app — seems almost a non-starter.

Well, if a self-sovereign ‘wallet’ were just another app, and intended solely for interacting with your school, this sentiment would make perfect sense. But it isn’t, so it doesn’t, at least in the longer term. But it might in the short term.

We should unpack that a bit.

‘Wallet’ is a woefully inadequate term for what SSI is and does for a person; it is useful because it is an easy to understand metaphor for the most basic building blocks of SSI, but it is ultimately misleading, like mistaking the trunk of an elephant for a snake. SSI is more like a self-sovereign cockpit for consolidating all your relationships, not just your academic ones, and certainly not just one school. SSI consolidates, under your ultimate control, your connections, communications, interactions, agreements, preferences and data, even data not in your physical possession like medical data, which might be best physically stored with a healthcare provider or other custodian. Leaving all that in bespoke, embedded wallets from each provider brings you right back to the status quo, with your relationships, interactions, and data spread out and under the ultimate control of third parties, with all that entails: vendor lock-in; privacy, security, and compliance issues; ID theft; surveillance capitalism; duplicate relationships and data; etc.

Microsoft, Mastercard, IBM, Samsung, the US Credit Union industry and hundreds of others globally are now developing SSI/VC tech for use in many industries, so your school will soon not be the only entity offering SSI-powered benefits to your students, faculty and staff. Imagine if every organization embedded wallets into their own apps rather than working with an external one, or if every payment, ID, and loyalty card you carried required its own separate physical wallet… people would begin to get annoyed, to say the least, and prefer schools and organizations that made life easier, not harder.

All that said, an embedded wallet could be a reasonable tradeoff early on, when SSI is new and its first uses for your students may be limited to your school. So if you jump on self-sovereign student ID quickly as an early adopter, you could embed SSI/VC/wallet tech into your existing app, foregoing self-sovereignty for now without too much of a tradeoff, and still gain several of the key benefits mentioned. Then, as students, faculty, and staff begin to receive SSI connection requests and VC offers from their other relationships in life, and they start wanting to consolidate things, you can make moves toward greater self-sovereignty with less of a dilemma, counting on SSI’s standards-enabled portability.

What’s Ready Now?

What’s Ready

Code, Products, & Services — Open source code; VC-oriented products from Microsoft, Workday, IBM, and dozens of startups.
Compatibility With Existing Federated ID — CAS, Okta, Ping, ForgeRock, etc. for connecting with SAML, OAuth, OIDC and other federation protocols for passwordless login, KBA-free call-in, and cardless walk-in authentication.
Standards Work — W3C, Trust over IP Foundation, DIF
Custodial Solutions — Trusted Learner Network, Velocity Network Foundation
Broad Consensus About VCs — The Verifiable Credential is the only container I’m consistently seeing under consideration for transporting verifiable data between trust domains, which self-sovereign control and trust require, from academia to healthcare to finance and beyond.
Broad Consensus About Individual Control of Data — From academia to healthcare to Europe’s GDPR and the current disdain for big tech and surveillance capitalism, I see broad consensus that control over data must move more and more into the hands of individuals, even data not in their physical possession.
Momentum — Years of global open-source development and standards work for SSI; orgs large and small in many industries are actively participating in developing VC code, standards, use cases and business models; strong support from the T3 Innovation Network in the U.S. Chamber of Commerce.

What’s Not Ready (Yet)

User Experience — The SSI space knows the basics — issue, hold, and verify VCs — but does not yet have the UX figured out. Honestly, the existing SSI wallets I’ve seen are all still a bit clunky and confusing (even though it’s still a much better experience than passwords or answering personal questions), but they do work. Usability must be smoothed and complexity hidden, and access for the disabled, older devices, and more, has yet to be addressed.
Interoperability — Today, standards are ahead of implementations. All the players know the importance of interop but haven’t gotten there yet, though there are serious multi-org testing and development efforts underway to get it resolved. I like the alignment of incentives here: any vendor not interoperable with others will be left on its own technology island.
Communications — While these private, peer-to-peer connections can support any kind of communication, so far I’ve not seen anything other than simple messaging.
Passive Authentication — I look forward to the day when I can be authenticated by those I know and trust passively, by policy, by automatically sharing the appropriate proofs when prompted, without touching my device. As far as I know, only active authentication is now offered.
Embedded Agents in Door Access Readers — Another missing element is embedded SSI agents into NFC (or other tap technology) readers, to make door access compatible and performant.
Ancillary & Rainy Day Use Cases — Most new tech must first nail sunny day scenarios before tackling the rainy day ones. For example, VCs could be used for guardian relationships, children, pets, things, and complex organizational hierarchies, but those haven’t been done anywhere that I’m aware of. VCs could work off-line or from a QR code on a card or piece of paper, but no one has gone there yet either, to my knowledge.

Considering what’s ready today, what’s not ready, the long list of benefits for both schools and students, the fraud with existing credentials, and the possibility of eliminating existing costs (see next section), I think it adds up to a compelling case that self-sovereign student ID is ready for piloting.

That said, the pieces that enable self-sovereign student ID are nascent and only recently available; it is a new application of SSI that itself has only been around for about four years, and mostly in the lab and not much in production, though that is changing. Schools considering this in 2020 would be the first, which for those that prefer to lead rather than follow makes for a wonderful opportunity, especially during a pandemic.

Cue the Technology Adoption Lifecycle… welcome, Innovators!

Where to Begin

To get started with self-sovereign student ID, a school needs capabilities to issue and verify VCs, and students need wallets to hold them.

Code for simple issuance tools is available open source, more advanced tools are offered by various SSI service providers, and standards-compliant SSI wallets are available for free in both Apple and Google app stores. Verification is the tricky part, as it requires existing school systems to be adapted to accept VCs for authentication, and later for other purposes¹. Thankfully, some IAM systems commonly found in higher ed are adaptable now:

For schools running CAS, Okta, or any IAM system that facilitates an external Identity Provider, self-sovereign student ID can be integrated relatively painlessly, enabling students to immediately use it in place of passwords, and potentially eliminating the need for a dedicated 2FA provider. For contact centers running Salesforce Service Cloud, the StudentPass product that Credential Master is developing will integrate natively, enabling students to use their VCs to authenticate when calling in, without answering personal questions.

Of course, integrations can be made with any existing system. Better to start a new identity project with self-sovereign student ID, which can begin to consolidate systems and reduce complexity, than build another layer that may well add to it.

In Conclusion

For those interested primarily in achievements, ID is an “and” and not an “or,” and it should come first, as it lays a technical and familiarity foundation for achievements to be issued and quickly useful. Communications could come soon after ID, because it becomes available as soon as the first connection with a student is created.

Achievements will come later, after the necessary consensus among schools and employers has been sorted to establish semantic meaning and interoperability for exchanged data payloads. Then this model — of people receiving, controlling, and sharing their achievements in a digital, verifiable form — will become the norm, and the future of work will become the present.

Until that day, schools can leap right past proprietary digital ID solutions and go straight to self-sovereign, and reap all its benefits without having to wait for anyone else to agree to anything, giving students a modern digital experience they’ll love.

Part 1: Strong, flexible, digital student ID that’s not bound to your campus, network, or vendor.

¹ Down the road, VCs could also be used for authorization, where granular-level permissions and entitlements are carried and presented by individuals, simplifying the administration of centralized policy-based access control and moving enforcement to the edge.

Special thanks to several helpful reviewers, editors, and contributors: John Phillips, Dr. Phil Windley, Phil Long, Scott Perry, Dr. Samuel Smith, Alan Davies, Taylor Kendal, and Matthew Hailstone.


Introducing Self-Sovereign Student ID (Part 1 of 2)

Introducing Self-Sovereign Student ID
Part 1 of 2: Strong, flexible, digital student ID that’s not bound to your campus, network, or vendor.

Full Disclosure: The benefits discussed here apply to self-sovereign student ID generally and are not specific to any vendor or product, and do not result in vendor lock-in; any standards-compliant SSI tools for issuance, holding, and verification could deliver these results. As a partner of venture studio Digital Trust Ventures, I am a co-founder of Credential Master, a startup now developing a self-sovereign student ID product.

TL;DR:
Digital student ID: doesn’t exist yet for most students.
Self-sovereign student ID: students independently store verifiable data, and strongly prove things about themselves anywhere, online or off.
Many uses: strong, passwordless login, KBA-free call in, & cardless walk in; secure, keyless facility and systems access; prove skills, competencies and achievements; secure peer-to-peer communication and interaction; breakthrough fraud prevention; student-controlled privacy; digitally signed consent; and more.
Open standards: not tied to a specific vendor, usable outside the school’s network.
Schools can start with self-sovereign student ID today, without waiting for collaboration with or agreement from other institutions, and without fear of vendor lock-in.

Note: This article assumes a basic understanding of the concepts of self-sovereign identity (SSI), especially W3C Verifiable Credentials (VCs). The technical specifics of how VCs are issued, revoked, held, verified, and trustworthy are covered extensively across the SSI industry. Basic VC mechanics can be found in my other post: How Verifiable Credentials Bridge Trust Domains.

Digital Student ID

For most students, digital student ID still doesn’t exist.

I’ve been asking experts in academia what comes to mind when I say “digital student ID,” and here is what I learned: other than login credentials for a student portal — which don’t count — a digital student ID still doesn’t exist, at least not for most students in most schools.

The first, still obscure attempts at what I would call real digital student ID have cropped up fairly recently, enabling students to prove their identity in multiple environments and granting them access to electronic systems, software, and even physical facilities. Apple has introduced their version of digital student ID that works exclusively with Apple software and devices, and various smaller companies have launched similarly proprietary software platforms with corresponding apps. Search “student ID” in the Apple or Google app stores and you’ll find dozens of similar offerings.

So why hasn’t digital student ID caught on? I think it’s because available offerings are tied to a single vendor, usable only in systems and facilities where that vendor is installed, and verifiable only by that vendor’s app. In Apple’s case it’s tied to their hardware, too. Even a homegrown digital student ID solution can be verified only by an associated homegrown app. It is vendor lock-in to the extreme, even if that ‘vendor’ is the school itself.

For a school to confidently roll out a more broadly useful digital student ID, it must be with technology that can traverse boundaries between vendors, both within the school’s network and external to it, such as when a student applies for aid. Such technology now exists, and it can do a heckuva lot more than ID.

Introducing a powerful new model for student ID: “self-sovereign” student ID.

Self-Sovereign Student ID

You may have heard of self-sovereign identity, or SSI¹. In this article I’ll explore how self-sovereign student ID can apply SSI capabilities and principles for students, faculty, and staff in an educational environment, primarily higher education.

I recognize that the term “self-sovereign” may not resonate as well with some in academia, which is often dominated by institutions preferring to expand their scope of control and influence, and perceiving self-sovereignty as a threat. However, it is the very act of giving greater control and independence to students that yields most of the benefits listed in this article and, counterintuitively, a closer, richer relationship with those students.

I’ll discuss this more in “Why Self-Sovereign” below.

What Is It?

The term “self-sovereign” is unfortunately not self-explanatory, but when properly understood should feel like a familiar analog to physical identity and credentials, which we already use in a self-sovereign manner well beyond the physical and digital bounds of the organization that issued them to us.

In short, self-sovereign student ID gives students the ability to independently and securely store tamper-resistant, verifiable data related to themselves, and to prove things wherever they want by privately sharing cryptographic (mathematical) proofs from that data, online or off. Importantly, it also enables students to securely, directly, and privately communicate and interact with the school.

Technically, the student has self-sovereign control over three² things:

1. A standardized digital container, for holding verifiable data;
2. Peer-to-peer connections between their container and the containers of other people, organizations, and things³;
3. Verifiable Credentials (VCs) that the student accepts into their wallet, and shares proofs of with others when desired.

Today, that self-sovereign container is typically a standardized digital SSI “wallet”⁴ within a compliant app that the student can see and interact with on a smart device; eventually, wallets will be found anywhere digital things are stored and become part of the everyday life of people, organizations, and things, hidden and integrated into our devices and experiences in a way where we no longer notice or think of them.

One important point that will help readers better understand VCs, both in this document and generally: I believe VCs are misnamed. They are not credentials; they are verifiable containers capable of transporting any data payload, which may or may not be what is typically considered a “credential.” This means that VCs held in a wallet are containers within a larger container, but this is a feature, not a bug; it is how physical goods are transported in meatspace and is very useful. I’ve written about these points in greater detail here.

Why Self-Sovereign?

Self-sovereignty implies giving students control over their data and how it’s shared. That may seem counterintuitive to the traditional academic notion of in loco parentis, but the truth is that it not only mirrors how physical student IDs work, it also simplifies the IT integration work while expanding the possible use cases.

Importantly, an SSI-based student ID binds the school-issued (and verifiable) information in the ID to a student’s right to use it. Through this binding, students can prove, at their own discretion, that they are enrolled, taking classes, receiving or received grades (and what they are), received a degree, and so on. Giving these facts to students in digital form can make their lives easier, reducing friction and fraud for both students and the school. ID is only one of many VCs that will be issued to the student by the school for various uses.

And once the student has an SSI wallet, it’s not just the school that can now exchange VCs with them. The student might want to connect and exchange VCs with multiple schools, their church, their gym club, their favorite sandwich shop or other entities they interact with. You might think that sounds like an Apple or Samsung wallet, and it kinda does, except:

- SSI wallets are portable between vendors, devices, and device types (e.g., move from iPhone to Samsung and back again)
- VCs are portable between wallets
- VCs can hold any kind of data (e.g., identity, location, degree, favorite pizza)
- VCs are cryptographically verifiable
- Holders can share only part of a VC, metadata about a VC, or just proof that they have one, without revealing anything else
- The protocols that make wallets and VCs portable and interoperable are open and standardized

And of course, Apple and Android wallets don’t enable persistent, encrypted connections with other people, organizations and things for secure peer-to-peer communication and interaction, but I’m getting ahead of myself…

A Thousand Uses

By “useful” I mean it can be used for many important, new, relevant purposes and also make expected and ordinary uses better.

I’ll organize the usefulness of self-sovereign student ID into six categories:

1. Identity & Access
2. Achievements, Skills, & Competencies
3. The “Digital Experience”
4. Privileges, Perks & Freebies
5. Communication & Interaction
6. Preventing Fraud & Phishing

The first category, Identity & Access, will be covered within this Part 1 and is where I’ll delve the most deeply. I’ll touch on the remaining categories in Part 2.

Note: As mentioned above, it is outside the scope of this article to explain the basic mechanics of SSI and VC exchange, such as how the initial encrypted connections are offered and accepted, how VCs are offered and accepted using those connections, how VC presentation and verification occurs, or how those verifications can be trustworthy. Those subjects are covered amply by many other documents, papers, and websites throughout the SSI industry, and are the reason for the industry’s abundant standards activity. Refer to links at the beginning of this article for more information.

Identity & Access

The primary use of self-sovereign student ID — and the gateway to limitless other uses — is as a digital, and therefore more powerful, version of a student ID card, enabling students to instantly and strongly prove their identity and status as a student, online or off, without passwords.

Password Replacement

Perhaps the simplest starting point for self-sovereign student ID is to replace passwords. Passwords have many known problems with both security and UX, and can be replaced with a quick ping to a student’s smartphone. The student responds with a tap, which cryptographically shares their school-issued ID, which is instantly verifiable by the school.

This is a big step up in both security and user experience over passwords and social login, and it can be used in conjunction with or as a substitute for 2FA.

For schools running CAS, Okta, or any ID system that supports a “custom IDP” or “external authentication handler,” password replacement can be implemented quickly.
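To make that concrete, here is a minimal sketch, in Python, of the decision flow such an external authentication handler might follow. Every function name and the shape of the wallet’s response below are hypothetical placeholders rather than any vendor’s actual API; a real deployment would delegate these steps to a standards-compliant SSI agent and verifier.

# Hypothetical sketch of a passwordless login handler backed by a VC proof.
# None of these helpers correspond to a real SDK; they stand in for calls to
# a standards-compliant SSI agent/verifier.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LoginDecision:
    authenticated: bool
    student_id: Optional[str] = None


def request_proof_via_connection(connection_id: str) -> dict:
    """Hypothetical: ping the student's wallet over its existing peer-to-peer
    connection and ask for proof of the school-issued student ID credential."""
    # Placeholder for the response a wallet might return after the student taps "share".
    return {"student_id": "S1234567", "status": "enrolled", "signature_valid": True}


def verify_presentation(presentation: dict) -> bool:
    """Hypothetical: check the proof's signatures, issuer, and revocation status."""
    return presentation.get("signature_valid", False)


def authenticate(connection_id: str) -> LoginDecision:
    presentation = request_proof_via_connection(connection_id)
    if verify_presentation(presentation):
        return LoginDecision(True, presentation["student_id"])
    return LoginDecision(False)


print(authenticate("connection-42"))  # LoginDecision(authenticated=True, student_id='S1234567')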

Identity+

Self-sovereign student ID starts with a student-controlled wallet into which the school can issue VCs containing any kind of data, identity or otherwise. So typical student ID data — name, photo, ID number, etc. — can be supplemented with any other desired information: classes registered for or taken, status and authority as a student leader, entitlements, permissions, preferences, allergies, relationships to other students, contact information, family information, achievements (more on that below), and on and on.

Multi-Factor Authentication+ (MFA+)

The plus in MFA+ means the ability to exchange more factors than is feasible with current tech, while not impairing the user experience, and likely improving it.

Because shared secrets (something the student knows, like a username or password) are replaced with cryptographically secure, digitally signed VCs (something the student has, like a unique key), you can exchange much stronger credentials than passwords. Because they’re digital and easily shared, you can exchange more of them, even dozens. Biometrics, location and more can also be incorporated, either as payloads within VCs or in conjunction with them.

Because VCs can be exchanged actively or passively behind the scenes, they are useful within Zero Trust Architecture (ZTA). For higher-risk applications, multiple signatures can be required from multiple devices and/or multiple individuals. Combining MFA+, ZTA, and multi-device and multi-signature capabilities results in a formidable approach to protecting sensitive systems and facilities.
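As a rough illustration of the policy side of MFA+ within a Zero Trust approach, the sketch below (in Python) combines several independently verified factors into one access decision for a higher-risk system. The factor names and the policy itself are illustrative assumptions, not a prescribed design.

# Hypothetical MFA+ policy check: several verified factors, not a password plus one code.
from typing import Dict


def meets_high_risk_policy(verified_factors: Dict[str, bool]) -> bool:
    """Grant access only when every required factor has been independently verified."""
    required = ["student_id_vc", "device_binding_vc", "biometric_liveness"]
    return all(verified_factors.get(factor, False) for factor in required)


# A student ID VC and a device credential were verified, but no biometric check,
# so access to the sensitive system is denied.
print(meets_high_risk_policy({"student_id_vc": True, "device_binding_vc": True}))  # False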

Vendor Independence

SSI is based on standards, so tools from different vendors can be interoperable. This means that your school isn’t locked into a single vendor and can replace a vendor without reissuing IDs (try that with a traditional ID system). Students, faculty, and staff can choose from multiple wallets to store their credentials, which are portable if they later decide to switch. You might use one vendor to help issue student IDs, another to integrate with the student information system for registration and transcript data, and a third to verify student or staff status at the campus bookstore POS. You don’t have to worry that any vendor selected for digital student IDs can’t support your diverse campus needs, or that you’re stuck with lousy quality or service.

Use of the student ID is no longer constrained, digitally or physically, by the boundaries of the school’s trust domain or the presence of any particular vendor, removing what I believe is the #1 barrier to digital student ID adoption today.

Digital First, Then Physical

Self-sovereign student ID is digital first, but not digital only. With self-sovereign student ID you can have both digital and physical forms in several varieties.

A ‘smart’/chipped student ID card could hold an SSI wallet (note the irony) with some of a student’s VCs, greatly expanding the types of credentials, entitlements, tickets or other items the student can carry and benefit from, and making it much harder for a fraudster to get away with swapping out the name, photo, ID number, etc.

A student could also present a card or a paper with a QR code on it. Scanning the QR code could pull up a verified student ID, including a photo, either from a school-controlled database or from student-controlled storage. QR code-based verification could be restricted to authorized personnel, who could also be required to digitally prove their ID before gaining access.

Combining these capabilities, a school could issue a physical ID card with only three elements: a photo, a QR code, and an embedded chip — no name, no ID number, nothing else. If the chip and QR code worked as described above, even this extreme approach could be more useful than existing student ID cards while being more private for students and more difficult to hack for fraudsters.
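As a small illustration of the QR-code element, the snippet below encodes a hypothetical verification URL into an image using the third-party qrcode package. The URL, the filename, and the idea that verification happens at that URL are assumptions made for this sketch.

# Minimal sketch: print a QR code that points at a (hypothetical) verification endpoint
# instead of printing the student's name or ID number on the card.
import qrcode

verification_url = "https://id.example.edu/verify?credential=abc123"  # illustrative URL
img = qrcode.make(verification_url)
img.save("student-id-card-qr.png")  # image to be printed on the physical card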

Access+

Whether for accessing digital systems, physical facilities or events, self-sovereign student ID can begin to support digital versions of key cards, vouchers, receipts and more, all uniquely associated with the student. A VC can also be issued as a bearer instrument not associated with any individual, like a movie ticket or a coupon. Using geofencing, students could be passively authenticated when entering a secure area, of course subject to their consent to the practice.

Mutual Authentication

Because it enables a bi-directional exchange of VCs, self-sovereign student ID may be the first technology that enables students to authenticate schools as strongly as schools authenticate students, preventing impersonation and phishing.

User Experience

Who doesn’t want fewer usernames and passwords to deal with? With self-sovereign student ID, the student can digitally present their ID and other entitlements and be authenticated more strongly, safely, and quickly than with usernames and passwords. This reduces the incidence of fraud, account take-over, and password reset requests.

Because the school maintains a secure, peer-to-peer connection with the student, it can use this connection to prompt the student for ID when the student calls in or walks in⁵. When calling in, this eliminates the need for knowledge-based authentication questions (birthday, mother’s maiden name, etc.) and speeds up the call; when walking in, this eliminates the need to pull out a physical student ID (useful during a pandemic).

Whether calling in, walking in, or logging in, a student can feel recognized by the school rather than repeatedly treated as a stranger.

Privacy & Compliance

Today, when presenting a physical student ID card, the student divulges everything on it; there’s no way to present only part of it. With selective disclosure enabled by self-sovereign student ID, a student can share only the data required and nothing more, or prove something about their credentials without disclosing any of the data, or just prove that they have it. Some examples are helpful:

- Prove status as a currently enrolled student, without revealing name, ID number, or other personal info
- Prove age is over a threshold, without revealing actual age or birthday
- Prove address on file is within a certain area, city, or building, without revealing the exact location
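To make selective disclosure slightly more concrete, here is a hedged sketch of what a proof request covering the first two examples might contain, expressed as a plain Python dict. The field names are modeled loosely on common SSI proof-request formats and are illustrative assumptions, not any particular standard’s schema.

# Illustrative proof request: reveal one attribute, prove one predicate, disclose nothing else.
proof_request = {
    "name": "Campus event check-in",
    "requested_attributes": {
        # Reveal only enrollment status; name and student number stay private.
        "enrollment_status": {"restrictions": [{"issuer": "did:example:university"}]},
    },
    "requested_predicates": {
        # Prove the student is over 18 without revealing the birth date itself.
        "over_18": {"name": "birth_year", "p_type": "<=", "p_value": 2002},
    },
}
print(proof_request)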

Selective disclosure is useful for many things, including voting, and can be used online or off. It affords whatever level of privacy the student desires, and satisfies both the spirit and the letter of aggressive privacy regulations such as GDPR and CCPA, while remaining auditable for all interactions between students and the school. And by minimizing the presentation of identity to only the attributes needed, selective disclosure curtails the unchecked growth in the ‘grey market’ of personal data, worth billions of dollars and growing.

And when data is shared, it is shared directly by the student with digitally signed evidence that they did so, a veritable get-out-of-jail free card in today’s PII-sensitive privacy climate.

Consolidation & Simplification

As it scales, federated ID has a tendency to grow into a complex, tangled web of identities, identifiers, vendors, integrations, synchronizations, and registries. Self-sovereign student ID can begin a process of consolidation around a coherent identity meta system, with a reduction in vendors, a reduction in identifiers, and an overall reduction in complexity, without consolidating around a single vendor.

And that’s not a joke.

That said, I’m taking the advice of my business partner, SSI pioneer Dr. Sam Smith, and avoiding diving into how this can occur within this piece. He is the better author for it anyway, as it quickly gets into a technical discussion about self-certifying identifiers, which SSI uses, versus administrative identifiers, which federated ID uses, which is his area of study and expertise. So, Sam can write a separate piece if there is sufficient interest. For readers having a pressing need related to this topic, please get in touch.

ID Is the Low-Hanging Fruit

For most, ID may be the best place to start with VCs. There’s far less complexity than something like achievements, and no other entities need to be consulted before adopting a particular approach; a school catches what it pitches. Plus, the benefits can be broadly and immediately felt, and readily integrated into most IAM systems. And of course, ID is a prerequisite for most other use cases; even if those feel more important or urgent, you usually need to begin by verifying who you’re dealing with.

If you read no further, this should already be apparent: though ID and Access may be a fraction of what self-sovereign student ID can do, it is more than enough to justify serious consideration. The uses discussed in Part 2 are cool and exciting, but just icing on the cake.

If you’re not planning to read Part 2 and are wondering how to operationalize self-sovereign student ID, open Part 2 and skip down to “Where to Begin.”

Part 2: ID is Only the Beginning

¹ One of the earliest/best pieces about SSI, from Christopher Allen: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html

² Technically there is a fourth thing a student controls: self-certifying identifiers provide the root of trust, the crypto magic that makes it possible to traverse trust domain boundaries, but they’re “deeper under the hood” and out of scope for this piece. You can learn more about self-certifying identifiers here, here, and here.

³ It’s technically agents that connect, not wallets.

⁴ In the context of student ID, a wallet in the student’s possession is the container that makes the most sense; for large data sets such as entire transcripts or medical data, or infrequently used data, or simply for backup, a student may employ a custodial storage solution in a blockchain or traditional database, while still retaining self-sovereign control over the stored data.

⁵ The Credit Union industry is beginning to deploy MemberPass, which uses this means to streamline incoming calls into service centers.

Special thanks to several helpful reviewers, editors, and contributors: John Phillips, Dr. Phil Windley, Phil Long, Scott Perry, Dr. Samuel Smith, Alan Davies, Taylor Kendal, and Matthew Hailstone.

Friday, 14. August 2020

Mike Jones: self-issued

COSE and JOSE Registrations for Web Authentication (WebAuthn) Algorithms is now RFC 8812


The W3C Web Authentication (WebAuthn) working group and the IETF COSE working group created “CBOR Object Signing and Encryption (COSE) and JSON Object Signing and Encryption (JOSE) Registrations for Web Authentication (WebAuthn) Algorithms” to make some algorithms and elliptic curves used by WebAuthn and FIDO2 officially part of COSE and JOSE. The RSA algorithms are used by TPMs. The registered “secp256k1” curve (a.k.a. the Bitcoin curve) is also used in some decentralized identity applications. The completed specification has now been published as RFC 8812.

As described when the registrations recently occurred, the algorithms registered are:

- RS256 – RSASSA-PKCS1-v1_5 using SHA-256 – new for COSE
- RS384 – RSASSA-PKCS1-v1_5 using SHA-384 – new for COSE
- RS512 – RSASSA-PKCS1-v1_5 using SHA-512 – new for COSE
- RS1 – RSASSA-PKCS1-v1_5 using SHA-1 – new for COSE
- ES256K – ECDSA using secp256k1 curve and SHA-256 – new for COSE and JOSE

The elliptic curves registered are:

secp256k1 – SECG secp256k1 curve – new for COSE and JOSE

See them in the IANA COSE Registry and the IANA JOSE Registry.
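For readers who want to see the new registrations in context, here is a minimal, unofficial sketch of producing a JWS signed with ES256K (ECDSA over the secp256k1 curve with SHA-256) using the third-party ecdsa package. The library choice, the throwaway key, and the payload are assumptions for illustration; a production signer would also handle key management and validation far more carefully.

# Unofficial sketch: sign a JWS with the ES256K algorithm registered by RFC 8812.
import base64
import hashlib
import json

from ecdsa import SECP256k1, SigningKey


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JOSE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


header = {"alg": "ES256K", "typ": "JWT"}   # ES256K is now a registered JOSE algorithm
payload = {"sub": "example-subject"}       # illustrative claims
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())

# ES256K means ECDSA over secp256k1 with SHA-256; JWS expects the raw 64-byte r||s
# signature, which is the ecdsa package's default signature encoding.
key = SigningKey.generate(curve=SECP256k1)  # throwaway key for the example
signature = key.sign(signing_input.encode("ascii"), hashfunc=hashlib.sha256)

print(signing_input + "." + b64url(signature))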


Just a Theory

Flaked, Brewed, and Docked

Sqitch v0.9998: Now with Snowflake support, an improved Homebrew tap, and the quickest way to get started: the new Docker image (https://hub.docker.com/r/sqitch/sqitch/).

I released Sqitch v0.9998 this week. Despite the long list of changes, only one new feature stands out: support for the Snowflake Data Warehouse platform. A major work project aims to move all of our reporting data from Postgres to Snowflake. I asked the team lead if they needed Sqitch support, and they said something like, “Oh hell yes, that would save us months of work!” Fortunately I had time to make it happen.

Snowflake’s SQL interface ably supports all the functionality required for Sqitch; indeed, the implementation required fairly little customization. And while I did report a number of issues and shortcomings to the Snowflake support team, they always responded quickly and helpfully — sometimes revealing undocumented workarounds to solve my problems. I requested that they be documented.

The work turned out well. If you use Snowflake, consider managing your databases with Sqitch. Start with the tutorial to get a feel for it.

Bundle Up

Of course, you might find it a little tricky to get started. In addition to a long list of Perl dependencies, each database engine requires two external resources: a command-line client and a driver library. For Snowflake, that means the SnowSQL client and the ODBC driver. The PostgreSQL engine requires psql and DBD::Pg compiled with libpq. MySQL calls for the mysql client and DBD::mysql compiled with the MySQL connection library. And so on. You likely don’t care what needs to be built and installed; you just want it to work. Ideally, install a binary and go.

I do, too. So I spent a month or so building Sqitch bundling support, to easily install all its Perl dependencies into a single directory for distribution as a single package. It took a while because, sadly, Perl provides no straightforward method to build such a feature without also bundling unneeded libraries. I plan to write up the technical details soon; for now, just know that I made it work. If you Homebrew, you’ll reap the benefits in your next brew install sqitch.

Pour One Out

In fact, the bundling feature enabled a complete rewrite of the Sqitch Homebrew tap. Previously, Sqitch’s Homebrew formula installed the required modules in Perl’s global include path. This pattern violated Homebrew best practices, which prefer that all the dependencies for an app, aside from configuration, reside in a single directory, or “cellar.”

The new formula follows this dictum, bundling Sqitch and its CPAN dependencies into a nice, neat package. Moreover, it enables engine dependency selection at build time. Gone are the separate sqitch_$engine formulas. Just pass the requisite options when you build Sqitch:

brew install sqitch --with-postgres-support --with-sqlite-support

Include as many engines as you need (here’s the list). Find yourself with only Postgres support but now need Oracle, too? Just reinstall:

export HOMEBREW_ORACLE_HOME=$ORACLE_HOME
brew reinstall sqitch --with-postgres-support --with-oracle-support

In fact, the old sqitch_oracle formula hasn’t worked in quite some time, but the new $HOMEBREW_ORACLE_HOME environment variable does the trick (provided you disable SIP; see the instructions for details).

I recently became a Homebrew user myself, and felt it important to make Sqitch build “the right way”. I expect this formula to be more reliable and better maintained going forward.

Still, despite its utility, Homebrew Sqitch lives up to its name: It downloads and builds Sqitch from source. To attract newbies with a quick and easy method to get started, we need something even simpler.

Dock of the Bae

Which brings me to the installer that excites me most: The new Docker image. Curious about Sqitch and want to download and go? Use Docker? Try this:

curl -L https://git.io/JJKCn -o sqitch && chmod +x sqitch
./sqitch help

That’s it. On first run, the script pulls down the Docker image, which includes full support for PostgreSQL, MySQL, Firebird, and SQLite, and weighs in at just 164 MB (54 MB compressed). Thereafter, it works just as if Sqitch was locally-installed. It uses a few tricks to achieve this bit of magic:

- It mounts the current directory, so it acts on the Sqitch project you intend it to
- It mounts your home directory, so it can read the usual configuration files
- It syncs the environment variables that Sqitch cares about

The script even syncs your username, full name, and host name, in case you haven’t configured your name and email address with sqitch config. The only outwardly obvious difference is the editor¹: If you add a change and let the editor open, it launches nano rather than your preferred editor. This limitation allows the image to remain as small as possible.

I invested quite a lot of effort into the Docker image, to make it as small as possible while maximizing out-of-the-box database engine support — without foreclosing support for proprietary databases. To that end, the repository already contains Dockerfiles to support Oracle and Snowflake: simply download the required binary files, build the image, and push it to your private registry. Then set $SQITCH_IMAGE to the image name to transparently run it with the magic shell script.

Docker Docket

I plan to put more work into the Sqitch Docker repository over the next few months. Exasol and Vertica Dockerfiles come next. Beyond that, I envision a matrix of different images, one for each database engine, to minimize download and runtime size for folx who need only one engine — especially for production deployments. Adding Alpine-based images also tempts me; they’d be even smaller, though unable to support most (all?) of the commercial database engines. Still: tiny!

Container size obsession is a thing, right?

At work, we believe the future of app deployment and execution belongs to containerization, particularly on Docker and Kubernetes. I presume that conviction will grant me time to work on these improvements.

¹ Well, that and connecting to a service on your host machine is a little fussy. For example, to use Postgres on your local host, you can’t connect to Unix sockets. The shell script enables host networking, so on Linux, at least, you should be able to connect to localhost to deploy your changes. On macOS and Windows, use the host.docker.internal host name. ↩︎

More about… Sqitch Docker Homebrew Snowflake

Thursday, 13. August 2020

reb00ted

Government contractor pays to embed tracking into 100's of mobile apps


“The data gleaned from this tracking is then sold back to the US government for undisclosed purposes.”


Simon Willison

Weeknotes: Installing Datasette with Homebrew, more GraphQL, WAL in SQLite


This week I've been working on making Datasette easier to install, plus wide-ranging improvements to the Datasette GraphQL plugin.

Datasette and Homebrew

Datasette is now part of the GitHub Discussions beta - which means the GitHub repository for the project now has a Datasette discussions area. I've been wanting to set up somewhere to talk about the project free of pressure to file issues or bug reports for a while, so I'm really excited to have this as a new community space.

One of the first threads there was about Making Datasette easier to install. This inspired me to finally take a look at issue #335 from July 2018 - "Package datasette for installation using homebrew".

I used the VisiData Homebrew Tap as a starting point, along with Homebrew's Python for Formula Authors documentation. To cut a long story short, brew install datasette now works!

I wrote up some detailed notes on Packaging a Python CLI tool for Homebrew. I've also had my sqlite-utils CLI tool accepted into Homebrew, so you can now install that using brew install sqlite-utils as well.

datasette install, datasette uninstall

The updated Datasette installation instructions now feature a range of different options: Homebrew, pip, pipx and Docker.

Datasette plugins need to be installed into the same Python environment as Datasette itself. If you installed Datasette using pipx or Homebrew, figuring out which environment that is isn't particularly straightforward.

So I added two new commands to Datasette (released in Datasette 0.47): datasette install name-of-plugin and datasette uninstall name-of-plugin. These are very thin wrappers around the underlying pip, but with the crucial improvement that they guarantee they'll run it in the correct environment. I derived another TIL from these on How to call pip programatically from Python.

datasette --get "/-/versions.json"

Part of writing a Homebrew package is defining a test block that confirms that the packaged tool is working correctly.

I didn't want that test to have to start a Datasette web server just so it could execute an HTTP request and shut the server down again, so I added a new feature: datasette --get.

This is a mechanism that lets you execute a fake HTTP GET request against Datasette without starting the server, and outputs the result to the terminal.

This means that anything you can do with the Datasette JSON API is now available on the command-line as well!

I like piping the output to jq to get pretty-printed JSON:

% datasette github.db --get \
    '/github/recent_releases.json?_shape=array&_size=1' | jq
[
    {
        "rowid": 140912432,
        "repo": "https://github.com/simonw/sqlite-utils",
        "release": "https://github.com/simonw/sqlite-utils/releases/tag/2.15",
        "date": "2020-08-10"
    }
]

datasette-graphql improvements

I introduced datasette-graphql last week. I shipped five new releases since then, incorporating feedback from GraphQL advocates on Twitter.

The most significant improvement: I've redesigned the filtering mechanism to be much more in line with GraphQL conventions. The old syntax looked like this:

{
  repos(filters: ["license=apache-2.0", "stargazers_count__gt=10"]) {
    edges {
      node {
        full_name
      }
    }
  }
}

This mirrored how Datasette's table page works (e.g. repos?license=apache-2.0&stargazers_count__gt=10), but it's a pretty ugly hack.

The new syntax is much, much nicer:

{
  repos(filter: {license: {eq: "apache-2.0"}, stargazers_count: {gt: 10}}) {
    edges {
      node {
        full_name
      }
    }
  }
}

Execute this query.

The best part of this syntax is that the columns and operations are part of the GraphQL schema, which means tools like GraphiQL can provide auto-completion for them interactively as you type a query.

Another new feature: tablename_row can be used to return an individual row (actually the first matching item for its arguments). This is a convenient way to access rows by their primary key, since the primary key columns automatically become GraphQL arguments:

{
  users_row(id: 9599) {
    id
    name
    contributors_list(first: 5) {
      totalCount
      nodes {
        repo_id {
          full_name
        }
        contributions
      }
    }
  }
}

Try that query here.

There are plenty more improvements to the plugin detailed in the datasette-graphql changelog.

Write-ahead logging in SQLite

SQLite's Write-Ahead Logging feature improves concurrency by preventing writes from blocking reads. I was seeing the occasional "database is locked" error with my personal Dogsheep, so I decided to finally figure out how to turn this on for a database.

The breakthrough realization for me (thanks to a question I asked on the SQLite forum) was that WAL mode is a characteristic of the database file itself. Once you've turned it on for the file, all future connections to that file will take advantage of it.

I wrote about this in a TIL: Enabling WAL mode for SQLite database files. I also embedded what I learned in sqlite-utils 2.15, which now includes sqlite-utils enable-wal file.db and sqlite-utils disable-wal file.db commands (and accompanying Python API methods).
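For anyone who wants to try this without sqlite-utils, here is a minimal sketch using only Python's standard library; the database filename is illustrative. Because WAL mode is a property of the database file, enabling it once is enough for later connections.

# Minimal sketch: enable write-ahead logging on a SQLite file (filename is illustrative).
import sqlite3

conn = sqlite3.connect("dogsheep.db")
print(conn.execute("PRAGMA journal_mode=wal").fetchone())   # ('wal',)
conn.close()

# Later connections, even from other processes, find WAL already enabled.
conn2 = sqlite3.connect("dogsheep.db")
print(conn2.execute("PRAGMA journal_mode").fetchone())      # ('wal',)
conn2.close()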

Datasette 0.46, with a security fix

Earlier this week I also released Datasette 0.46, with the key feature being a security fix relating to canned queries and CSRF protection.

I used GitHub's security advisory mechanism for this one: CSRF tokens leaked in URL by canned query form. I've also included detailed information on the exploit (and the fix) in issue #918.

Also new in 0.46: the /-/allow-debug tool, which can be used to experiment with Datasette's allow blocks permissions mechanism.

Releases this week

- datasette-graphql 0.12 - 2020-08-13
- datasette 0.47.2 - 2020-08-12
- sqlite-utils 2.15.1 - 2020-08-12
- datasette 0.47.1 - 2020-08-12
- datasette 0.47 - 2020-08-12
- sqlite-utils 2.15 - 2020-08-10
- datasette-json-html 0.6.1 - 2020-08-09
- datasette-json-html 0.6 - 2020-08-09
- csvs-to-sqlite 1.1 - 2020-08-09
- datasette 0.46 - 2020-08-09
- asgi-csrf 0.6.1 - 2020-08-09
- datasette-graphql 0.11 - 2020-08-09
- datasette-graphql 0.10 - 2020-08-08
- datasette-graphql 0.9 - 2020-08-08
- datasette-graphql 0.8 - 2020-08-07

TIL this week

- Enabling WAL mode for SQLite database files
- Attaching a bash shell to a running Docker container
- Packaging a Python CLI tool for Homebrew
- How to call pip programatically from Python
- Customizing my zsh prompt

Quoting @gamemakerstk


We’re generally only impressed by things we can’t do - things that are beyond our own skill set. So, by definition, we aren’t going to be that impressed by the things we create. The end user, however, is perfectly able to find your work impressive.

@gamemakerstk


Phil Windley's Technometria

Authentic Digital Relationships


Summary: Self-sovereign identity, supported by a heterarchical identity metasystem, creates a firm foundation for rich digital relationships that allow people to be digitally embodied so they can act online as autonomous agents.

An earlier blog post, Relationships and Identity proposed that we build digital identity systems to create and manage relationships—not identities—and discussed the nature of digital relationships in terms of their integrity, lifespan, and utility. You should read that post before this one.

In his article Architecture Eats Culture Eats Strategy, Tim Bouma makes the point that the old management chestnut Culture Eats Strategy leaves open the question: how do we change the culture? Tim's point is that architecture (in the general sense) is the upstream predator to culture. Architecture is a powerful force that drives culture and therefore determines what strategies will succeed—or, more generally, what use cases are possible.

Following on Tim's insight, my thesis is that identity systems are the foundational layer of our digital ecosystem and therefore the architecture of digital identity systems drives online culture and ultimately what we can do and what we can't. Specifically, since identity systems are built to create and manage relationships, their architecture deeply impacts the kinds of relationships they support. And the quality of those relationships determines whether or not we live effective lives in the digital sphere.

Administrative Identity Systems Create Anemic Relationships

I was the founder and CTO of iMall, an early, pioneering ecommerce tools vendor. As early as 1996 we determined that we not only needed a shopping cart that kept track of a shopper's purchases in a single session, but one that knew who the shopper was from visit to visit so we could keep the shopping cart and pre-fill forms with shipping and billing addresses. Consequently, we built an identity system. In the spirit of the early web, it was a one-off, written in Perl and storing personal data in Berkeley DB. We did hash the passwords—we weren't idiots¹.

Early Web companies had a problem: we needed to know things about people and there was no reliable way for them to tell us who they were. So everyone built an identity system, and thus began my and your journey to collecting thousands of identifiers as the Web expanded and every single site needed its own way to know things about us.

Administrative identity systems, as these kinds of identity systems are called, create a relationship between the organization operating the identity system and the people who are their customers, citizens, partners, and so on. They are, federation notwithstanding, largely self contained and put the administrator at the center as shown in Figure 1. This is their fundamental architecture.

Figure 1: Administrative identity systems put the administrator at the center.

Administrative identity systems are owned. They are closed. They are run for the purposes of their owners, not the purposes of the people or things being administered. They provision and permission. They are bureaucracies for governing something. They rely on rules, procedures, and formal interaction patterns. Need a new password? Be sure to follow the password rules of whatever administrative system you're in. Fail to follow the company's terms of service? You could lose your account without recourse.

Administrative identity systems use a simple schema, containing just the attributes that the administrator needs to serve their purposes and reduce risk. The problem I and others were solving back in the '90s was legibility². Legibility is a term used to describe how administrative systems make things governable by simplifying, inventorying, and rationalizing things around them. Identity systems make people legible in order to offer them continuity and convenience while reducing risk for the administrator.

Administrative identity systems give rise to a systematic inequality in the relationships they manage. Administrative identity systems create bureaucratic cultures. Every interaction you have online happens under the watchful eye of a bureaucracy built to govern the system and the people using it. The bureaucracy may be benevolent, benign, or malevolent but it controls the interaction.

Designers of administrative identity systems do the imaginative work of assigning identifiers, defining the administrative schemas and processes, and setting the purpose of the identity system and the relationships it engenders. Because of the systematic imbalance of power that administrative identity systems create, administrators can afford to be lazy. To the administrator, everyone is structurally the same, being fit into the same schema. This is efficient because they can afford to ignore all the qualities that make people unique and concentrate on just their business. Meanwhile, subjects are left to perform the "interpretive labor," as David Graeber calls it, of understanding the system, what it allows or doesn't, and how it can be bent to accomplish their goals. Subjects have few tools for managing these relationships because each one is a little different, not only technically, but procedurally as well. There is no common protocol or user experience. Consequently, subjects have no way to operationalize the relationship except in whatever manner the administrator allows.

Given that the architecture of administrative identity systems gives rise to a bureaucratic culture, what kinds of strategies or capabilities does that culture engender? Quoting David Graeber from The Utopia of Rules (pg 152):

Cold, impersonal, bureaucratic relations are much like cash transactions, and both offer similar advantages and disadvantages. On the one hand they are soulless. On the other, they are simple, predictable, and—within certain parameters, at least—treat everyone more or less the same.

I argue that this is the kind of thing the internet is best at. Our online relationships with ecommerce companies, social media providers, banks, and others are cold and impersonal, but also relatively efficient. In that sense, the web has kept its promise. But the institutionalized frame of action that has come to define it alienates its subjects in two ways:

They are isolated and estranged from each other. They surrender control over their online activity and the associated data within a given domain to the administrator of that domain.

The administrative architecture and the bureaucratic culture it creates has several unavoidable, regrettable outcomes:

- Anemic relationships that limit the capabilities of the systems they support. For example, social media platforms are designed to allow people to form a link (symmetrical or asymmetrical) to others online. But it is all done within the sphere of the administrative domain of the system provider. The relationships in these systems are like two-dimensional cardboard cutouts of the real relationships they mirror. We inhabit multiple walled gardens that no more reflect real life than do the walled gardens of amusement parks.
- A surveillance economy that relies on the weak privacy provisions that administrative systems create to exploit our online behavior as the raw material for products that not only predict, but attempt to manipulate, our future behaviors.³ Many administrative relationships are set up to harvest data about our online behavior. The administrator controls the nature of these relationships, what is allowed, and what behavior is rewarded.
- Single points of failure where key parts of our lives are contained within the systems of companies that will inevitably cease to exist someday. In the words of Craig Burton: "It's about choice: freedom of choice vs. prescribed options. Leadership shifts. Policies expire. Companies fail. Systems decay. Give me the freedom of choice to minimize these hazards."

The Self-Sovereign Alternative

Self-sovereign identity (SSI) systems offer an alternative model that supports richer relationships. Rather than provisioning identifiers and accounts in an administrative system where the power imbalance assures that one party to the relationship can dictate the terms of the interaction, SSI is founded on peer relationships that are co-provisioned by the exchange of decentralized identifiers. This architecture implies that both parties will have tools that speak a common protocol.

Figure 2: Self-Sovereign Identity Stack

Figure 2 shows the self-sovereign identity stack. The bottom two layers, the Verifiable Data Repositories and the Peer-to-Peer Agents make up what we refer to as the Identity Metasystem. The features of the metasystem architecture are our primary interest. I have written extensively about the details of the architecture of the metasystem in other posts (see The Sovrin SSI Stack and Decentralized Identifiers).

The architecture of the metasystem has several important features:

- Mediated by protocol—Instead of being intermediated by an intervening administrative authority, activities in the metasystem are mediated through peer-to-peer protocol. Protocols are the foundation of interoperability and allow for scale. Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Consequently, the metasystem supports a flexible set of interactions that can be adapted for many different contexts and needs.
- Heterarchical—Interactions in the metasystem are peer-to-peer rather than hierarchical. They are not just distributed, but decentralized. Decentralization enables autonomy and flexibility and assures the metasystem's independence from the influence of any single actor. No centralized system can anticipate all the various use cases. And no single actor should be allowed to determine who uses the system or for what purposes.
- Consistent user experience—A consistent user experience doesn’t mean a single user interface. Rather the focus is on the experience. As an example, consider an automobile. My grandfather, who died in 1955, could get in a modern car and, with only a little instruction, successfully drive it. Consistent user experiences let people know what to expect so they can intuitively understand how to interact in any given situation regardless of context.
- Polymorphic—The information we need in any given relationship varies widely with context. The content that an identity metasystem carries must be flexible enough to support many different situations.

These architectural features give rise to a culture that I describe as protocological. The protocological culture of the identity metasystem has the following properties:

- Open and permissionless—The metasystem has the same three virtues of the Internet that Doc Searls and Dave Weinberger enumerated: No one owns it, everyone can use it, and anyone can improve it. Special care is taken to ensure that the metasystem is censorship resistant so that everyone has access. The protocols and code that enable the metasystem are open source and available for review and improvement.
- Agentic—The metasystem allows people to act as autonomous agents, under their self-sovereign authority. The most vital value proposition of self-sovereign identity is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy requires that participants interact as peers in the system, which the architecture of the metasystem supports.
- Inclusive—Inclusivity is more than being open and permissionless. Inclusivity requires design that ensures people are not left behind. For example, some people cannot act for themselves for legal (e.g. minors) or other (e.g. refugees) reasons. Support for digital guardianship ensures that those who cannot act for themselves can still participate.
- Flexible—The metasystem allows people to select appropriate service providers and features. No single system can anticipate all the scenarios that will be required for billions of individuals to live their own effective lives. A metasystem allows for context-specific scenarios.
- Modular—An identity metasystem can’t be a single, centralized system from a single vendor with limited pieces and parts. Rather, the metasystem will have interchangeable parts, built and operated by various parties. Protocols and standards enable this. Modularity supports substitutability, a key factor in autonomy and flexibility.
- Universal—Successful protocols eat other protocols until only one survives. An identity metasystem based on protocol will have network effects that drive interoperability leading to universality. This doesn't mean that one organization will have control; it means that one protocol will mediate all interaction and everyone in the ecosystem will conform to it.

Supporting Authentic Relationships

Self-sovereign identity envisions digital life that cannot be supported with traditional identity architectures. The architecture of self-sovereign identity and the culture that springs from it support richer, more authentic relationships:

- Self-sovereign identity provides people with the means of operationalizing their online relationships by providing them the tools for acting online as peers and managing the relationships they enter into.
- Self-sovereign identity, through protocol, allows ad hoc interactions that were not or cannot be imagined a priori.

The following subsections give examples for each of these.

Disintermediating Platforms

Many real-world experiences have been successfully digitized, but the resulting intermediation opens us to exploitation despite the conveniences. We need digitized experiences that respect human dignity and don't leave us open to being exploited for some company's advantage. As an example consider how the identity metasystem could be the foundation for a system that disintermediates the food delivery platforms. Platform companies have been very successful in intermediating these exchanges and charging exorbitant rents for what ought to be a natural interaction among peers.

That's not to say platforms provide no value. The problem isn't that they charge for services, but that their intervening position gives them too much power to make markets and set prices. Platforms provide several things that make them valuable to participants: a means of discovering relevant service providers, a system to facilitate the transaction, and a trust framework to help participants make the leap over the trust gap, as Rachel Botsman puts it. An identity metasystem supporting self-sovereign identity provides a universal trust framework for building systems that can serve as the foundation for creating markets without intermediaries. Such a system with support for a token can even facilitate the transaction without anyone having an intervening position.

Disintermediating platforms requires creating a peer-to-peer marketplace on top of the metasystem. While the metasystem provides the means of creating and managing the peer-to-peer relationship, defining this marketplace requires determining the messages to be exchanged between participants and creating the means of discovery. These messages might be simple or complex depending on the market and could be exchanged using DIDComm, or even ride on top of a verifiable credential exchange. There might be businesses that provide discovery, but they don't intermediate, they sit to the side of the interaction providing a service. For example, such a business might provide a service that allows a restaurant to define its menu, create a shopping cart, and provide for discovery, but the merchant could replace it with a similar service, providing competition, because the trust interaction and transaction are happening via a protocol built on a universal metasystem.

Building markets without intermediaries greatly reduces the cost of participating in the market and frees participants to innovate. Because these results are achieved through protocol, we do not need to create new regulations that stifle innovation and lock in incumbents by making it difficult for new entrants to comply. And these systems preserve human dignity and autonomy by removing administrative authorities.

Digitizing Auto Accidents

As an example of the kinds of interactions that people have every day that are difficult to bring into the administrative sphere, consider the interactions that occur between various participants and their representatives following an auto accident. Because these interactions are ad hoc, large parts of our lives have yet to enjoy the convenience of being digitized. In You've Had an Automobile Accident, I imagine a digital identity system that enables the kinds of ad hoc, messy, and unpredictable interactions that happen all the time in the physical world.

In this scenario, two drivers, Alice and Bob, have had an accident. Fortunately, no one was hurt, but the highway patrol has come to the scene to make an accident report. Both Alice and Bob have a number of credentials that will be necessary to create the report:

- Proof of insurance issued by their respective insurance companies
- Vehicle title from the state
- Vehicle registration issued by the Department of Motor Vehicles (DMV) in different states (potentially)
- Driver's license, potentially from a different agency than the one that registers cars and potentially in different states

In addition, the police officer has credentials from the Highway Patrol, Alice and Bob will make and sign statements, and the police officer will create an accident report. What's more, the owners of the vehicles may not be the drivers.

Now imagine you're building a startup to solve the "car accident use case." You imagine a platform to relate to all these participants and intermediate the exchange of all this information. To have value, it has to do more than provide a way to exchange PDFs, and most, if not all, of the participants have to be on it. The system has to make the information usable. How do you get all the various insurance companies and state agencies, to say nothing of the many body shops, hospitals, fire departments, and ambulance companies, on board? And yet, these kinds of ad hoc interactions confront us daily.

Taking our Rightful Place in the Digital Sphere

Devon Loffreto said something recently that made me think:

You do not have an accurate operational relationship with your Government.

My thought was "not just government". The key word is "operational". People don't have operational relationships anywhere online.⁴ We have plenty of online relationships, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. Scenarios like the ones envisioned in the preceding section happen all the time in the physical world—in fact they're the norm. When we dine at a restaurant or shop at a store in the physical world, we do not do so under some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. An identity metasystem provides people with the tools they need to be "embodied" in the digital world and act autonomously.

Time and again, various people have tried to create decentralized marketplaces or social networks only to fail to gain traction. These systems fail because they are not based on a firm foundation that allows people to act in relationships with sovereign authority in systems mediated through protocol rather than companies. We have a fine example of a protocol mediated system in the internet, but we've failed to take up the daunting task of building the same kind of system for identity. Consequently, when we act, we do so without firm footing or sufficient leverage.

Ironically, the internet broke down the walled gardens of CompuServe and Prodigy with a protocol-mediated metasystem, but surveillance capitalism has rebuilt them on the web. No one could live an effective life in an amusement park. Similarly, we cannot function as fully embodied agents in the digital sphere within the administrative systems of surveillance capitalists, despite their attractions. The emergence of self-sovereign identity, agreements on protocols, and the creation of a metasystem to operationalize them promises a digital world where decentralized interactions create life-like online experiences. The identity metasystem and the richer relationships that result from it promise an online future that gives people the opportunity to act for themselves as autonomous human beings and supports their dignity so that they can live an effective online life.

End Notes

1. Two of my friends at the time, Eric Thompson and Stacey Son, were playing with FPGAs that could crack hashed passwords, so we were aware of the problems and did our best to mitigate them.
2. See Venkatesh Rao's nice summary of James C. Scott's seminal book on legibility and its unintended consequences, Seeing Like a State, for more on this idea.
3. See The Age of Surveillance Capitalism by Shoshana Zuboff for a detailed (705-page) exploration of this idea.
4. The one exception I can think of to this is email. People act through email all the time in ways that aren't intermediated by their email provider. Again, it's a result of the architecture of email, set up over four decades ago, and the culture that architecture supports.

Photo Credit: Two Yellow Bees from Pikrepo (public)

Tags: ssi identity relationships administrative+identity sovereign+source

Wednesday, 12. August 2020

DustyCloud Brainstorms

Terminal Phase in Linux Magazine (Polish edition)


Hey look at that! My terminal-space-shooter-game Terminal Phase made an appearance in the Polish version of Linux Magazine. I had no idea, but Michal Majchrzak both tipped me off to it and took the pictures. (Thank you!)

I don't know Polish but I can see some references to Konami and SHMUP (shoot-em-up game). The screenshot they have isn't the one I published, so I guess the author got it running too... I hope they had fun!

Apparently it appeared in the June 2020 edition:

I guess because print media coverage is scarcer these days, it feels cooler to get covered in it in some way?

I wonder if I can find a copy somewhere!

Tuesday, 11. August 2020

Mike Jones: self-issued

Registries for Web Authentication (WebAuthn) is now RFC 8809


The W3C Web Authentication (WebAuthn) working group created the IETF specification “Registries for Web Authentication (WebAuthn)” to establish registries needed for WebAuthn extension points. These IANA registries were populated in June 2020. Now the specification creating them has been published as RFC 8809.

Thanks again to Kathleen Moriarty and Benjamin Kaduk for their Area Director sponsorships of the specification and to Jeff Hodges and Giridhar Mandyam for their work on it.

Monday, 10. August 2020

FACILELOGIN

OpenID Connect Authentication Flows

The OpenID Connect core specification defines three authentication flows: the authorization code flow, the implicit flow, and the hybrid flow. The… Continue reading on FACILELOGIN »

The OpenID Connect core specification defines three authentication flows: the authorization code flow, the implicit flow, and the hybrid flow. The…

Continue reading on FACILELOGIN »
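
As a rough illustration of how these flows differ in practice, here is a minimal sketch, not taken from the FACILELOGIN article, of building the authorization request that kicks off the authorization code flow. The endpoint and client values are placeholders, and the commented-out response_type values show how the implicit and hybrid flows would be selected instead.

```python
# Minimal sketch of step one of the OpenID Connect authorization code flow:
# send the user to the provider's authorization endpoint. All URLs and
# client values below are placeholders, not real endpoints.
from urllib.parse import urlencode

params = {
    "response_type": "code",        # authorization code flow
    # "id_token token"              -> implicit flow
    # "code id_token"               -> one of the hybrid flow variants
    "client_id": "example-client-id",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile",
    "state": "af0ifjsldkj",         # CSRF protection, echoed back by the provider
    "nonce": "n-0S6_WzA2Mj",        # binds the ID token to this request
}

authorization_url = "https://op.example.com/authorize?" + urlencode(params)
print(authorization_url)
```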


Phil Windley's Technometria

Cogito, Ergo Sum

Summary: Sovereign is the right word for describing the essential distinction between our inalienable self and the administrative identifiers and attributes others assign to us online. Descartes didn't say "I have a birth certificate, therefore I am." We do not spring into existence because some administrative system provisions an identifier for us. No single administrative regime, or e

Summary: Sovereign is the right word for describing the essential distinction between our inalienable self and the administrative identifiers and attributes others assign to us online.

Descartes didn't say "I have a birth certificate, therefore I am." We do not spring into existence because some administrative system provisions an identifier for us. No single administrative regime, or even a collection of them, defines us. Doc Searls said this to me recently:

We are, within ourselves, what William Ernest Henley calls “the captain” of “my unconquerable soul”, and what Walt Whitman meant when he said “I know this orbit of mine cannot be swept by a carpenter's compass,” and “I know that I am august. I do not trouble my spirit to vindicate itself or be understood.” Each of us has an inner essence that is who we are.

Even in the digital realm, and limiting ourselves to what Joe Andrieu calls "functional identity", we are more than any single relationship. Our identity is something we are, not something we have. And it's certainly not what someone else provides to us. We are self-sovereign.

Some shrink from the self-sovereign label. There are some good reasons for their reluctance. Self-sovereignty requires some explanation. And it has political overtones that make some uncomfortable. But I've decided to embrace it. Self-sovereign identity is more than decentralized identity. Self-sovereign identity implies autonomy and inalienability.

If our identity is inalienable, then it's not transferable to another and not capable of being taken away or denied. To be inalienable is to be sovereign: to exercise supreme authority over one’s personal sphere—Whitman’s “orbit of mine.” Administrative identifiers, what others choose to call us, are alienable. Relationships are alienable. Most attributes are alienable1. Who we are, and our right to choose how we present ourselves to the world, is not alienable. The distinction between the inalienable and the alienable, the self-sovereign and the administrative, is essential. Without this distinction, we are constantly at the mercy of the various administrative regimes we interact with.

Self-sovereignty is concerned with relationships and boundaries. When we say a nation is sovereign, we mean that it can act as a peer to other sovereign states, not that it can do whatever it wants. Sovereignty defines the source of our authority to act. Sovereignty defines a boundary, within which the sovereign has complete control and outside of which the sovereign relates to others within established rules and norms. Self-sovereign identity defines the boundary in the digital space, gives tools to people and organizations so they can assert control—their autonomy, and defines the rules for how relationships are formed, authenticated, and used.

In the opening chapter of her groundbreaking book, The Age of Surveillance Capitalism, Shoshana Zuboff asks the question "Can the digital future be our home?" Not if it's based on administrative identity systems and the anemic, ofttimes dangerous, relationships they create. By starting with self-sovereignty, we found our digital relationships on principles that support and preserve human freedom, privacy, and dignity. So, while talking about trust, decentralization, credentials, wallets, and DIDs might help explain how self-sovereign identity works, sovereignty explains why we do it. If self-sovereignty requires explanation, maybe that's a feature, not a bug.

End Notes

I'm distinguishing attributes from traits without going too deep into that idea for now.

Photo Credit: Cogito, Ergo Sum from Latin Quotes (Unknown License)

Tags: identity ssi self-sovereign

Sunday, 09. August 2020

reb00ted

Nice research close to my heart

They call it local-first software: research on how to get the benefits but not the downsides of cloud-based software.



Doom or denial about the climate future?

Interesting synthesis.



Simon Willison

Datasette 0.46

Datasette 0.46 I just released Datasette 0.46 with a security fix for an issue involving CSRF tokens on canned query pages, plus a new debugging tool, improved file downloads and a bunch of other smaller improvements. Via @simonw

Datasette 0.46

I just released Datasette 0.46 with a security fix for an issue involving CSRF tokens on canned query pages, plus a new debugging tool, improved file downloads and a bunch of other smaller improvements.

Via @simonw

Saturday, 08. August 2020

Simon Willison

Quoting Wade Davis

COVID-19 attacks our physical bodies, but also the cultural foundations of our lives, the toolbox of community and connectivity that is for the human what claws and teeth represent to the tiger. — Wade Davis

COVID-19 attacks our physical bodies, but also the cultural foundations of our lives, the toolbox of community and connectivity that is for the human what claws and teeth represent to the tiger.

Wade Davis


Mike Jones: self-issued

OpenID Connect Logout specs addressing all known issues

I’ve been systematically working through all the open issues filed about the OpenID Connect Logout specs in preparation for advancing them to Final Specification status. I’m pleased to report that I’ve released drafts that address all these issues. The new drafts are: OpenID Connect RP-Initiated Logout 1.0 – draft 01 OpenID Connect Session Management 1.0 […]

I’ve been systematically working through all the open issues filed about the OpenID Connect Logout specs in preparation for advancing them to Final Specification status. I’m pleased to report that I’ve released drafts that address all these issues. The new drafts are:

OpenID Connect RP-Initiated Logout 1.0 – draft 01
OpenID Connect Session Management 1.0 – draft 30
OpenID Connect Front-Channel Logout 1.0 – draft 04
OpenID Connect Back-Channel Logout 1.0 – draft 06

The OpenID Connect working group waited to make these Final Specifications until we received feedback resulting from certification of logout deployments. Indeed, this feedback identified a few ambiguities and deficiencies in the specifications, which have been addressed in the latest edits. You can see the certified logout implementations at https://openid.net/certification/. We encourage you to likewise certify your implementations now.

Please see the latest History entries in the specifications for descriptions of the normative changes made. The history entries list the issue numbers addressed. The issues can be viewed in the OpenID Connect issue tracker, including links to the commits containing the changes that resolved them.

All are encouraged to review these drafts in advance of the formal OpenID Foundation review period for them, which should commence in a few weeks. If you believe that changes are needed before they become Final Specifications, please file issues describing the proposed changes. Discussion on the OpenID Connect mailing list is also encouraged.

Special thanks to Roland Hedberg for writing the initial logout certification tests. And thanks to Filip Skokan for providing resolutions to two of the thornier Session Management issues.

Friday, 07. August 2020

Simon Willison

Pysa: An open source static analysis tool to detect and prevent security issues in Python code

Pysa: An open source static analysis tool to detect and prevent security issues in Python code Interesting new static analysis tool for auditing Python for security vulnerabilities - things like SQL injection and os.execute() calls. Built by Facebook and tested extensively on Instagram, a multi-million line Django application. Via Hacker News

Pysa: An open source static analysis tool to detect and prevent security issues in Python code

Interesting new static analysis tool for auditing Python for security vulnerabilities - things like SQL injection and os.execute() calls. Built by Facebook and tested extensively on Instagram, a multi-million line Django application.

Via Hacker News
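
To give a feel for what a taint-based analyzer like this looks for, here is a hypothetical example, not taken from Pysa's documentation, of user-controlled input flowing unsanitized into a SQL query and a shell command; the request_args object and paths are made up.

```python
# Hypothetical code illustrating the source-to-sink flows a taint-based
# static analyzer is designed to flag. Do not use patterns like these.
import os
import sqlite3

def search(request_args):
    term = request_args.get("q", "")  # taint source: user-controlled input

    # SQL injection sink: user input interpolated directly into the query.
    conn = sqlite3.connect("app.db")
    rows = conn.execute(
        f"select title from docs where title like '%{term}%'"
    ).fetchall()

    # Command injection sink: user input passed to the shell.
    os.system(f"grep -r {term} /var/data")

    return rows
```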


Transparent Health Blog

State API Implementation Playbook Published

This playbook helps states implement the new CMS Interoperability and Patient Access Rule.  https://bit.ly/state-api-playbook    

This playbook helps states implement the new CMS Interoperability and Patient Access Rule. 

https://bit.ly/state-api-playbook  

 



Simon Willison

Design Docs at Google

Design Docs at Google Useful description of the format used for software design docs at Google - informal documents of between 3 and 20 pages that outline the proposed design of a new project, discuss trade-offs that were considered and solicit feedback before the code starts to be written.

Design Docs at Google

Useful description of the format used for software design docs at Google - informal documents of between 3 and 20 pages that outline the proposed design of a new project, discuss trade-offs that were considered and solicit feedback before the code starts to be written.


GraphQL in Datasette with the new datasette-graphql plugin

This week I've mostly been building datasette-graphql, a plugin that adds GraphQL query support to Datasette. I've been mulling this over for a couple of years now. I wasn't at all sure if it would be a good idea, but it's hard to overstate how liberating Datasette's plugin system has proven to be: plugins provide a mechanism for exploring big new ideas without any risk of taking the core proje

This week I've mostly been building datasette-graphql, a plugin that adds GraphQL query support to Datasette.

I've been mulling this over for a couple of years now. I wasn't at all sure if it would be a good idea, but it's hard to overstate how liberating Datasette's plugin system has proven to be: plugins provide a mechanism for exploring big new ideas without any risk of taking the core project in a direction that I later regret.

Now that I've built it, I think I like it.

A GraphQL refresher

GraphQL is a query language for APIs, first promoted by Facebook in 2015.

(Surprisingly it has nothing to do with the Facebook Graph API, which predates it by several years and is more similar to traditional REST. A third of respondents to my recent poll were understandably confused by this.)

GraphQL is best illustrated by an example. The following query (a real example that works with datasette-graphql) does a whole bunch of work:

Retrieves the first 10 repos that match a search for "datasette", sorted by most stargazers first
Shows the total count of search results, along with how to retrieve the next page
For each repo, retrieves an explicit list of columns
owner is a foreign key to the users table - this query retrieves the name and html_url for the user that owns each repo
A repo has issues (via an incoming foreign key relationship). The query retrieves the first three issues, a total count of all issues and for each of those three gets the title and created_at.

That's a lot of stuff! Here's the query:

{
  repos(first: 10, search: "datasette", sort_desc: stargazers_count) {
    totalCount
    pageInfo {
      endCursor
      hasNextPage
    }
    nodes {
      full_name
      description
      stargazers_count
      created_at
      owner {
        name
        html_url
      }
      issues_list(first: 3) {
        totalCount
        nodes {
          title
          created_at
        }
      }
    }
  }
}

You can run this query against the live demo. I'm seeing it return results in 511ms. Considering how much it's getting done that's pretty good!
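
If you'd rather script a query like this than use the browser, here's a minimal sketch using only the standard library. It assumes the /graphql endpoint accepts a standard GraphQL-over-HTTP JSON POST, and the base URL is a placeholder to swap for a real Datasette instance running the plugin.

```python
# Run a GraphQL query against a datasette-graphql endpoint from Python.
# The URL below is a placeholder; point it at a real /graphql endpoint.
import json
import urllib.request

query = """
{
  repos(first: 3, search: "datasette", sort_desc: stargazers_count) {
    totalCount
    nodes { full_name stargazers_count }
  }
}
"""

request = urllib.request.Request(
    "https://your-datasette.example.com/graphql",
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    data = json.loads(response.read())

print(data["data"]["repos"]["totalCount"])
for repo in data["data"]["repos"]["nodes"]:
    print(repo["full_name"], repo["stargazers_count"])
```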

datasette-graphql

The datasette-graphql plugin adds a /graphql page to any Datasette instance. It exposes a GraphQL field for every table and view. Those fields can be used to select, filter, search and paginate through rows in the corresponding table.

The plugin detects foreign key relationships - both incoming and outgoing - and turns those into further nested fields on the rows.

It does this by using table introspection (powered by sqlite-utils) to dynamically define a schema using the Graphene Python GraphQL library.

Most of the work happens in the schema_for_datasette() function in datasette_graphql/utils.py. The code is a little fiddly because Graphene usually expects you to define your GraphQL schema using classes (similar to Django's ORM), but in this case the schema needs to be generated dynamically based on introspecting the tables and columns.
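
For anyone curious what defining a Graphene schema dynamically looks like, here is a heavily simplified sketch of the general technique. It is not the plugin's actual code: the table and column names are hypothetical, every column is mapped to a String, and the resolvers return empty lists where the real plugin would run SQL against SQLite.

```python
# Sketch of building a Graphene schema at runtime from table metadata,
# rather than with hand-written classes. Not datasette-graphql's real code.
import graphene

def make_table_type(table_name, columns):
    # Map every column to a String field for simplicity; a real implementation
    # would pick field types based on each column's SQLite type.
    fields = {column: graphene.String() for column in columns}
    return type(f"{table_name}_row", (graphene.ObjectType,), fields)

def make_schema(tables):
    query_fields = {}
    for table_name, columns in tables.items():
        row_type = make_table_type(table_name, columns)
        query_fields[table_name] = graphene.List(row_type)

        def make_resolver(name):
            def resolver(root, info):
                # A real implementation would execute SQL here and return rows.
                return []
            return resolver

        query_fields[f"resolve_{table_name}"] = make_resolver(table_name)

    Query = type("Query", (graphene.ObjectType,), query_fields)
    return graphene.Schema(query=Query)

# Hypothetical table metadata, as introspection might report it.
schema = make_schema({"repos": ["full_name", "description", "stargazers_count"]})
print(schema)  # prints the generated GraphQL SDL
```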

It has a solid set of unit tests, including some test examples written in Markdown which double as further documentation (see test_graphql_examples()).

GraphiQL for interactively exploring APIs

GraphiQL is the best thing about GraphQL. It's a JavaScript interface for trying out GraphQL queries which pulls in a copy of the API schema and uses it to implement really comprehensive autocomplete.

datasette-graphql includes GraphiQL (inspired by Starlette's implementation). Here's an animated gif showing quite how useful it is for exploring an API:

A couple of tips: On macOS option+space brings up the full completion list for the current context, and command+enter executes the current query (equivalent to clicking the play button).

Performance notes

The most convenient thing about GraphQL from a client-side development point of view is also the most nerve-wracking from the server-side: a single GraphQL query can end up executing a LOT of SQL.

The example above executes at least 32 separate SQL queries:

1 select against repos (plus 1 count query)
10 against issues (plus 10 counts)
10 against users (for the owner field)

There are some optimization tricks I'm not using yet (in particular the DataLoader pattern) but it's still cause for concern.
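
The DataLoader pattern mentioned above is essentially batching: instead of issuing one lookup per row for a field like owner, collect the keys seen while resolving a level of the query and fetch them in a single SQL statement. Here is a rough, hypothetical sketch of that idea, not the graphene DataLoader API and not anything the plugin currently does.

```python
# Hypothetical batching helper: gather user ids during resolution, then
# fetch them all with one IN query instead of one query per row.
import sqlite3

class UserBatchLoader:
    def __init__(self, db_path):
        self.db_path = db_path
        self.pending_ids = set()

    def want(self, user_id):
        # Called from each repo's owner resolver to register the key.
        self.pending_ids.add(user_id)

    def load_all(self):
        # One query with an IN clause replaces N separate lookups.
        # Assumes a hypothetical users(id, name) table.
        if not self.pending_ids:
            return {}
        placeholders = ",".join("?" for _ in self.pending_ids)
        sql = f"select id, name from users where id in ({placeholders})"
        with sqlite3.connect(self.db_path) as connection:
            rows = connection.execute(sql, tuple(self.pending_ids)).fetchall()
        return {user_id: name for user_id, name in rows}
```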

Interestingly, SQLite may be the best possible database backend for GraphQL due to the characteristics explained in the essay Many Small Queries Are Efficient In SQLite.

Since SQLite is an in-process database, it doesn't have to deal with network overhead for each SQL query that it executes. A SQL query is essentially a C function call. So the flurry of queries that's characteristic for GraphQL really plays to SQLite's unique strengths.

Datasette has always featured arbitrary SQL execution as a core feature, which it protects using query time limits. I have an open issue to further extend the concept of Datasette's time limits to the overall execution of a GraphQL query.

More demos

Enabling a GraphQL instance for a Datasette is as simple as pip install datasette-graphql, so I've deployed the new plugin in a few other places:

covid-19.datasettes.com/graphql for exploring Covid-19 data
register-of-members-interests.datasettes.com/graphql for exploring UK Register of Members Interests MP data
til.simonwillison.net/graphql for exploring my TILs

Future improvements

I have a bunch of open issues for the plugin describing what I want to do with it next. The most notable planned improvement is adding support for Datasette's canned queries.

Andy Ingram shared the following interesting note on Twitter:

The GraphQL creators are (I think) unanimous in their skepticism of tools that bring GraphQL directly to your database or ORM, because they just provide carte blanche access to your entire data model, without actually giving API design proper consideration.

My plugin does exactly that. Datasette is a tool for publishing raw data, so exposing everything is very much in line with the philosophy of the project. But it's still smart to put some design thought into your APIs.

Canned queries are pre-baked SQL queries, optionally with parameters that can be populated by the user.

These could map directly to GraphQL fields. Users could even use plugin configuration to turn off the automatic table fields and just expose their canned queries.

In this way, canned queries can allow users to explicitly design the fields they expose via GraphQL. I expect this to become an extremely productive way of prototyping new GraphQL APIs, even if the final API is built on a backend other than Datasette.
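
For reference, here is roughly how a canned query is defined in Datasette's metadata.json today; the database name, SQL, and :search parameter are made up. How datasette-graphql would surface it as a GraphQL field is exactly the planned improvement described above, so treat that mapping as speculative.

```python
# Write a metadata.json containing one hypothetical canned query with a
# :search parameter. The database name and SQL are illustrative only.
import json

metadata = {
    "databases": {
        "repos_db": {
            "queries": {
                "popular_repos": {
                    "title": "Most-starred repos matching a search term",
                    "sql": (
                        "select full_name, stargazers_count from repos "
                        "where description like '%' || :search || '%' "
                        "order by stargazers_count desc limit 10"
                    ),
                }
            }
        }
    }
}

with open("metadata.json", "w") as fp:
    json.dump(metadata, fp, indent=2)
```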

Also this week

A couple of years ago I wrote a piece about Exploring the UK Register of Members Interests with SQL and Datasette. I finally got around to automating this using GitHub Actions, so register-of-members-interests.datasettes.com now updates with the latest data every 24 hours.

I renamed datasette-publish-now to datasette-publish-vercel, reflecting Vercel's name change from Zeit Now. Here's how I did that.

datasette-insert, which provides a JSON API for inserting data, defaulted to working unauthenticated. MongoDB and Elasticsearch have taught us that insecure-by-default inevitably leads to insecure deployments. I fixed that: the plugin now requires authentication, and if you don't want to set that up and know what you are doing you can install the deliberately named datasette-insert-unsafe plugin to allow unauthenticated access.

Releases this week

datasette-graphql 0.7 - 2020-08-06
datasette-graphql 0.6 - 2020-08-06
sqlite-utils 2.14.1 - 2020-08-06
datasette-graphql 0.5 - 2020-08-06
datasette-graphql 0.4 - 2020-08-06
datasette-graphql 0.3 - 2020-08-06
datasette-graphql 0.2 - 2020-08-06
datasette-graphql 0.1a - 2020-08-02
sqlite-utils 2.14 - 2020-08-01
datasette-insert-unsafe 0.1 - 2020-07-31
datasette-insert 0.6 - 2020-07-31
datasette-publish-vercel 0.7 - 2020-07-31

TIL this week

How to deploy a folder with a Dockerfile to Cloud Run

Wednesday, 05. August 2020

Virtual Democracy

Steal like a(n Open) Scientist

Science is give and take “After giving talks about open science I’ve sometimes been approached by skeptics who say, ‘Why would I help out my competitors by sharing ideas and data on these new websites? Isn’t that just inviting other people to steal my data, or to scoop me? Only someone naive could think this will … Continue reading Steal like a(n Open) Scientist

Simon Willison

Zero Downtime Release: Disruption-free Load Balancing of a Multi-Billion User Website

Zero Downtime Release: Disruption-free Load Balancing of a Multi-Billion User Website I remain fascinated by techniques for zero downtime deployment - once you have it working it makes shipping changes to your software so much less stressful, which means you can iterate faster and generally be much more confident in shipping code. Facebook have invested vast amounts of effort into getting this

Zero Downtime Release: Disruption-free Load Balancing of a Multi-Billion User Website

I remain fascinated by techniques for zero downtime deployment - once you have it working it makes shipping changes to your software so much less stressful, which means you can iterate faster and generally be much more confident in shipping code. Facebook have invested vast amounts of effort into getting this right, and their new paper for the ACM SIGCOMM conference goes into detail about how it all works.

Via Cindy Sridharan


Virtual Democracy

Time for the academy to retire the giants

The academy can’t afford a culture centered on creating giants in their fields “If I have seen further it is by standing on the sholders [sic] of Giants.” Isaac Newton. 1676. Letter to Robert Hooke (before they became bitter enemies). This notion was a commonplace in the 17th Century, with the implications that even a dwarf … Continue reading Time for the academy to retire the giants

Tuesday, 04. August 2020

Doc Searls Weblog

Time for advertising to call off the dogs

Digital advertising needs to sniff its own stench, instead of everybody’s digital butts. A sample of that stench is wafting through the interwebs from  the Partnership for Responsible Addressable Media, an ad industry bullphemism for yet another way to excuse the urge to keep tracking people against their wishes (and simple good manners) all over the […]

Is this the way you want your brand to look?

Digital advertising needs to sniff its own stench, instead of everybody’s digital butts.

A sample of that stench is wafting through the interwebs from  the Partnership for Responsible Addressable Media, an ad industry bullphemism for yet another way to excuse the urge to keep tracking people against their wishes (and simple good manners) all over the digital world.

This new thing is a granfalloon conjured by the Association of National Advertisers (aka the ANA) and announced today in the faux-news style of the press release (which it no doubt also is) at the first link above. It begins,

AD INDUSTRY LAUNCHES “PARTNERSHIP FOR RESPONSIBLE ADDRESSABLE MEDIA” TO ENSURE FUTURE OF DIGITAL MEDIA FOR BUSINESSES & CONSUMERS
Governing Group of Industry Leaders Includes 4A’s, ANA, IAB, IAB Tech Lab, NAI, WFA, P&G, Unilever, Ford, GM, IBM, NBCUniversal, IPG, Publicis, Adobe, LiveRamp, MediaMath, The Trade Desk

NEW YORK (August 4, 2020) — Leading trade associations and companies representing every sector of the global advertising industry today joined together to launch the Partnership for Responsible Addressable Media, an initiative to advance and protect critical functionalities like customization and analytics for digital media and advertising, while safeguarding privacy and improving the consumer experience. The governing group of the Partnership will include the most influential organizations in advertising.

I learned about this from @WendyDavis, who wrote this piece in MediaPost. NiemanLab summarizes what she reports with a tweet that reads, “A new ad-industry group will lobby Google and Apple to let them track users just a wee bit more, please and thank you.”

Writes Wendy,

The group will soon reach out to browser developers and platforms, in hopes of convincing them to rethink recent decisions that will limit tracking, according to Venable attorney Stu Ingis, who will head the legal and policy working group.

“These companies are taking huge positions that impact the entire economy — the entire media ecosystem — with no real input from the media ecosystem,” Ingis says.

As if the “entire media ecosystem” doesn’t contain the billions of humans being tracked.

Well, here’s a fact: ad blocking, which was already the biggest boycott in world history five years ago, didn’t happen in a vacuum. Even though ad blockers had been available since 2004, use of them didn’t hockey-stick until 2012-13, exactly when adtech and its dependents in publishing gave the middle finger to Do Not Track, which was nothing more than a polite request, expressed by a browser, for some damn privacy while we go about our lives online. See this in Harvard Business Review:

Here’s another fact: the browser makers actually care about their users, some of whom are paying customers (for example with Apple and Microsoft). They know what we want and need, and are giving it to us. Demand and supply at work.

The GDPR and the CCPA also didn’t happen in a vacuum. Both laws were made to protect citizens from exactly what adtech (tracking based advertising) does. And, naturally, the ad biz has been working mightily to obey the letter of those laws while violating their spirit. Why else would we be urged by cookie notices everywhere to “accept” exactly what we’ve made very clear that we don’t want?

So here are some helpful questions from the world’s billions to the brands now paying to have us followed like marked animals:

Have you noticed that not a single brand known to the world has been created by tracking people and aiming ads at them—even after spending a $trillion or two on doing that?

Have you noticed that nearly all the world’s major brands became known through advertising that not only didn’t track people, but sponsored journalism as well?

Have you noticed that tracking people and directing personalized messages at them—through “addressable media”—is in fact direct marketing, which we used to call junk mail?

Didn’t think so.

Time to get the clues, ad biz. Brands too.

Start with The Cluetrain Manifesto, which says, if you only have time for one clue this year, this is the one to get…

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

That year was 1999.

If advertising and marketing had bothered to listen back then, they might not be dealing today with the GDPR, the CCPA, and the earned dislike of billions.

Next, please learn (or re-learn) the difference between real advertising and the junk message business. Find that lesson in Separating Advertising’s Wheat and Chaff. An excerpt:

See, adtech did not spring from the loins of Madison Avenue. Instead its direct ancestor is what’s called direct response marketing. Before that, it was called direct mail, or junk mail. In metrics, methods and manners, it is little different from its closest relative, spam.

Direct response marketing has always wanted to get personal, has always been data-driven, has never attracted the creative talent for which Madison Avenue has been rightly famous. Look up best ads of all time and you’ll find nothing but wheat. No direct response or adtech postings, mailings or ad placements on phones or websites.

Yes, brand advertising has always been data-driven too, but the data that mattered was how many people were exposed to an ad, not how many clicked on one — or whether you, personally, did anything.

And yes, a lot of brand advertising is annoying. But at least we know it pays for the TV programs we watch and the publications we read. Wheat-producing advertisers are called “sponsors” for a reason.

So how did direct response marketing get to be called advertising? By looking the same. Online it’s hard to tell the difference between a wheat ad and a chaff one.

Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.

That’s what had happened to the ANA in 2018, when it acquired what had been the Direct Marketing Association (aka DMA) and which by then called itself the Data & Marketing Association.

The Partnership for Responsible Addressable Media speaks in the voice of advertising’s alien replica. It does not “safeguard essential values in advertising as a positive economic force.” Instead it wants to keep using “addressable” advertising as the primary instrument of surveillance capitalism.

Maybe it’s too late to save advertising from its alien self. But perhaps not, if what’s left of advertising’s soul takes the writings of Bob Hoffman (@AdContrarian) to heart. That’s the only way I know for advertising to clean up its act.

 

 

Monday, 03. August 2020

Jon Udell

Robert Plomin on heritability

This post is just a record of the key insights I took away from Sam Harris’ enlightening talk with Robert Plomin. I sure wish it were easier to capture and embed these kinds of audio segments, and to do so more effectively — ideally with transcription. It was way too much work to construct the … Continue reading "Robert Plomin on heritability"

This post is just a record of the key insights I took away from Sam Harris’ enlightening talk with Robert Plomin.

I sure wish it were easier to capture and embed these kinds of audio segments, and to do so more effectively — ideally with transcription. It was way too much work to construct the URLs that deliver these segments into the embedded players I’ve included here, even with the tool I made for that purpose. It’s nice to have a standard embedded player, finally, but since it doesn’t show the endpoints of the segments you can’t tell they’re each just a few minutes. It looks like they all run to the end.

I wound up going to more trouble than it’s probably worth to convey the most memorable parts of that excellent podcast, and I don’t have time to transcribe more fully, but for what it’s worth, here are the segments that I’ll be remembering.

1. “The most important thing we’ve learned from the DNA revolution in the last 10 years is that genetic influences on complex traits are due to thousands of tiny DNA differences.”

2. “These polygenic scores are perfectly normally distributed.”

3. “I’m saying there are no disorders, there are just quantitative dimensions.”


Sunday, 02. August 2020

Just a Theory

We Need to Talk About Ventilation

Zeynep Tufekci on aerosolized Covid-19 transmission and the need for ventilation.

Zeynep Tufekci, in a piece for The Atlantic:

Jimenez also wondered why the National Guard hadn’t been deployed to set up tent schools (not sealed, but letting air in like an outdoor wedding canopy) around the country, and why the U.S. hadn’t set up the mass production of HEPA filters for every classroom and essential indoor space. Instead, one air-quality expert reported, teachers who wanted to buy portable HEPA filters were being told that they weren’t allowed to, because the CDC wasn’t recommending them. It is still difficult to get Clorox wipes in my supermarket, but I went online to check, and there is no shortage of portable HEPA filters. There is no run on them.

It’s the profoundly irresponsible plan to reopen schools without any remotely sufficient attempt to upgrade and modernize the air circulation systems of our dilapidated public school buildings that disturbs me. Meanwhile, school reopening proposals pay undue attention to hygiene theater to assuage fears, while very real risks go largely unaddressed.1 It simply won’t work, and that means disastrous outcomes for communities.

And it’s not like there aren’t ways to get things under better control. Tufekci continues:

However, Japan masked up early, focused on super-spreader events (a strategy it calls “cluster busting”), and, crucially, trained its public to focus on avoiding the three C’s—closed spaces, crowded places, and close conversations. In other words, exactly the places where airborne transmission and aerosols could pose a risk. The Japanese were advised not to talk on the subway, where windows were kept open. Oshitani said they also developed guidelines that included the importance of ventilation in many different settings, such as bars, restaurants, and gyms. Six months later, despite having some of the earliest outbreaks, ultradense cities, and one of the oldest populations in the world, Japan has had about 1,000 COVID-19 deaths total—which is how many the United States often has in a single day. Hong Kong, a similarly dense and subway-dependent city, has had only 24 deaths.

The U.S. needs to get its shit together. We have the wealth and knowledge to do this right, but we need to put empathy for each other ahead of temporary political and economic impacts to do so.

In fairness, the Health and Safety section of the New York City DOE’s Return to School 2020-2021 plan says that the “DOE will make improvements to HVAC systems, as well as air conditioning repairs, to improve air circulation, as well as replacing regular air filters with higher efficiency types.” Still, there’s a social failing here: national leaders ought to fund the upgrading of air circulation systems to the highest standards in every school and classroom in the United States. ↩︎

More about… Health Zeynep Tufekci Coronavirus Covid-19

FACILELOGIN

Microservices Security Landscape

In the 1st chapter of the book, Microservices Security in Action, which I authored with Nuwan Dias , we discuss the microservices security… Continue reading on FACILELOGIN »

In the 1st chapter of the book, Microservices Security in Action, which I authored with Nuwan Dias , we discuss the microservices security…

Continue reading on FACILELOGIN »

Saturday, 01. August 2020

Phil Windley's Technometria

Relationships and Identity

Summary: We build digital identity systems to create and manage relationships—not identities. We need our digital relationships to have integrity and to be useful over a specified lifetime. Identity systems should provide relationship integrity and utility to participants for the appropriate length of time. Participants should be able to create relationships with whatever party will provide u

Summary: We build digital identity systems to create and manage relationships—not identities. We need our digital relationships to have integrity and to be useful over a specified lifetime. Identity systems should provide relationship integrity and utility to participants for the appropriate length of time. Participants should be able to create relationships with whatever party will provide utility. SSI provides improved support for creating, managing, and using digital relationships.

The most problematic word in the term Self-Sovereign Identity (SSI) isn't "sovereign" but "identity" because whenever you start discussing identity, the conversation is rife with unspoken assumptions. Identity usually conjures thoughts of authentication and various federation schemes that purport to make authentication easier. I frequently point out that even though SSI has "identity" in its name, there's no artifact in SSI called an "identity." Instead the user experience in an SSI system is based on forming relationships and using credentials.

I've been thinking a lot lately about the role of relationships in digital identity systems and have come to the conclusion that we've been building systems that support digital relationships without acknowledging the importance of relationships or using them in our designs and architectures. The reason we build identity systems isn't to manage identities, it's to support digital relationships.

I recently read and then reread a 2011 paper called Identities Evolve: Why Federated Identity is Easier Said than Done from Steve Wilson. Despite being almost ten years old, the paper is still relevant, full of good analysis and interesting ideas. Among those is the idea that the goal of using federation schemes to create a few identities that serve all purposes is deeply flawed. Steve's point is that we have so many identities because we have lots of relationships. The identity data for a given relationship is contextual and highly evolved to fit its specific niche.

Steve's discussion reminded me of a talk Sam Ramji used to give about speciation of APIs. Sam illustrated his talk with a picture from Encyclopedia Britannica to show adaptive radiation in Galapagos finches in response to evolutionary pressure. These 14 different finches all share a common ancestor, but ended up with quite different features because of specialization for a particular niche.

Adaptive radiation in Galapagos finches (click to enlarge)

In the same way, each of us has hundreds, even thousands, of online relationships. They each have a common root but are highly contextualized. Some are long-lived, some are ephemeral. Some are personal, some are commercial. Some are important, some are trivial. Still, we have them. The information about ourselves, what many refer to as identity data, that we share with each one is just as adapted to the specific niche that the relationship represents as the Galapagos finches are to their niches. Once you realize this, the idea of creating a few online identities to serve all needs becomes preposterous.

Not only is each relationship evolved for a particular niche, but it is also constantly changing. Often those changes are just more of what's already there. For example, my Netflix account represents a relationship between me and Netflix. It's constantly being updated with my viewing data but not changing dramatically in structure. But some changes are larger. For example, it also allows me to create additional profiles which makes the relationship specialized for the specific members of my household. And when Netflix moved from DVDs only to streaming, the nature of my relationship with Netflix changed significantly.

I'm convinced that identity systems would be better architected if we were more intentional about their need to support specialized relationships spanning millions of potential relationship types. This article is aimed at better understanding the nature of digital relationships.

Relationships

One of the fundamental problems of internet identity is proximity. Because we're not interacting with people physically, our natural means of knowing who we're dealing with are useless. Joe Andrieu defines identity as "how we recognize, remember, and respond" to another entity. These three activities correspond to three properties digital relationships must have to overcome the proximity problem:

Integrity—we want to know that, from interaction to interaction, we're dealing with the same entity we were before. In other words, we want to identify them.
Lifespan—normally we want relationships to be long-lived, although we also create ephemeral relationships for short-lived interactions.
Utility—we create online relationships in order to use them within a specific context.

We'll discuss each of these in detail below, followed by a discussion of risk in digital relationships.

Relationship Integrity

Without integrity, we cannot recognize the other party to the relationship. Consequently, all identity systems manage relationship integrity as a foundational capability. Federated identity systems improve on one-off, often custom identity systems by providing integrity in a way that reduces user management overhead for the organization, increases convenience for the user, and increases security by eliminating the need to create one-off, proprietary solutions. SSI aims to establish integrity with the convenience of the federated model but without relying on an intervening IdP in order to provide autonomy and privacy.

A relationship has two parties, let's call them P1 and P2.1 P1 is connecting with P2 and, as a result, P1 and P2 will have a relationship. P1 and P2 could be people, organizations, or things represented by a website, app, or service. Recognizing the other party in an online relationship relies on being able to know that you're dealing with the same entity each time you encounter them.

In a typical administrative identity system, when P1 initiates a relationship with P2, P2 typically uses usernames and passwords to ensure the integrity of the relationship. By asking for a username to identify P1 and a password to ensure that it's the same P1 as before, P2 has some assurance that they are interacting with P1. In this model, P1 and P2 are not peers. Rather P2 controls the system and determines how and for what it is used.

In a federated flow, P1 is usually called the subject or consumer and P2 is called the relying party (RP). When the consumer visits the RP's site or opens their app, they are offered the opportunity to establish a relationship through an identity provider (IdP) whom the RP trusts. The consumer may or may not have a relationship with an IdP the RP trusts. RPs pick well-known IdPs with large numbers of users to reduce friction in signing up. The consumer chooses which IdP they want to use from the relying party's menu and is redirected to the IdP's identity service where they present a username and password to the IdP, are authenticated, and redirected back to the RP. As part of this flow, the RP gets some kind of token from the IdP that signifies that the IdP will vouch for this person. They may also get attributes that the IdP has stored for the consumer.2

In the federated model, the IdP is identifying the person and attesting the integrity of the relationship to the RP. The IdP is a third party, P3, who acts as an intervening administrative authority. Without their service, the RP may not have an alternative means of assuring themselves that the relationship has integrity over time. On the other hand, the person gets no assurance from the identity system about relationship integrity in this model. For that they must rely on TLS, which is visible in a web interaction, but largely hidden inside an app on a mobile device. P1 and P2 are not peers in the federated model. Instead, P1 is subject to the administrative control of both the IdP and the RP. Further, the RP is subject to the administrative control of the IdP.

In SSI, a relationship is initiated when P1 and P2 exchange decentralized identifiers (DIDs). For example, when a person visits a web site or app, they are presented with a connection invitation. When they accept the invitation, they use a software agent to share a DID that they created. In turn, they receive a DID from the web site, app, or service. We call this a "connection" since DIDs are cryptographically based and thus provide a means of both parties mutually authenticating. The user experience does not necessarily surface all this activity to the user. To get a feel for the user experience, run through the demo at Connect.me3.

In contrast to the federated model, the participants in SSI mutually authenticate and the relationship has integrity without the intervention of a third party. By exchanging DIDs both parties have also exchanged public keys. They can consequently use cryptographic means to ensure they are interacting with the party who controls the DID they received when the relationship was initiated. Mutual authentication, based on self-initiated DIDs provides SSI relationships with inherent integrity. P1 and P2 are peers in SSI since they both have equal control over the relationship.

In addition to removing the need for intermediaries to vouch for the integrity of the relationship, the peer nature of relationships in SSI also means that neither party has access to the authentication credentials of the other. Mutual authentication means that each party manages their own keys and never shares the private key with another party. Consequently, attacks, like the recent attack on Twitter accounts, can't happen in SSI because there's no administrator who has access to the credentials of everyone using the system.
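
Here is a minimal sketch of the mutual-authentication idea behind DID exchange, with DID resolution simulated by an in-memory dictionary: each party proves control of the key published with its DID by signing a challenge from the other party. Real DID exchange involves DID documents, agents, and DIDComm, none of which are modeled here.

```python
# Toy illustration of DID-based mutual authentication: each party signs the
# other's challenge with the private key behind its DID. DID resolution is
# faked with a dictionary; real systems resolve DIDs to DID documents.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Each party generates a keypair and "publishes" the public key under a DID.
p1_key = ed25519.Ed25519PrivateKey.generate()
p2_key = ed25519.Ed25519PrivateKey.generate()
did_registry = {
    "did:example:p1": p1_key.public_key(),
    "did:example:p2": p2_key.public_key(),
}

def prove_control(private_key, challenge: bytes) -> bytes:
    return private_key.sign(challenge)

def verify_control(did: str, challenge: bytes, signature: bytes) -> bool:
    try:
        did_registry[did].verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# P2 challenges P1, and vice versa; each verifies the other's response.
challenge_from_p2 = b"nonce-chosen-by-p2"
assert verify_control("did:example:p1", challenge_from_p2,
                      prove_control(p1_key, challenge_from_p2))

challenge_from_p1 = b"nonce-chosen-by-p1"
assert verify_control("did:example:p2", challenge_from_p1,
                      prove_control(p2_key, challenge_from_p1))
```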

Relationship Lifespan

Whether in the physical world or digital, relationships have lifespans. Some relationships are long-lived, some are short-term, and others are ephemeral, existing only for the duration of a single interaction. We typically don't think of it this way, but every interaction we have in the physical world, no matter for what purpose or how short, sets up a relationship. So too in the digital world, although our tools, especially as people online, have been sorely lacking.

I believe one of the biggest failings of modern digital identity systems is our failure to recognize that people often want, even need, short-lived relationships. Think about your day for a minute. How many people and organizations did you interact with in the physical world4 where you established a permanent relationship? What if whenever you stopped in the convenience store for a cup of coffee, you had to create a permanent relationship with the coffee machine, the cashier, the point of sale terminal, and the customers in line ahead and behind you? Sounds ridiculous. But that's what most digital interactions require. At every turn we're asked to establish permanent accounts to transact and interact online.

There are several reasons for this. The biggest one is that every Web site, app, or service wants to send you ads, at best, or track you on other sites, at worst. Unneeded, long-lived relationships are the foundation of the surveillance economy that has come to define the modern online experience.

There are a number of services I want a long-lived relationship with. I value my relationships with Amazon and Netflix, for example. But there are many things I just need to remember for the length of the interaction. I ordered some bolts for a car top carrier from a specialty place a few weeks ago. I don't order bolts all the time, so I just want to place my order and be done. I want an ephemeral relationship. The Internet of Things will increase the need for ephemeral relationships. When I open a door with my digital credential, I don't want the hassle of creating a long-term relationship with it; I just want to open the door and then forget about it.

Digital relationships should be easy to set up and tear down. They should allow for the relationship to grow over time, if both parties desire. While they exist, they should be easily manageable while providing all the tools for the needed utility. Unless there's long-term benefit to me, I shouldn't need to create a long-term relationship. And when a digital relationship ends, it should really be over.

Relationship Utility

Obviously, we don't create digital relationships just so we can authenticate the other party. Integrity is a necessary, but insufficient, condition for an identity system. This is where most identity models fall short. We can understand why this is so given the evolution of the modern Web. For the most part, user-centric identity really took off when the web gave people reasons to visit places they didn't have a physical relationship with, like an ecommerce web site. Once the identity system had established the integrity of the relationship, at least from the web site's perspective, we expected HTTP would provide the rest.

Most Identity and Access Management systems don't provide much beyond integrity except possibly access control. Once Facebook has established who you are, it knows just what resources to let you see or change. But as more and more of our lives are intermediated by digital services, we need more utility from the identity system than simple access control. The most that an IdP can provide in the federated model is integrity and, perhaps, a handful of almost-always-self-asserted attributes in a static, uncustomizable schema. But rich relationships require much more than that.

Relationships are established to provide utility. An ecommerce site wants to sell you things. A social media site wants to show you ads. Thus, their identity systems, built around the IAM system, are designed to do far more than just establish the integrity of the relationship. They want to store data about you and your activities. For the most part this is welcome. I love that Amazon shows me my past orders, Netflix remembers where I was in a series, and Twitter keeps track of my followers and past tweets.

The true identity system is much, much larger and specialized than the IAM portion. All of the account or profile data these companies use is properly thought of as part of the identity system that they build and run. Returning to Joe Andrieu:

Identity systems acquire, correlate, apply, reason over, and govern [the] information assets of subjects, identifiers, attributes, raw data, and context.

Regardless of whether or not they outsource the integrity of their relationships (and notice that none of the companies I list above do), companies still have to keep track of the relationships they have with customers or users in order to provide the service they promise. They can't outsource this to a third party because the data in their identity system has evolved to suit the needs of the specific relationship. We'll never have a single identity that serves all relationships because they are unique contexts that demand their own identity data. Rip out the identity system from a Netflix or Amazon and it won't be the same company anymore.

This leads us to a simple, but important conclusion: You can't outsource a relationship. Online apps and services decorate the relationship with information they observe, and use that information to provide utility to the relationships they administer. Doing this, and doing it well, is the foundation of the modern web.

Consequently, the bad news is that SSI is not going to reduce the need for companies to build, manage, and use identity systems. Their identity systems are what make them what they are—there is no "one size fits all" model. The good news is that SSI makes online relationships richer. Relationships are easier and less expensive to manage, not just for companies, but for people too. Here's some of the ways SSI will enhance the utility of digital relationships:

Richer, more trustworthy data—Relationships change over time because the parties change. We want to reliably ascertain facts about the other party in a relationship, not just from direct observation, but also from third parties in a trustworthy manner to build confidence in the actions of the other party within a specific context. Verifiable credentials, self-issued or from others, allow relationships to be enhanced incrementally as the relationship matures or changes.
Autonomy and choice through peer relationships—Peer relationships give each party more autonomy than traditional administrative identity systems have provided. And, through standardization and substitutability, SSI gives participants choice in what vendors and agents they employ to manage their relationships. The current state of digital identity is asymmetric, providing companies with tools that are difficult or unwieldy for people to use. People like Doc Searls and organizations like Project VRM and the Me2B Alliance argue that people need tools for managing online relationships too. SSI provides people with tools and autonomy to manage their online relationships with companies and each other.
Better, more secure communications—DID exchange provides a private, secure messaging channel using the DIDComm protocol. This messaging channel is mutually recognized, authenticated, and encrypted.
Unifying, interoperable protocol for data transmission—DIDs, Verifiable Credentials, and DIDComm provide a standardized, secure means of interaction. Just like URLs, HTML, and HTTP provided for an interoperable web that led to an explosion of uses, the common protocol of SSI will ensure everyone benefits from the network effects.
Consistent user experience—Similarly, SSI provides a consistent user experience, regardless of what digital wallet people use. SSI's user experience is centered on relationships and credentials, not arcane addresses and keys. The user experience mirrors the experience people have of managing credentials in the physical world.
Support for ad hoc digital interactions—The real world is messy and unpredictable. SSI is flexible enough to support the various ad hoc scenarios that the world presents us and supports sharing multiple credentials from various authorities in the ways the scenario demands.

These benefits are delivered using an architecture that provides a common, interoperable layer, called the identity metasystem, upon which anyone can build the identity systems they need. A ubiquitous identity layer for the internet must be a metasystem that provides the building blocks and protocols necessary for others to build identity systems that meet the needs of any specific context or domain. An identity metasystem is a prerequisite for an online world where identity is as natural as it is in the physical world. An identity metasystem can remove friction, decrease cognitive overload, and make online interactions more private and secure.

Relationships, Risk, and Trust

Trust is a popular term in the SSI community. People like Steve Wilson and Kaliya Young rightly ask about risk whenever someone in the identity community talks about trust. Because of the proximity problem, digital relationships are potentially risky. One of the goals of an identity system is to provide evidence that can be used in the risk calculation.

In their excellent paper, Risk and Trust, Nickel and Vaesen define trust as the "disposition to willingly rely on another person or entity to perform actions that benefit or protect oneself or one’s interests in a given domain." From this definition, we see why crypto-proponents often say "To trust is good, but to not trust is better." The point being that not having to rely on some other human, or human-mediated process is more likely to result in a beneficial outcome because it reduces the risk of non-performance.

Relationships imply a shared domain, context, and set of activities. We often rely on third parties to tell us things relevant to the relationship. Our vulnerability, and therefore our risk, depends on the degree of reliance we have on another party's performance. Relationships can never be "no trust" because of the very reasons we create relationships. Bitcoin, and similar systems, can be low or no trust precisely because the point of the system is to reduce the reliance on any relationship at all. The good news is that the architecture of the SSI stack significantly limits the ways we must rely on external parties for the exchange of information via verifiable credentials and thus reduces the vulnerability of parties inside and outside of the relationship. The SSI identity metasystem clearly delineates the parts of the system that are low trust and those where human processes are still necessary.

The exchange of verifiable credentials can be split into two distinct parts as shown in the following diagram. SSI reduces risk in remote relationships using the properties of these two layers to combine cryptographic processes with human processes.

SSI Stack (click to enlarge)

The bottom layer, labeled Identity Metasystem, comprises two components: a set of verifiable data repositories for storing metadata about credentials and a processing layer supported by protocols and practices for the transport and validation of credentials. Timothy Ruff uses the analogy of shipping containers to describe the identity metasystem. Verifiable credentials are analogous to shipping containers and the metasystem is analogous to the shipping infrastructure that makes intermodal shipping so efficient and secure. The Identity Metasystem provides a standardized infrastructure that similarly increases the efficiency and security of data interchange via credentials.

The top layer, labeled Identity Systems, is where people and organizations determine what credentials to issue, determine what credentials to seek and hold, and determine which credentials to accept. This layer comprises the individual credential exchange ecosystems that spring up and the governance processes for managing those credential ecosystems. In Timothy's analogy to shipping containers, this layer is about the data—the cargo—that the credential is carrying.

The Identity Metasystem allows verifiable credentials to be cryptographically checked to ensure four key properties that relate to the risk profile:

Who issued the credential?
Was the credential issued to the party presenting it?
Has the credential been tampered with?
Has the credential been revoked?

These checks show the fidelity of the credential and are done without the verifier needing a relationship with the issuer. And because they're automatically performed by the Identity Metasystem, they significantly reduce the risk related to using data transferred using verifiable credentials. This is the portion of credential exchange that could be properly labeled "low or no trust" since the metasystem is built on standards that ensure the cryptographic verifiability of fidelity without reliance on humans and human-mediated processes.
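
To make the four fidelity checks concrete, here is a purely illustrative sketch. It is not any SSI library's API: the credential structure, DID resolver, signature verifier, and revocation check are hypothetical stand-ins supplied by the caller.

```python
# Illustrative-only outline of the four fidelity checks a verifier's software
# performs on a presented credential. Every structure and callable here is a
# hypothetical stand-in, not a real SSI library interface.
def check_fidelity(presentation, resolve_did, verify_signature, is_revoked):
    credential = presentation["credential"]

    # 1. Who issued the credential? Resolve the issuer's DID and verify the
    #    issuer's signature over the claims with the resolved public key.
    issuer_key = resolve_did(credential["issuer"])
    if not verify_signature(issuer_key, credential["claims"], credential["proof"]):
        return False

    # 2. Was it issued to the party presenting it? The holder proves control
    #    of the subject identifier by signing the verifier's challenge.
    holder_key = resolve_did(credential["subject"])
    if not verify_signature(holder_key, presentation["challenge"], presentation["proof"]):
        return False

    # 3. Has it been tampered with? Covered by the issuer signature check in
    #    step 1: any change to the claims invalidates the proof.

    # 4. Has it been revoked? Consult the revocation registry the credential
    #    points to.
    if is_revoked(credential["id"]):
        return False

    return True
```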

The upper, Identity Systems, layer is different. Here we are very much relying on the credential issuer. Some of the questions we might ask include:

Is the credential issuer, as shown by the identifier in the credential, the organization we think they are?
Is the organization properly accredited in whatever domain they're operating in?
What data did the issuer include in the credential?
What is the source of that data?
How has that data been processed and protected?

These questions are not automatically verifiable in the way we can verify the fidelity of a credential. They are different for each domain and perhaps different for each type of relationship based on the vulnerability of the parties to the data in the credential and their appetite for risk. Their answers depend on the provenance of the data in the credential. We would expect to see credential verifiers perform provenance checks by answering these and other questions during the process they use to establish trust in the issuer. Once the verifier has established this trust, the effort needed to evaluate the provenance of the data in a credential should cease or be greatly reduced.

As parties in a relationship share data with each other, the credential verifier will spend effort evaluating the provenance of issuers of credentials they have not previously evaluated. Once that is done, the metasystem will provide fidelity checks on each transaction. For the most part, SSI does not impose new ways of making these risk evaluations. Rather, most domains already have processes for establishing the provenance of data. For example, we know how to determine if a bank is a bank, the accreditation status of a university, and the legitimacy of a business. Companies with heavy reliance on properties of their supply chain, like pharmaceuticals, already have processes for establishing the provenance of the supply chain. For the most part, verifiable credential exchange will faithfully present digital representations of the kinds of physical world processes, credentials, and data we already know how to evaluate. Consequently, SSI promises to significantly reduce the cost of reducing risk in remote relationships without requiring wholesale changes to existing business practices.
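As a sketch of that evaluate-once, reuse-many-times pattern, the snippet below caches the outcome of a human provenance review so that subsequent presentations only need a fast lookup. The registry shape and all names are hypothetical, not taken from any particular SSI product.

    # Provenance decisions about an issuer are made once (by people and
    # governance processes), then cached so later checks stay automatic.
    ACCREDITED_ISSUERS = {}   # issuer identifier -> review result

    def record_review(issuer_did: str, accreditation_body: str, approved: bool) -> None:
        """Record the outcome of a (human) provenance review of an issuer."""
        ACCREDITED_ISSUERS[issuer_did] = {"body": accreditation_body, "approved": approved}

    def issuer_trusted(issuer_did: str) -> bool:
        """Fast path used on every credential presentation after the slow review."""
        entry = ACCREDITED_ISSUERS.get(issuer_did)
        return bool(entry and entry["approved"])

    record_review("did:example:state-university", "Regional Accreditation Board", True)
    print(issuer_trusted("did:example:state-university"))  # True
    print(issuer_trusted("did:example:unknown-school"))    # False: needs a provenance review first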

Conclusion

Relationships have always been the reason we created digital identity systems. Our historic focus on relationship integrity and IAM made modern Web 2.0 platforms and services possible, but has limited use cases, reduced interoperability, and left people open to privacy and security breaches. By focusing on peer relationships supported by an Identity Metasystem, SSI not only improves relationship integrity but also better supports flexible relationship lifecycles and more functional, trustworthy relationship utility, and provides tools for participants to correctly gauge, respond to, and reduce the risks inherent in remote relationships.

Notes

1. For simplicity, I'm going to limit this discussion to two-party relationships, not groups.
2. I've described federation initialization very generally here and left out a number of details that distinguish various federation architectures.
3. Connect.me is just one of a handful of digital wallets that support the same SSI user experience. Other vendors include Trinsic, ID Ramp, and eSatus AG.
4. Actually, imagine a day before Covid-19 lockdown.

Photo Credit: Two People Holding Hands from Albert Rafael (Free to use)

Tags: sovrin ssi decentralized+identifiers identity relationships


reb00ted

How every 100x100km region in the world has warmed already

With predictions for the future. Almost two years old, but a great resource. Would be worth updating.


Doc Searls Weblog

Vermont Public Radio rating wins, and the future of streaming & podcasting

Public Radio: What is the best NPR station in the country? That’s a question on Quora I thought needed answering. So I did, with this: Here’s a quantitative answer to your qualitative question: WVPS of Vermont Public Radio. Because, in Nielsen’s Audio Ratings, it scores a 12.6 in its home market of Burlington, and a 16.2 in its neighbor market […]

Radio is moving from these to servers of streams and podcasts.

Public Radio: What is the best NPR station in the country? That’s a question on Quora I thought needed answering. So I did, with this:

Here’s a quantitative answer to your qualitative question: WVPS of Vermont Public Radio. Because, in Nielsen’s Audio Ratings, it scores a 12.6 in its home market of Burlington, and a 16.2 in its neighbor market of Montpelier-Waterbury. Far as I know, those are tops among all the country’s NPR-affiliated stations.

Honorable mentions go to WUOM in Ann Arbor with a 13.0, KCLU in Santa Barbara with a 10.2—plus others you’ll find if you follow the links in Where Public Radio Rocks, which I published in April of last year. All the numbers I sourced have changed since then, but they’re easy to find at the links I provided.

In the long run, however, “best” will come to mean which stations, producers and distributors are best at streaming and podcasting. Because that’s where listening is headed. Vermont Public Radio makes that clear on their own website, which appends “#stream/0” to its URL when you go there—and does its best, on the site, to encourage listening over-the-net rather than just over-the-air.

At this point in history, nearly all radio stations already stream, for a good reason: in the digital world, where every one of us with a smartphone and a data plan has the best radio ever made, antique broadcast virtues such as “range” and “coverage” have become bugs. This is why, when my family drove around Spain in a rental car last summer, we listened to KCLU from our home town of Santa Barbara, piped from one of our phones through the car’s entertainment system (which is no longer called a “radio”). It’s also why, when I’m up early on the West Coast, I often listen to WBUR from Boston or WNYC from New York, my other home towns. (I get around—or at least I did before the plague.)

The streaming numbers in Nielsen’s ratings are still low, but they are growing, and in many markets exceed the numbers for nearly all the remaining AM stations. For example, in the latest ratings for Washington, DC, 36 stations are listed: 33 FM, 2 streams and 2 AM. Those are drawn from a roster of 52 FM and 35 AM stations with listenable signals in Washington (according to radio-locator.com)—and 6 of those FM signals are translators for AM stations, including the two AMs that show in the ratings (which means that even the ratings for AM stations were likely for those stations’ FM signals).

Also, while streaming is the big trend for stations, podcasting is the big trend for programming, aka “content.” Podcasting is exploding now, and earning ever-larger slices of the listening pie, which is a finite sum of people’s time. Podcasting wins at this because it has far more optionality than live over-the-air radio. You can listen when you like, slide forward and backward through a show, jump past ads or skip over topics you’d rather miss, and listen at 1.5x or 2x the normal speed. Those are huge advantages.

It’s also not for nothing that SiriusXM just paid $325 million for Stitcher (says Variety), and not long before that Spotify paid $100 million for Joe Rogan’s podcast and (according to Business Insider) nearly $200 million for The Ringer and “nearly $400 million in recent purchases of Gimlet Media, Anchor, and Parcast.”

For that kind of money you could buy every AM and FM station in New York or Los Angeles.

Noncommercial players are looking pretty good in the podcasting world as well. According to Podtrac, NPR is the #1 podcast publisher and PRX is #5. Also showing well are WNYC Studios, This American Life/Serial and American Public Media. NPR also has 9 of the top 20 podcasts. In fact the majority (11) of those top 20 are from public radio sources.

Off the top of my head, the public stations with head starts in podcast production are WBEZ in Chicago, WBUR in Boston, WNYC in New York, KQED in San Francisco, KPCC and KCRW in Los Angeles and others you’ll hear credited when they open or close a show.

But it’s early. Expect lots of change in the coming months and years as many podcast creators, producers and distributors jockey for positions in two races. One is the free public one, syndicated by RSS on the open Internet and ready to hear on any browser, app or device. The other is the private subscription one, available only through the owner’s services. This is clearly where SiriusXM and Spotify are both going. SiriusXM is audible only by subscription, while Spotify remains $free (for now) but exclusive. (For example, Michelle Obama’s new podcast is available only on Spotify.) This split, between free/open and paid/closed, will be a big story over the coming years.

So, in the meantime, hats off to Vermont Public Radio for being the top public radio operation in the country—at least in its markets’ ratings. And stay tuned for the fights among players in streaming and podcasting.

I expect VPR will continue being the alpha broadcasting, streaming and podcasting service in its home state, both because it does a great job and because Vermont is very much a collection of communities that have come to depend on it.

And, if you want to know why I think journalism of the fully non-fake kind has a last (or first) refuge in the most local forms, dig The story isn’t the whole story, my TEDx talk about that.


FACILELOGIN

IAM4Developers Community

Recently we started a meetup group targeting IAM developers in 7 locations globally: Mountain View, Toronto, London, Sydney, Singapore , Bangalore and Colombo. In these meetups we discuss anything on Identity & Access Management and Security, where application developers are passi
https://i7s6h3v8.rocketcdn.me/wp-content/uploads/2019/10/nuturing-a-developer-community-sparkpost-1000px-2.png

Recently we started a meetup group targeting IAM developers in 7 locations globally: Mountain View, Toronto, London, Sydney, Singapore, Bangalore and Colombo. In these meetups we discuss anything on Identity & Access Management and Security that application developers are passionate about! If you find it interesting, please join!

The following lists the recordings from the first seven meetups.

* JSON Web Token Internals and Applications
* Controlling Access to Your Microservices with Istio Service Mesh
* OAuth 2.0 Internals and Applications
* Securing Single-page Applications with OpenID Connect
* Single Logout Dilemma
* Securing service-to-service interactions over HTTP, gRPC and Kafka
* Securing gRPC Microservices with Istio Service Mesh

IAM4Developers Community was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


Challenges of Securing Microservices

In the 1st chapter of the book, Microservices Security in Action, which I authored with Nuwan Dias, we list out a set of key challenges…

Continue reading on FACILELOGIN »

Friday, 31. July 2020

Eddie Kago

Digital Credentials for a Safer World

This should have been a tweet. But the format was not fitting. The promise of a personal decentralized identity is that the trust component involved in your use of credentials in the real world can be replicated online. In the real world, when you present your driving license to a law enforcement officer during a periodical check, the officer checks its validity and legitimacy by looking at

This should have been a tweet. But the format was not fitting.

The promise of a personal decentralized identity is that the trust component involved in your use of credentials in the real world can be replicated online. In the real world, when you present your driving license to a law enforcement officer during a periodic check, the officer checks its validity and legitimacy by looking at specific features on your driving license. Since they have a trained eye, the error rate in verifying your document is lower than a layman's, and the accuracy is even better when the officer is aided by tools such as UV light to check for holograms and other security features. Once they are satisfied with your document, you can proceed with your business or transaction at hand.

It may seem straightforward that the scenario described is replicable digitally. If the interaction between you and an academic officer or a law enforcement officer were digital, you would send a scanned copy of your driving license or school-leaving certificate over email or through an online portal. Once it is submitted, they verify the security features on the documents and either involve a senior officer or send them to a verifying body to ascertain authenticity. This takes time, and before you get confirmation that your document was approved and that you can proceed with your intended activity, days are gone, not to mention the opportunity cost involved.

While that has largely been the case in our regular interactions involving our identifying documents and issued credentials, the bottlenecks can be minimized to make processes smoother. We should note that the document, the primary actor in the entire interaction/exchange, is not truly digital. It is a scan, an image or a photograph of your certificate or license. You and I (the humans) can read and make out the information contained there. Computers cannot. We are in 2020. Computers should. If your documents were machine readable, computer systems that handle them would be able to process the information and, heck, even verify it by themselves. Some documents today are machine readable and make information systems very efficient. But there should be a way to make this the new normal.

Enter Verifiable Credentials (VCs): an open standard that forward-thinking technologists and organizations, after facing the frustrations described and seeing the potential value locked up in non-machine-verifiable documents, came up with under the World Wide Web Consortium (W3C). Verifiable Credentials support machine-to-machine communication, and their proofs (security features) can be verified cryptographically using public key infrastructure (PKI). Issuing and verifying authorities have their work made easier, as they can independently check a common registry for proofs regarding the individual and the credential presented. There is no need for a third party to countercheck where the mathematical proofs are well implemented and the registry, once written to, is tamper-proof. Such a database could be a blockchain or a sufficiently consensus-driven distributed ledger.

Figure 1: Exchange of Verifiable Credentials between Issuer, Holder, and Verifier.
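For readers who want to see the cryptographic idea in miniature, here is a toy Python sketch using an Ed25519 signature via the third-party cryptography package. It is not how any production VC stack actually packages proofs, and the identifiers and payload are invented, but it shows why a verifier can detect tampering and confirm the issuer against a public key registry without contacting the issuer.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side: sign the credential payload.
    issuer_key = Ed25519PrivateKey.generate()
    credential = b'{"licence_no": "D1234567", "class": "B", "expires": "2024-06-30"}'
    signature = issuer_key.sign(credential)

    # A shared registry maps issuer identifiers to their public keys
    # (in practice a blockchain or other tamper-evident ledger).
    registry = {"did:example:dmv": issuer_key.public_key()}

    # Verifier side: check the proof against the registry, never calling the issuer.
    def verify(issuer_id: str, payload: bytes, sig: bytes) -> bool:
        public_key = registry.get(issuer_id)
        if public_key is None:
            return False
        try:
            public_key.verify(sig, payload)
            return True
        except InvalidSignature:
            return False

    print(verify("did:example:dmv", credential, signature))                          # True
    print(verify("did:example:dmv", credential.replace(b'"B"', b'"A"'), signature))  # False: tampered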

Why should you care? Well, in this COVID-19 era, we are all trying to avoid unnecessary human contact. Because of Fear, Uncertainty and Doubt, even documents are being quarantined if physical copies are exchanged, and more services now insist on digital forms of documents. That's where Verifiable Credentials come in. It should be possible, once the infrastructure is available, for a certificate proving that you have been vaccinated against coronavirus, or that you graduated from university, to be shared remotely as a Verifiable Credential. One noteworthy, forward-thinking initiative that is working on making this a reality is the COVID-19 Credentials Initiative (CCI). Vibranium ID is a part of it. The work of the initiative is not restricted to the pandemic as a reactive response but is a basis for how we can unlock the promise of Verifiable Credentials.

Fun fact: After the CCI approved its Governance Framework (GF), three Kenyan software developers (Mbora Maina, Amos Kibet, Chris Achinga) and I translated the document into Kiswahili. The translation gives context as to the practicability of VCs in the real world.

Innovation waits for no man. The pandemic has proved so. The adjustment brought by this disease has surely made some processes redundant and will bring gamechanging technologies to the foreground. As Peter Drucker famously said, “Innovate or die”.

References

Figure 1: A Verifiable Credentials Primer, by Manu Sporny, Digital Bazaar

W3C Verifiable Credentials - https://www.w3.org/TR/vc-data-model/

Digital Credentials for a Safer World was originally published in Vibranium ID on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 30. July 2020

Markus Sabadello on Medium

The Universal Resolver Infrastructure

A DIF-hosted resource for community development Introduction It has been almost three years since DIF began working on the Universal Resolver (GitHub: Universal Resolver) — a foundational piece of infrastructure for the Decentralized Identity ecosystem (see the original announcement). Since then, our vision of being interoperable across ledgers and DID methods has seen a lot of support. Thanks t
A DIF-hosted resource for community development

Introduction

It has been almost three years since DIF began working on the Universal Resolver (GitHub: Universal Resolver) — a foundational piece of infrastructure for the Decentralized Identity ecosystem (see the original announcement). Since then, our vision of being interoperable across ledgers and DID methods has seen a lot of support. Thanks to community contributions, the Universal Resolver now supports around 30 different DID methods.

Today, we are happy to announce an updated set of instances where the Universal Resolver is deployed. One stable and one experimental version will be iterated, maintained, and hosted by DIF as a service to the community!🎉

While this is undoubtedly a useful resource for research, experimentation, testing, and development, it is important that it not be mistaken for a production-grade universal resolver. It should be pointed out that:

* This infrastructure is neither intended nor approved for production use cases, and nobody should rely on it for anything other than development and testing purposes.
* These two specific deployments are not production-ready.
* The preferred scenario continues to be that all DID-based information systems, run by a method operator or otherwise, production or otherwise, host their own instance of the Universal Resolver (or other DID Resolution tools).
* DIF reserves the right to limit or modify the performance of this free service in case usage for production, commercial, and/or malicious purposes is detected.

Two Deployments

The following two deployments are now available as a community service:

* https://resolver.identity.foundation/ — Hosted on IBM Cloud by DIF (thanks IBM!). While not considered production-ready, this instance is expected to be relatively stable. It will be tested before and after manual updates from time to time, with versioned releases.
* https://dev.uniresolver.io/ — Hosted on AWS by DIF. This instance is more experimental, will be updated frequently, and is connected to CI/CD processes. It may be down from time to time or have unexpected functionality changes.

Note: For backward compatibility, the original URL https://uniresolver.io/ will now redirect to https://dev.uniresolver.io/.
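If you want to try the hosted instances from code, something like the Python snippet below should work. It assumes the HTTP interface at /1.0/identifiers/<did> used by the public frontends; check the repository documentation if the path has changed, and substitute a DID of a method the resolver supports (the DID shown is a placeholder).

    import json
    import urllib.request

    # Placeholder DID; replace it with any DID of a supported method.
    did = "did:sov:WRfXPg8dantKVubE3HX8pw"
    url = f"https://dev.uniresolver.io/1.0/identifiers/{did}"

    with urllib.request.urlopen(url, timeout=30) as response:
        result = json.load(response)

    # The result wraps the resolved DID document plus resolution metadata.
    print(json.dumps(result, indent=2)[:500])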

Documentation

See the following links for more information about testing, release, and deployment processes of the Universal Resolver:

Photo by Anne Nygård

* AWS Architecture: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/dev-system.md
* CI/CD Process: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/continuous-integration-and-delivery.md
* Branching Strategy: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/branching-strategy.md
* Release process: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/creating-releases.md

Periodically, this standing work item is discussed in the Identifiers and Discovery Working Group, so that group’s recorded meetings and discussions on Slack and mailing list may contain further insight on the above topics.

The Universal Resolver Infrastructure was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 29. July 2020

Bill Wendel's Real Estate Cafe

Covid seeding new era of consumer protection in real estate?

At the Real Estate Cafe, we’re always cooking up something creative — often revisiting anniversaries and ideas that were “half baked” and still need collaborators… The post Covid seeding new era of consumer protection in real estate? first appeared on Real Estate Cafe.

At the Real Estate Cafe, we’re always cooking up something creative — often revisiting anniversaries and ideas that were “half baked” and still need collaborators…

The post Covid seeding new era of consumer protection in real estate? first appeared on Real Estate Cafe.

Tuesday, 28. July 2020

Doc Searls Weblog

To hurt or help?

The choice above is one I pose at the end of a 20-minute audioblog I recorded for today, here it is in .mp4: And, if that fails, here it is in .mp3: The graphic represents the metaphor I use to frame that choice.

The choice above is one I pose at the end of a 20-minute audioblog I recorded for today, here it is in .mp4:

http://blogs.harvard.edu/doc/files/2020/07/2020_07_28_audioblog.m4a

And, if that fails, here it is in .mp3:

http://blogs.harvard.edu/doc/files/2020/07/2020_07_28_audioblog.mp3

The graphic represents the metaphor I use to frame that choice.


Rebecca Rachmany

Just because I'm paranoid doesn't mean they aren't out to get me.

Just because I'm paranoid doesn't mean they aren't out to get me. Excellent article. Thanks for the reflection and critique.

Saturday, 25. July 2020

@_Nat Zone

[2020-07-28] SIOP Virtual Meetup #2

From 8 a.m. on July 28 (Japan time), the SIOP V… The post [2020-07-28] SIOP Virtual Meetup #2 first appeared on @_Nat Zone.

Starting at 8:00 a.m. on July 28 (Japan time), we will hold the second SIOP Virtual Meetup. The first meetup was aimed at Atlantic time zones, so the second is aimed at Pacific time.

The event is free, but we are using Eventbrite for communications and so on. Registering also gets you the event URL, so if you are interested, please do sign up. (Incidentally, General Admission is sold out; there are still a few tickets left for OpenID Foundation Members.)

https://www.eventbrite.co.uk/e/siop-virtual-meetup-2-tickets-113754506792

The agenda has not been finalized yet (as of the 25th), but it is expected to include at least the following. (This alone will probably be plenty.)

* OpenID Claims Aggregation (Nat Sakimura/Edmund Jay)
* OpenID Self Issued Identifiers (Tom Jones)
* SIOP Laundry List (Tobias Looker)

The post [2020-07-28] SIOP Virtual Meetup #2 first appeared on @_Nat Zone.


[2020-07-27] Identiverse: So You Want to Base on Consent?

From 3:30 a.m. on July 28, Japan time, "So Y… The post [2020-07-27] Identiverse: So You Want to Base on Consent? first appeared on @_Nat Zone.

Starting at 3:30 a.m. on July 28, Japan time, I will be speaking at Identiverse under the title "So You Want to Base on Consent?"

The title is inspired by Glenn Gould's piece "So You Want to Write a Fugue?": while admitting the undertaking is in some sense reckless, the talk draws on ISO/IEC 29184 to explain what you should actually do.

The opening lyrics of "So You Want to Write a Fugue?" go like this:

So you want to write a fugue?
You’ve got the Urge to write a fugue,
You’ve got the nerve to write a fugue,
So go ahead.

Glenn Gould: So you want to write a fugue (1963)

Oho, so you want to write a fugue?
You have the urge to write a fugue,
you have the nerve to write a fugue,
so go ahead and try.

The above is my own translation.

After that opening, the piece goes on to demonstrate various fugue techniques within the song itself.

Glenn Gould: So you want to write a fugue? (1963)

This talk replaces "wanting to write a fugue" with "wanting to base things on consent." In other words:

Oho, so you want to base it on consent?
You have the urge to do it with consent,
you have the nerve to see it through with consent,
so go ahead and try.

The content is… roughly an English version of what I presented at PWS on July 21. If you are interested, please have a look.

Monday, July 27, 12:30 – 12:55 (Mountain Time)

So You Want to Base on Consent?

Many people seem to believe that having their customers press an "Agree" button is good enough to collect their "consent". That's actually not the case. Obtaining privacy consent has a very high bar, partly because it is the exception mechanism that you can resort to only when other lawful bases for the processing of personal data do not work.

This session will briefly touch on other lawful bases and what is needed for potentially valid consent, then goes on to explain the requirements for privacy notice and consent process set out in “ISO/IEC 29184 Online privacy notices and consent”.

ISO/IEC 29184 is an international standard that has been in the making for the last 5 years. Stakeholders involved in the discussion included data protection authorities around the globe, the technical community, lawyers, and businesses. It sets out requirements for 1) what needs to be in a privacy notice, 2) what needs to be done in obtaining consent, and 3) what needs to be done in maintaining privacy notices. For any business that wants to respect customer privacy, this document provides excellent guidance on what needs to be followed.
Speaker/s: Nat Sakimura
Topic: Privacy

Sub Topic: Standards, Architecture & Deployment, Consumer Identity, User Experience, Vision & Strategy

The post [2020-07-27] Identiverse: So You Want to Base on Consent? first appeared on @_Nat Zone.


[2020-07-22] Open Banking x OIDF Workshop

This too is an after-the-fact report, but from 1 a.m. (Japan time) in the early hours of 7/22… The post [2020-07-22] Open Banking x OIDF Workshop first appeared on @_Nat Zone.

This too is an after-the-fact report: starting at 1 a.m. on July 22, Japan time, we held a joint workshop between Open Banking UK and the OIDF. The agenda was as follows.

OIDF_OIDF-OBIE-Workshop-at-OSW2020_Agenda_v2

The slides I used are below.

The post [2020-07-22] Open Banking x OIDF Workshop first appeared on @_Nat Zone.


[2020-07-21] Financial Data Exchange + OIDF Workshop

This is an after-the-fact report, but on July 21 the Financial … The post [2020-07-21] Financial Data Exchange + OIDF Workshop first appeared on @_Nat Zone.

This is an after-the-fact report, but on July 21 we hosted a Financial Data Exchange + OpenID Foundation workshop. Online, of course.

The agenda was as follows.

Agenda:
* Introductions – 5 min
* Setting the Stage – Don Cardinal (Managing Director: Financial Data Exchange) & Nat Sakimura (Chairman of the OpenID Foundation) – 10 min
* FDX initiatives and Roadmap – Dinesh Katyal (Director Product: Financial Data Exchange) – 10 min
* OpenID Financial-Grade API Work Group Status – Torsten Lodderstedt – 10 min
* Self Certification Demo – Joseph Heenan – 15 min
* Open discussion – All; Moderated by Don Cardinal and Nat Sakimura – 30 min
* Next steps – All; Moderated by Don Cardinal and Nat Sakimura – 10 min

I used the following slides for my opening remarks.

Among the things I learned at this workshop are the following:

* OFX has become a working group within FDX.
* FDX has adopted FAPI ver. 1 Part 1, and as it builds out transaction APIs going forward it will move toward adopting Part 2 or FAPI ver. 2.

Following on from this, FDX and the OIDF will begin joint work in the near future. I will report again once the details are decided.

The post [2020-07-21] Financial Data Exchange + OIDF Workshop first appeared on @_Nat Zone.

Wednesday, 22. July 2020

Here's Tom with the Weather

Houston has https://houstonemergency.org/alerts/ (may or may not include aliens)

Tuesday, 21. July 2020

Doc Searls Weblog

A rare sky treat

Across almost 73 laps around the Sun, I’ve seen six notable comets. The fifth was Hale-Bopp, which I blogged about here, along with details on the previous four, in 1997. The sixth is NEOWISE, and that’s it, above, shot with my iPhone 11. There are a couple other shots in that same album, taken with […]

Across almost 73 laps around the Sun, I’ve seen six notable comets. The fifth was Hale-Bopp, which I blogged about here, along with details on the previous four, in 1997. The sixth is NEOWISE, and that’s it, above, shot with my iPhone 11. There are a couple other shots in that same album, taken with my Canon 5D Mark III. Those are sharper, but this one shows off better.

Hey, if the comet looks this good over the lights of Los Angeles, you might have a chance to see it wherever you are tonight. It’ll be under the Big Dipper in the western sky.

Every night it will be a bit dimmer and farther away, so catch it soon. Best view is starting about an hour after sunset.

 

 

Monday, 20. July 2020

Justin Richer

The document is now RFC8485: https://tools.ietf.org/html/rfc8485

Thursday, 16. July 2020

nosuchcommonsense

ShesGeeky2020

I’m really looking forward to the upcoming ShesGeeky2020 – I was part of the last in person gathering and I continue to be part of a co-organizing triple threat 🧡

I’m really looking forward to the upcoming ShesGeeky2020 –

I was part of the last in person gathering and I continue to be part of a co-organizing triple threat


#BlackLivesMatter

This was the original hope of the blog – White folks – or anyone who is able to see the impact their own access to resources and power and knows the above statement needs centering right now. How are you doing what you can to make this statement a reality in the immediate spaces around … Continue reading #BlackLivesMatter →

This was the original hope of the blog –

White folks – or anyone who is able to see the impact their own access to resources and power and knows the above statement needs centering right now.

How are you doing what you can to make this statement a reality in the immediate spaces around you?

Reading books and participating in conversations is such an important first step- and


Wip Abramson

Nobody is in Control

My first attempt at the Cabaret of Dangerous Ideas. Scary… I was nervous and I know I could have conveyed my thoughts with more clarity and…

My first attempt at the Cabaret of Dangerous Ideas. Scary…

I was nervous and I know I could have conveyed my thoughts with more clarity and coherence. I virtually missed out my section on trust - our way of navigating the uncertainties of our environment. Still, I am grateful for the opportunity and everyone who helped make it happen. I largely enjoyed myself. And I learnt a lot. I look forward to hopefully attempting a full show next year!

My show, #AreYouInControl?, was an attempt to condense and convey a period of reflection on the role and impact of identity technology in the complex system that is human society. A web of relationships and influence that limits and expands the possibilities of individuals as they navigate uncertainty and seek to optimise values perceived to be important in a given context at a given moment in time and space.

My background is not in social science; I research digital identity, privacy and, to a lesser extent, cryptography. The digital world represents this fascinatingly tangled mesh of human social and economic interaction. It has infinitely expanded the set of possible possibilities. The fact that we could run a virtual show is a testament to the expansion of capabilities to interact. It has also created new risks - privacy, security, surveillance, manipulation. And new constraints on interaction: the set of characters and emojis we can use to communicate with, the mechanisms we use to identify and authenticate others, the set of possible actions a platform supports.

These new environments that we exist within are no longer inanimate, they are designed, developed and populated by human beings. The environment has become a source of data, a digital representation of collective memory. Whether intentionally or not, the constraints, risks and possibilities emergent in these virtual spaces are perceived subjectively by each of us. There can be little doubt that these systems present powerful mechanisms for influence at the individual, group and societal level.

The challenge that we must all grapple with, is power to who? For what purpose? Who decides?

Digital technology is neither good nor bad. It is up to us to decide the values these virtual spaces incentivise, what rules they enforce, who they benefit and how they can be changed.

So are you in control?

No, but is control something we should aspire to? Uncertainty and independent thought are the difference that makes a difference in our complex human system. We should embrace them and protect them.

Instead of control, we should strive for influence and agency driven through intention and the freedom from the intrusive influence of others intentionally kept beyond our awareness. I am interested in technology that can foster cooperation, participation and democratic governance among individuals so they can become empowered agents for change in our society.

I am not sure how much of the above I managed to convey. What can I say, I certainly was not in control despite the influence I attempted to exert through practice and preparation.

My attempt to prepare ...

Wednesday, 15. July 2020

Doc Searls Weblog

How long will radio last?

The question on Quora was How long does a radio station last on average? Here is my answer, which also addresses the bigger question of what will happen to radio itself. Radio station licenses will last as long as they have value to the owners—or that regulators allow them to persist. Call signs (aka call letters) come […]

These are among the since-demolished towers of the once-mighty WMEX/1510 radio in Boston.

The question on Quora was How long does a radio station last on average? Here is my answer, which also addresses the bigger question of what will happen to radio itself.

Radio station licenses will last as long as they have value to the owners—or that regulators allow them to persist. Call signs (aka call letters) come and go, as do fashions around them.* But licenses are the broadcasting equivalent of real estate. Their value is holding up, but it won’t forever.

Arguing for persistence is the simple fact that many thousands of radio station licenses have been issued since the 1920s, and the vast majority of those are still in use.

Arguing for their mortality, however, are signs of rot, especially on the AM band, where many stations are shrinking—literally, with smaller signals and coverage areas—and some are dying. Four reasons for that:

1. FM and digital media sound much better. Electrical (and especially computer) noise also infects all but the strongest signals. It also doesn’t help that the AM radios in most new cars sound like the speakers are talking through a pillow.
2. Syndicated national programming is crowding out the local kind. This is due to consolidation of ownership in the hands of a few large companies (e.g. Entercom, Cumulus, iHeart) and to the shift of advertising money away from local radio. The independent local AM (and even FM) station is in the same economic pickle as the independent local newspaper.
3. AM transmission tends to come from towers, or collections of them, on many acres of land. Now, as suburbs spread and the value of real estate goes up, the land under many AM transmitters exceeds the value of the stations themselves. A typical example is KDWN/720 in Las Vegas. Since it was born in 1975, KDWN has been 50,000 watts day and night (the legal max), with a night signal that blanketed the whole West Coast. But, in the last year, the station moved to a site where it can share another station’s towers, downscaling the signal to just 25,000 watts by day and 7,500 watts by night. Here is a 2019 Google StreetView of the old site, with a For Sale sign. Note also that KDWN now identifies as “101.5 FM / 720 AM – The Talk of Las Vegas.” The 101.5 is its 250-watt translator (signal repeater), known legally as K268CS. From its perch atop The Strat (formerly the Stratosphere) on The Strip, the translator puts out a good-enough FM signal to cover the heart of the Las Vegas metro. Today many AM stations exist only as an excuse to operate FM translators like this one. Even fully successful AM stations play this new game. WBBM/780, the legacy all-news station that (rarely among AMs) is still ranked #1 in the Nielsen Ratings for Chicago, sold the land under its old towers and now shares the towers of another station, where it radiates with less power.
4. In the Battle of the Bands, FM won. For evidence, look at the Nielsen Audio Ratings for the Washington DC region. Only two AMs show, and they’re at the bottom. One is WBQH/1050, a regional Mexican formatted station with an 0.2% share of listening and a signal that is only 44 watts at night. And most of the listening likely owes to the station’s 180-watt translator on 93.5fm. (Both only cover a few northside suburbs and the northern tip of DC.) The other station is WSBN/630am, a sports station with an 0.1% share: a number that couldn’t be lower without disappearing. That license was once WMAL, which sold off the land under its towers a few years ago, moving far out of town to “diplex” on the towers of yet another station that long ago sold the land under its original towers. That other station is now called WWRC/570. It’s a religious/conservative talker with no ratings that was once WGMS, famous in its glory years as a landmark classical station.

Despite this, the number of AM licenses in the U.S persist in the thousands, while the number of abandoned AM licenses number in the dozens. (The FCC’s Silent AM Broadcast Stations List is 83 stations long. The Silent FM Broadcast Stations List is longer, but includes a lot of translators and LPFMs—low-power stations meant to serve a few zip codes at most. Also, neither list includes licenses that have been revoked or abandoned in the distant past, such as the once-legendary KISN in Portland, Oregon.)

What I’ve reported so far applies only to the U.S. AM band, which is called MW (for mediumwave) in most of the rest of the world. In a lot of that world, AM/MW is being regulated away: abandoned by decree. That’s why it is gone, or close to it, in some European countries. Canada has also scaled back on AM, with the CBC  moving in many places exclusively to FM.

The news is less bad for FM, which has thrived since the 1970s, and now accounts for most over-the-air radio listening. The FCC has also done its best to expand the number of stations and signals on the FM band, especially in recent years through translators and LPFMs. In Radio-Locator’s list of stations you’ll receive in Las Vegas, 16 of the 59 listed signals are for translators and LPFMs. Meanwhile only 18 stations have listenable signals on AM, and some of those signals (such as KDWN’s) are smaller than they used to be.

Still, the effects of streaming and podcasting through the Internet will only increase. This is why so many stations, personalities, programming sources and station owners are rushing to put out as many streams and podcasts as possible. Today, every phone, pad and laptop is a receiver for every station with digital content of any kind, and there are many more entities competing for this “band” than radio stations alone.

While it’s possible that decades will pass before AM and FM are retired completely, it’s not hard to read the tea leaves. AM and FM are both gone now in Norway, which has switched to Digital Audio Broadcasting, or DAB, as has much of the rest of Europe. (We don’t have DAB in the U.S., and thus far there is very little interest in it.)

Still, I don’t doubt that many of the entities we call “stations” will persist without signals. Last summer we listened to local radio from Santa Barbara (mostly KCLU) while driving around Spain, just by jacking a phone into the dashboard and listening to Internet streams through the cellular data system. Even after all their transmitters get turned off, sometime in the far future, I’m sure KCLU will still be KCLU.

The process at work here is what the great media scholars Marshall McLuhan and his son Eric  (in Laws of Media: The New Science) call retrieval. What they mean is that every new medium retrieves the content of what it obsolesces. So, much as print retrieved writing and TV retrieved radio, the Internet retrieves damn near everything it obsolesces, including TV, radio, print, speech and you-name-it.

In most cases the old medium doesn’t go away. But broadcasting might be different, because it exists by grace of regulation, meaning governments can make it disappear. The FCC has already done that to much of the UHF TV band, auctioning off the best channels to cellular systems. This is why, for example, T-Mobile can brag about their new long-range “5G” coverage. They’re getting that coverage over what used to be UHF TV channels that stations auctioned away. It’s also why, for example, when you watch KLCS, channel 58 in Los Angeles, you’re actually watching channel 28, which the station shares with KCET, using the same site and transmitter. The Los Angeles Unified School District collected a cool $130,510,880 in a spectrum auction for channel 58.

So, when listening to the AM and FM bands drops sufficiently, don’t assume the FCC won’t say, “Hey, all the stations that matter are streaming and podcasting on the Internet, so we’re going to follow the path of Norway.” When that happens, your AM and FM radios will be as useful as the heavy old TVs you hauled out to the curb a decade ago.

Additional reading: The slow sidelining of over-the-air radio  and AM radio declared dead by BMW and Disney.

*In the 1970s, the hot thing in music radio was using high-value Scrabble letters: Z, Q and J, and combining those with “dial” positions, e.g. “Z-100.”

Tuesday, 14. July 2020

Timothy Ruff

How Verifiable Credentials Bridge Trust Domains

CREDIT: © DAVID NOTON PHOTOGRAPHY / ALAMY

This is part 3 of a 3-part series:

Part 1: Verifiable Credentials Aren’t Credentials. They’re Containers.

Part 2: Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

Part 3: How Verifiable Credentials Bridge Trust Domains

TL;DR:

* Most organizations aren’t digitally connected to one another, creating “trust domain” barriers that are slow, manual, and expensive to traverse.
* Similar to shipping containers, Verifiable Credentials (VCs) can bridge trust domains between and within organizations, by using you as their courier.
* The potential ramifications of transitive trust with rapid verifiability are endless, when boundaries between trust domains for many of today’s friction-filled interactions all but disappear.

The Barriers VCs Can Bridge: Trust Domains

https://www.coloradoinfo.com/royal-gorge-region

Today, if you’re applying for a loan at your financial institution (FI) and need to prove your employment, how can you do so digitally, instantly? You can’t, in most cases. Why? Because it’s not feasible for every FI to have a direct, digital connection to every possible employer they may need data from. The same boundaries exist between schools, when a student transfers and desires to receive credit for classes taken: it’s untenable for every school to have a direct connection to every other school.

This leaves FIs, employers, schools and other organizations in separate, unconnected trust domains, unable to directly exchange trusted data and reliant on manual processes, often through third-party data brokers acting as intermediaries. Exchanging data between trust domains today is analogous to the manual, labor-intensive “break-bulk” process used for millennia for all global trade prior to shipping containers, and is similarly slow, complicated, and expensive.

This is also why VC adoption may be resisted by incumbent data intermediaries as vigorously as labor unions protested shipping containers.

And this is the same reason we have so many usernames and passwords in the first place: there is no universal way to traverse trust domains in the digital realm. Identity federations within a single trust domain, like what’s offered by the current IAM industry to large organizations, solve this problem elegantly. Cross-domain identity federations, however, have repeatedly attempted to address this and failed for various reasons, leaving the problem unresolved and ballooning.

The boundaries between digital trust domains are understandably high, as the stakes can be high, the risk of fraud ever-present, and fraudsters can comfortably ply their trade from anywhere. We know these boundaries well:

* Having many usernames and passwords
* Cumbersome forms and onboarding processes
* Verbally authenticating when calling a service center, and re-authenticating when being transferred
* Waiting for agreements to be signed or consent to be given
* Waiting for any kind of application to be approved
* Slow verifications of any kind of documents, records
* Many slow and/or tedious processes that rely on verifications

Any time you are slowed down or prevented from doing something because you can’t quickly prove something, it’s a manifestation of boundaries between trust domains, much like the bygone situation in shipping that necessitated “break-bulk.” If gatekeepers, human or automated, could quickly verify everything they needed to verify, they could quickly open their gates.

With VCs, they now can.

How VCs Bridge Trust Domains

This post already assumes a basic understanding of how VCs work, but at the risk of being redundant, I want readers to understand, mechanically, how VCs bridge trust domains.

https://despair.com/products/dysfunction

I think this ^^ is one of the funnier “demotivators” at despair.com, a site everyone should check out for a good laugh. As with most humor, what makes a demotivator funny is the thread of truth that runs through it.

The relevant truth in this poster is this: the only consistent feature in ALL of your relationships — not just your dissatisfying ones — is you. You are the common factor between all your employers, your FIs, your schools, and every other organization you deal with, and that makes you the ideal courier of data between them.

If only you had the data… and if only it could be trusted when you delivered it…

Well, this is where our super duper data shipping container comes in, the VC. Here’s how it all comes together:

1. You start with an empty digital SSI wallet. Not the Apple kind or Android kind or anything else similarly proprietary and non-portable, but a standardized self-sovereign digital wallet that’s all yours, where you literally hold the digital keys to it.
2. You offer/accept connections to/from other people, organizations, or things using QR codes or other means of bootstrapping. These are peer-to-peer relationships that you now own and control, and not through some platform or company where they could be seen, controlled, or taken away from you.
3. When VCs are offered to you by your connections, you choose whether to accept them into your wallet. When you do, you’re now a “holder.” An entity who gives you a VC, whether person, organization, or thing, is called an “issuer.”
4. When you want to prove something to someone, or share verifiable data, you must first connect with them (#2), then you can share one or more of your VCs. You can also share a subset of data from a VC and not the whole thing, or just proof that you have a VC, or a compound proof from multiple VCs.
5. In seconds, verifiers can verify four things about what you’ve shared:
* Who (or what) issued it to you
* It was issued to you and not someone else
* It hasn’t been tampered with
* It hasn’t been revoked by the issuer

Voila! Data has just left one trust domain and been accepted into another, without any direct connections between the two.

Most important: verifiers can confirm these four things cryptographically, without having to contact the original issuer. If verifiers had to contact the original issuer to confirm things, that defeats the whole purpose and we’d be back to square one. But they don’t, so we’re not, and that changes everything.
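For readers who have not seen one, here is a pared-down credential in the shape of the W3C Verifiable Credentials data model, written as a Python literal and annotated with which field backs each of the four checks. All identifiers and values are invented, and a real credential carries a complete proof and often a richer status entry.

    # Illustrative only: field names follow the W3C VC data model, values are made up.
    membership_vc = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "MembershipCredential"],
        "issuer": "did:example:credit-union",           # 1. who (or what) issued it to you
        "issuanceDate": "2020-07-14T12:00:00Z",
        "credentialSubject": {
            "id": "did:example:member-holder",          # 2. issued to you and not someone else
            "memberSince": "2012-03-01",
        },
        "proof": {                                      # 3. signature shows it hasn't been tampered with
            "type": "Ed25519Signature2018",
            "verificationMethod": "did:example:credit-union#key-1",
            "proofPurpose": "assertionMethod",
            "jws": "eyJhbGciOiJFZERTQSJ9..<signature-bytes>",
        },
        "credentialStatus": {                           # 4. checkable entry shows it hasn't been revoked
            "id": "https://example.org/status/24#94567",
            "type": "RevocationList2020Status",
        },
    }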

By using you as the courier of verifiable data between organizations you deal with, unrelated trust domains now have a bridge, and without requiring direct connections to one another. The key to making this all work is that issuers, holders, and verifiers all use the same interoperable protocols. This is why cooperation among competitors and the VC standards work¹ is critical, and exciting that it has made so much progress over the last several years.

The Vast Potential of Verifiable Credentials

CREDIT: Evernym, Inc.

The potential ramifications of transitive trust with rapid verifiability are endless, when boundaries between trust domains for many of today’s friction-filled interactions all but disappear:

* No more usernames or passwords, replaced by trusted VCs
* No more forms, quick and easy onboarding everywhere
* No more verbal authentication, you’re recognized everywhere you choose to be
* Instant, digital, multi-party consent (after waiting for slow humans to decide something)
* Instant application approvals
* Instant verifications of documents, records
* The efficient exchange of any trusted information

The effects for user experience would be wonderful. Friction from trying new vendors, products, and services would reduce to almost nothing. Customers can be instantly recognized as valued guests rather than treated as strangers at each new interaction. Processes and workflows can be simplified, automated, and accelerated, and new, nearly seamless experiences offered.

In Conclusion

The current inability for data to efficiently traverse trust domains bears a striking resemblance to global trade complexities prior to 1956; Verifiable Credentials bear a striking resemblance to its revolutionary solution, the shipping container.

Due to the enormous physical, logistical and even political obstacles to adoption, it took shipping containers more than a decade to hit their stride, for everyone to realize their potential. With VCs it will happen much more quickly, as much of our world is already digital, and connected.

I have often said during my five years in SSI that the tech titans of tomorrow are being created today, and I believe that now more than ever. If there’s a sweet spot for when to jump on the SSI bandwagon, we may still be a tad early, but not much, and I don’t think it will last as long as comparable opportunity sweet spots may have in the past.

VCs are the shipping containers of our lifetime, with all that entails.

This is part 3 of a 3-part series:

Part 1: Verifiable Credentials Aren’t Credentials. They’re Containers.

Part 2: Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

¹ VC standards and governance efforts can be followed at W3C, the Trust Over IP Foundation, and the Decentralized Identity Foundation.


Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

https://www.atclogistic.net/service/

This is part 2 of a 3-part series:

Part 1: Verifiable Credentials Aren’t Credentials. They’re Containers.

Part 2: Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

Part 3: How Verifiable Credentials Bridge Trust Domains

TL;DR:

* Verifiable Credentials (VCs) aren’t just functionally similar to shipping containers, they are economically similar as well, and will bring an explosion in global digital trade like shipping containers brought for global physical trade.
* The right container at the right time can change the world, dramatically.
* Shipping containers caused a precipitous drop in shipping costs (over 95%) by bridging the physical contexts between ships, trains, and trucks; VCs can deliver a similar cost reduction in the digital realm, by bridging the contexts between organizations.
* The transitive trust enabled by VCs enables cooperative network effects, which are even bigger than regular network effects.

If VCs Are Economically Similar to Shipping Containers, It’s a Big Deal

The comparison of VCs to shipping containers is much more than a passing one. This analogy, which I’ve been using in presentations and conversations for the last couple of months, has been a big improvement when teaching the basic concept of VCs from a functional standpoint. It is simple, clear, and everyone seems to quickly understand it, a rare breath of messaging fresh air in a world of confusing SSI lingo.

But more important than messaging clarity, if VCs are economically similar to shipping containers, the ramifications to the digital economy are hard to overstate; everyone in SSI will be in for one heckuva ride.

In fact, I believe this possibility — that Verifiable Credentials are economically similar to shipping containers — could be the most impactful business realization of my lifetime.

The Right Container = Economic Explosion

From a humble beginning in 1956, “containerization” brought an explosion in global trade and became “more of a driver of globalisation than all trade agreements in the past 50 years taken together,” according to The Economist.

But how?

Before 1956, when shipping containers were invented by trucking entrepreneur Malcolm McLean, the world already had containers of many kinds — barrels, bags, boxes and crates — that needed to be painstakingly emptied when a ship arrived at port, and contents moved into other containers on trucks and trains. This manual “break-bulk” process, unchanged for millennia, was slow, complicated, dangerous, and costly.

Instead, Mr. McLean envisioned a container that was sturdy, stackable, secure, and most important: moved intact from a truck onto a train or a ship, and then back again when a ship reached its destination, without being re-packed along the way. It took a few years to really take off, but when shipping containers proved their immense usefulness in the Vietnam war it became clear they were the future, and the ISO then introduced the first shipping container standards that the industries of ships, trains, and trucks would begin to adhere to.

The rest, as they say, is history.

The efficiency gains were so dramatic that producers of raw materials and small manufacturers in every corner of the world could now market their products to every other corner of the world, changing global trade forever. “Made in China” wouldn’t be a thing today without “containerization.”

From $5.86 to $.16

There were many other containers in existence prior to 1956; why did this particular container make such a disproportionate difference?

“Trans-contextual value transfer” is the answer, in the economic terms my colleague Dr. Sam Smith would use: the ability to efficiently move value between the very different contexts of ships, trucks, and trains. By moving the entire container, and not emptying it until it has reached its final destination, the break-bulk process was eliminated entirely, causing a precipitous drop in shipping times and overall transaction costs. Stunningly, the price to unload one ton of goods fell from $5.86 in 1956 to only $.16 today.

Whenever transaction costs are reduced there is an increase in transactions proportional to the reduction¹, and that is exactly what happened.

Bridging contexts in any arena is hard, and expensive, but the gains from discovering how to do it efficiently can be enormous. Standardized shipping containers bridged physical contexts between ships, trains, and trucks and changed the world forever; VCs will do a similar thing for data, lowering transaction costs by efficiently bridging contexts between trust domains.

VCs Do for Data What Shipping Containers Did for Physical Goods

http://www.exlindia.com/train-cargo-3/

Trust domains and how VCs bridge them, mechanically, is the subject of Part 3 of this series, “How VCs Bridge Trust Domains,” and so will not be treated fully here, but a quick summary seems appropriate.

The current process of moving trusted data between unconnected organizations is uncannily similar to the break-bulk shipping process prior to 1956: data must be carefully removed from the data containers of the sending organization and re-packaged into the data containers of the receiving organization. As an example, just ask anyone in the know how academic transcripts in the U.S. are really exchanged between schools today: it is manual, slow, complicated, expensive, and often requires the paid help of 3rd-party intermediaries. Again, this is strikingly similar to break-bulk.

VCs address this chasm the way shipping containers addressed theirs. Data senders (“issuers”) and data receivers (“verifiers”) first agree on a standardized container². Issuers pack VCs with a desired payload and then issue them, either to the wallet of an individual “holder” or into storage in a database or blockchain. When verified, the verifier can rely on the fidelity of the data payload because they can verify the provenance and fidelity of the container it came in, and critically, perform that verification cryptographically without contacting the issuer.

We don’t yet know to what degree VCs will reduce costs. Earlier I cited the more than 95% drop in costs shipping containers delivered; that level of efficiency wasn’t fully realized until most ships, trains, and trucks had made modifications to adhere to the standards and were in full production. The same thing will happen with VCs. While it will take some time, with the complete elimination of “break-bulk”-style data exchange, and of the then-obsolete data middlemen and their associated time, cost, and manual processes, the cost reduction will be massive.

When comparing VCs and shipping containers, you discover that the nature of the problem they address is similar, the way they address it is similar, and the size of the markets — global physical trade and global e-commerce — is arguably similar. It follows that the reduction in transaction costs, the proportional increase in transactions, and the overall economic impact would be similar, too.

VCs Unlock Global Network Effects

Another, related dimension of the economic impact VCs will have on the digital economy, and one that’s more familiar than shipping containers but potentially just as impactful, is network effects.

VCs enable a powerful kind of network effect, a cooperative network-of-networks effect. Deeper research into how SSI and VCs enable cooperative network effects has been done by Dr. Samuel Smith, my partner at Digital Trust Ventures, in his seminal paper, Meta-Platforms and Cooperative Network-of-Networks Effects, so I will only summarize Sam’s findings here.

When networks “cooperate,” some value from one network is transferable to another. Cooperative network effects are inter-network and exponentially greater than intra-network effects, like comparing the internet to AOL. To understand cooperative network effects it’s helpful to first understand its cousin: competitive network effects³.

The ride sharing industry offers an excellent example of competitive network effects. When Uber acquires a new driver, that driver becomes more likely to also drive for Lyft, Uber’s direct competitor. Though Uber and Lyft may wish it were not so, every driver and rider gained by one network is just clicks away from being gained by the other, which is unfortunate for whoever spent large sums to first gain them. This effect accelerates competition, adds price pressure, and sows the seeds of mass defection if it ever becomes advisable. Of course there is a selfishly beneficial tradeoff too, when your network gains from another’s expansion.

In contrast, cooperative network-of-network effects occur when growth in one network benefits other, non-competitive networks. Literally, customers gained in one network or industry are accretive to other, non-competitive and even completely unrelated networks or industries.

For example, several credit unions (CUs) in the U.S. are now issuing VCs to members to enable a secure, streamlined experience when they call in, walk in, or log in; they call it MemberPass. While the first use cases are limited to within the issuing CU, any entity in any industry could also choose to accept MemberPass and offer a sleek onboarding and login experience that eliminates forms and passwords for anyone presenting one. What website wouldn’t prefer instant onboarding and authentication of new customers presenting cryptographically verifiable credentials issued by regulated U.S. financial institutions, instead of the friction of forms, usernames, and breachable passwords they’re forced to offer today?

Every new MemberPass issued becomes a new, easily onboardable prospective customer for any entity anywhere that accepts VCs, online or off. And value flows both ways, as CUs benefit from increased satisfaction whenever members successfully use MemberPass, from increased value provided to members and the loyalty that engenders, and from increased demand for MemberPass from both existing and prospective CU members.

These are cooperative network effects indeed, as they foster cross-industry collaboration. The only obstacle is awareness.

This kind of cooperation between unrelated networks, both intentional and unintentional, has rarely been observed or achievable, but with VCs it becomes predictable. The resulting ‘meta’ network effects — Smith’s Law — are exponential compared to intra-network effects, and also beyond Metcalfe’s Law. If you’re a student of the science of network effects, be sure to check out Sam’s paper. You’re in for a treat; the math for this is incredible.

In Conclusion

Shipping containers were a once-in-a-lifetime improvement in global trade, forever bridging the physical barriers between ships, trains, and trucks. VCs are the shipping containers of the digital realm, bridging trust domain barriers between organizations, departments, and even internal silos.

Shipping containers brought an unprecedented multi-decade, multi-trillion-dollar global economic expansion by dramatically reducing transaction costs for physical goods; VCs will do the same for digital goods, and economically change the world to a similar magnitude.

If that were VCs’ only economic value, it would be more than enough. But VCs also enable cooperative network-of-network effects, where issuances in one industry can benefit other, non-competitive industries, potentially many of them.

Excited yet? It’s time.

This article is part 2 of a 3-part series:

Part 1: Verifiable Credentials Aren’t Credentials. They’re Containers.

Part 3: How Verifiable Credentials Bridge Trust Domains

¹ M. C. Munger, “Tomorrow 3.0: Transaction costs and the sharing economy,” Cambridge University Press, 2018.
² This underscores the importance of the VC standards work that has been underway globally for years, and the exciting progress that has been and continues to be made at W3C and the Trust Over IP Foundation.
³ Network effects between competitive networks are technically also considered cooperative, but for the purpose of explanation I’m treating them here as separate and distinct.


Verifiable Credentials Aren’t Credentials. They’re Containers.

This is part 1 of a 3-part series:

Part 1: Verifiable Credentials Aren’t Credentials. They’re Containers.

Part 2: Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

Part 3: How Verifiable Credentials Bridge Trust Domains

TL;DR: Verifiable Credentials (VCs) aren’t actually credentials, they’re containers, like shipping containers for data. Having containers inside of containers is a feature, not a bug. Like physical credentials, the VC container is always verifiable, not its data payload.

The Credential That Isn’t

Despite their name, Verifiable Credentials (VCs) aren’t really credentials; they are containers, much like shipping containers for data, which actually makes them more useful and powerful than if they were credentials.

NOTE: The comparison to shipping containers is more than a passing one, as shipping containers are the most impactful containers ever devised and VCs are strikingly similar to them, both functionally and economically. Details of these similarities and their potential ramifications are covered in parts 2 and 3 of this series.

Like a shipping container, there’s a sender, called the “issuer”, who packs a VC’s data contents. The receiver of the VC container, called the “verifier”, unpacks and verifies its payload (or a subset proof thereof). Between issuers and verifiers there is typically a human carrying the VC in a standardized SSI digital wallet, but VCs can also be transported in other ways, or stored in a relational database¹ or a distributed ledger².

The payload of a VC can be any kind of data and is not limited to what might normally be considered a credential.

In fact, the word “credential” is a holdover from the worlds of identity and academia, where VCs began their standardization journey and have many important use cases. But recently I’ve seen people in academia get confused when the payload inside a VC is what the academic world calls a credential — like a diploma or a transcript — resulting in a credential within a (verifiable) credential. Others get confused when the VC payload isn’t a credential, in either academic or identity parlance, but we still call the VC a credential.

Now that VCs have the global momentum they do, I feel we need to “open the aperture” beyond identity and credentials. Here are some example VC payloads not typically considered credentials, all of which become cryptographically verifiable when delivered as a payload within a VC:

Consent, permissions, votes, opinions
Balances, totals, deficits, ranges, statistics, source data
Statements, agreements, contracts, invoices, sources
Confirmations, acknowledgements, attestations, assertions, affidavits
The speed, trajectory, weight, temperature or location of anything
Laws, regulations, statutes, rules, orders, decrees, declarations
Photos, videos, music, messages
Software code, hashes, files, the state of a database at a given point in time
Tests, procedures, prescriptions, diagnoses

Of course, objects typically considered credentials make ideal VC payloads:

Identities, names, addresses, usernames, titles, attributes, profiles, bios
Licenses, passports, visas, permits, tickets, coupons, vouchers, receipts
Skills, competencies, certificates, badges, diplomas, degrees, achievements, awards
Identifiers, codes, account numbers, ticket numbers

It is true that the identity- and achievement-related use cases in this second list are those most actively being pursued in the VC space right now, and many in the first list cannot be realized until there’s broader adoption by issuers and verifiers. Still, the word “credential” has a specific meaning in several industries, and it is consistently confusing to VC noobs for one simple reason: it is an inaccurate term for what a VC really is and does.

Truly, “Verifiable Credentials” should be renamed to “Verifiable Containers.” I can’t think of anything important we lose by doing so, and can think of much clarity and other benefits (detailed in this series) that we would gain. Of all the gnarly terminology challenges we face in the SSI space, this one seems like a slam dunk to me.

Containers Inside of Containers Is a Feature, Not a Bug
(Image: http://www.rubylane.com/item/712219-T118-121814-3/Russian-Matryoshka-Dolls-Nesting-Dolls)

On several occasions I’ve been asked something like: “We already have containers for learning achievements with Open Badges; why do we need another container?”

A fair question, but it presumes containers inside of containers aren’t useful or necessary for data, which isn’t necessarily the case.

Life is full of containers inside of containers. Products at the store come in boxes that arrived in larger boxes, stacked with other boxes onto a crate, which itself is another container, often shipped inside shipping containers, which are carried in/on the containers of ships, trains, and trucks. Data is stored inside files within folders, within other folders, inside of directories, inside of virtual machines, inside of partitions, inside a hard drive, within an enclosure, inside a computer, often on a shelf within a rack within a room inside a building.

You get the point.

Each level of container has a purpose and adds value and efficiency to the whole chain; try taking one away and it gets messy. Sub-specialties and entire industries are built up around the containers at each level, and they often don’t know much about the containers at the other levels. The people operating giant cranes at a port don’t know or care much about what’s inside the containers they’re moving, and they don’t need to.

(Image credit: Cam Geer)

Even though it is a container itself, like any file a VC must be held in another container: a database, blockchain, or preferably a standardized SSI wallet. If a wallet, that wallet must be held in another container that makes it usable, like a smartphone, a USB stick or even a chipped card.

A VC can hold any sort of data payload, even as simple as a clear-text “Yes” or “No” when granting or refusing consent. If the payload is more advanced, like an academic achievement, it should be encoded in a way that provides semantic meaning to a verifier, such as an IMS Comprehensive Learner Record (CLR). When a CLR is a VC’s payload, it’s an achievement inside of a CLR container inside of a VC container likely inside of a wallet. Take away the CLR container and you lose some semantic meaning for verifiers; take away the VC container and you lose the ability for CLRs to become portable alongside other, non-CLR payloads, forcing CLR users to adopt special capabilities unique to CLRs. If you repeat that specialized thinking for every kind of payload, from academia to healthcare to government, you quickly get back to a world full of bespoke containers and troublesome trust domains, right back where we started.
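To make the nesting concrete, here is a hypothetical sketch of an achievement inside a CLR inside a VC, expressed as plain data. The field names are simplified illustrations, not the actual IMS CLR or W3C VC schemas:

```python
# Illustrative nesting only; field names are simplified, not the real
# W3C VC or IMS CLR schemas.
credential = {                      # the VC container
    "issuer": "did:example:abc-university",
    "credentialSubject": {
        "id": "did:example:alice",
        "clr": {                    # a CLR container riding inside the VC
            "achievements": [
                {"name": "Intro to Databases", "result": "A"},
            ],
        },
    },
    "proof": {"type": "Ed25519Signature2020", "value": "..."},
}
```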

This is why a standardized shipping container for data is needed, and just as useful as its physical analog. Whether physical or digital, it’s Russian dolls everywhere you look, and that’s a good thing.

VC Containers Are Always Verifiable; Payloads Are Not

As with any container, with VCs it’s GIGO: garbage in, garbage out. The only thing that is always verifiable about a VC is the container itself: a verifier can cryptographically verify the container’s fidelity, but not always the veracity of its payload.

For example, ABC University could issue to you a VC with this statement: “blue is green.” Clearly blue is not green and someone was messing around, but we can still verify four elements of the VC’s fidelity:

1. ABCU issued it;
2. They issued it to you;
3. It has/hasn’t been tampered with since it was issued;
4. ABCU has/hasn’t revoked it.

Knowing who packed the container establishes the VC’s provenance to a specific issuer and directly influences a verifier’s willingness to rely on its payload, which is why many conflate this issue and believe that payloads are verifiable. But it is a significant distinction; a verifier can confirm that a VC container is authentic and from a trustworthy source, but still decide not to rely on its payload. (A payload can often be separately verifiable in some way, such as an Open Badge or even another VC, but that is beside the point here.)
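A hedged sketch of what those four fidelity checks might look like on the verifier side, using a simplified container of the form {"payload": ..., "signature": ...} (real implementations follow a specific proof suite and revocation/status mechanism):

```python
# Hypothetical verifier-side fidelity checks. The container shape and the
# revocation list are illustrative, not a specific VC proof suite.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def container_fidelity_ok(vc: dict, expected_holder: str,
                          issuer_public_key: Ed25519PublicKey,
                          revoked_ids: set) -> bool:
    message = json.dumps(vc["payload"], sort_keys=True).encode()
    try:
        # 1 & 3: the issuer signed it, and it hasn't been tampered with since.
        issuer_public_key.verify(bytes.fromhex(vc["signature"]), message)
    except InvalidSignature:
        return False
    # 2: it was issued to this holder.
    if vc["payload"]["credentialSubject"]["id"] != expected_holder:
        return False
    # 4: the issuer hasn't revoked it.
    if vc["payload"]["id"] in revoked_ids:
        return False
    return True  # container fidelity holds; payload veracity is a separate question
```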

This is how physical credentials work, too. A physical passport is a container that holds data (though only one type of highly regulated data); it is the physical container the airport verifies when you present it to them. Then, upon concluding the container is authentic, the airport usually proceeds to rely upon the data within the presented container. VCs function in the same manner but in the digital realm, and they are capable of carrying any data payload, which makes them far more flexible, useful, and powerful overall than physical credentials.

Of course, you don’t need a battery to share your driver’s license, so there’s that. But then again your passport and driver’s license can — and I believe will — become chipped³ and carry VCs just as chipped EMV payment cards carry other bits of data, no batteries required, so there’s that, too.

In Conclusion

If it’s true that VCs aren’t really credentials, and that calling them credentials often confuses people coming into the space, then we should seriously consider not calling them credentials.

The fact that VCs are containers capable of carrying any sort of verifiable data payload isn’t just a good thing, it’s a great one. And as you’ll read in parts 2 and 3 of this series, it has the potential to change the world as much as the shipping container did, and that’s a big deal. A huge one.

Finally, it is useful to put other data containers inside of VCs, and maybe even containers inside of those containers, all riding along and benefiting from the portability between trust domains enabled by a Verifiable Credential.

Or should we say, “Verifiable Container”?

This article is part 1 of a 3-part series:

Part 2: Like Shipping Containers, Verifiable Credentials Will Economically Transform the World

Part 3: How Verifiable Credentials Bridge Trust Domains

¹ Example: BC Gov (British Columbia) Verifiable Organization Network: https://vonx.io/
² Examples: https://www.velocitynetwork.foundation/ and https://trust.asu.edu/
³ German ID cards (“Personalausweis”) are chipped and NFC-accessible through a corresponding app. See here and here.


@_Nat Zone

[2020-07-21] Security Summit invited talk: “Privacy Policies That Inform the Individual: From ISO/IEC 29184”


I have been given the opportunity to deliver an invited talk titled “Privacy Policies That Inform the Individual: From ISO/IEC 29184” on Day 2 of the joint meeting of the 90th Computer Security and 38th Security Psychology and Trust research groups (the “Security Summit”).

Most Japanese companies publish something they call a “privacy policy.” Unfortunately, many of these documents exist mainly to protect the company publishing them, and they are practically incomprehensible to ordinary people who are not used to reading legal text. A genuine “privacy policy,” by contrast, must communicate to the individual and other stakeholders how their data will actually be handled.

In this talk I explain what such a proper “privacy policy” (in English, a “privacy notice”) should look like, following ISO/IEC 29184, which was published in June.

Registration is available at https://service.kktcs.co.jp/smms2/m2event/ipsj1/201800408 .

Session: Special Session (2), 14:30 – 15:30
(27) 2020-07-21 14:30–15:30
Invited talk: “Privacy Policies That Inform the Individual: From ISO/IEC 29184”
Nat Sakimura (Convenor, ISO/IEC JTC 1/SC 27/WG 5)

(Slides: 0721-Notice-and-Consent-ja-)



DustyCloud Brainstorms

Announcing FOSS and Crafts

I wrote recently about departing Libre Lounge but as I said there, "This is probably not the end of me doing podcasting, but if I start something up again it'll be a bit different in its structure." Well! Morgan and I have co-launched a new podcast called FOSS and Crafts …

I wrote recently about departing Libre Lounge but as I said there, "This is probably not the end of me doing podcasting, but if I start something up again it'll be a bit different in its structure."

Well! Morgan and I have co-launched a new podcast called FOSS and Crafts! As the title implies, it's going to be a fairly interdisciplinary podcast... the title says it all fairly nicely I think: "A podcast about free software, free culture, and making things together."

We already have the intro episode out! It's fairly intro-episode'y... meet the hosts, hear about what to expect from the show, etc etc... but we do talk a bit about some background of the name!

But more substantial episodes will be out soon. We have a lot of plans and ideas for the show, and I've got a pretty good setup for editing/publishing now. So if that sounds fun, subscribe, and more stuff should be hitting your ears soon!

(PS: we have a nice little community growing in #fossandcrafts on irc.freenode.net if you're into that kind of thing!)


@_Nat Zone

[2020-07-16] Appearing at the Citi Client Advisory Board and Treasury Leadership Forum, Day 3


Through a fortunate connection, I will be appearing at Citi's event, the “Citi Client Advisory Board and Treasury Leadership Forum,” on July 16 (late at night Japan time).

The event starts at 10:00 a.m. US Eastern Time (11:00 p.m. Japan time), and my slot is the panel discussion starting at midnight Japan time. The title is “Digital Identity: foundational or elective?”.

Digital Identity: foundational or elective?

Date and time: July 17, 2020, 0:00 – 1:00
Venue: virtual

Tony McLaughlin, Global Head, Emerging Payments and Business Development, Citi (moderator)
Bianca Lopes, Co-Founder, Talle
Greg Wolfond, Founder, SecureKey
Nat Sakimura, Chairman, OpenID Foundation

Tony McLaughlin
Global Head, Emerging Payments and Business Development

Tony McLaughlin is the Managing Director responsible for Emerging Payments and Business Development in Citi’s Treasury and Trade Solutions (TTS) business. Tony is responsible for the TTS Ecommerce proposition and is deeply involved in new methods of payment, Distributed Ledger and Fintech engagements. He joined Citi in 2004 and has been Core Cash Head for Asia Pacific based in Hong Kong and the Global Transaction Services Head for the United Kingdom, spearheading Citi’s engagement with large public sector clients and payment aggregators. Tony was responsible for the design and development of ABN AMRO’s Third Party Continuous Linked Settlement (CLS) offering, electronic banking platform and transactional FX solution. At HSBC Holdings, he fulfilled a global strategy role for the Payments and Cash Management business. Before that he was a Senior Product Manager for Barclays Bank with responsibility for electronic collections products including International Direct Debits.

Bianca Lopes

Bianca Lopes
Co-Founder, Talle

Bianca is an economist, mathematician and serial entrepreneur. Naturally curious and fascinated by humans, she has been a banker, has been a CIO of a biometrics company, and has worked with over 40 financial institutions and eight governments to help reshape their human approach to technology by rethinking identity. Bianca is an experienced speaker in data, identity and fintech, with a passion for digital and financial literacy.

Bianca co-founded Talle, a global collective of futurists, strategists, technologists and creatives who take a multidisciplinary approach to helping organizations navigate the complexities of an increasingly digital world. Talle enables businesses to envision the future, create strategy and execute against tangible KPIs, transforming business and messaging by connecting trust and true empathy to make people think, expand perspectives and connect ecosystems.

Greg Wolfond

Greg Wolfond
Founder, SecureKey

Greg is the founder of SecureKey and brings more than 30 years of experience in fintech, security and mobile solutions to his role as Chief Executive Officer. Greg is a serial entrepreneur whose earlier ventures include Footprint Software Inc., a financial software company he sold to IBM, and 724 Solutions Inc., a wireless infrastructure software provider he took public. He sits on several boards and has been recognized as one of Canada’s Top 40 Under 40, Entrepreneur of the Year and one of the 100 Top Leaders in Identity. Greg holds a Bachelor of Arts in Computer Science from the University of Western Ontario and a Bachelor of Science in Biochemistry and Life Sciences from the University of Toronto.

Nat Sakimura

Nat Sakimura
Chairman, OpenID Foundation

Nat Sakimura is a well-known identity and privacy standardization architect at NAT Consulting and the Chairman of the Board of the OpenID Foundation and MyData Japan. Besides being an author/editor of such widely used standards as OpenID Connect, JWT (RFC7519), JWS (RFC7515), OAuth PKCE (RFC7636), ISO/IEC 29100 Amd.2, and ISO/IEC 29184, he helps communities to organize themselves to realize the ideas around identity and privacy.

As the chairman of the board of the OpenID Foundation, he streamlined the process, bolstered the IPR management, and greatly expanded the breadth of the foundation spanning over 10 working groups whose members include large internet services, mobile operators, financial institutions, governments, etc.

He is also active in public policy space. He has been serving in numerous committees in the Japanese government and also advising OECD’s Working Party on Data Governance and Privacy in Digital Economy as a member of the Internet Technical Advisory Committee (OECD/ITAC).

He is currently the head of delegates of the Japanese National Body to ISO/PC 317 Consumer Protection: Privacy by design for consumer goods and ISO/IEC JTC 1/SC 27/WG 5 that standardizes Identity management and privacy technologies and is a founding board member of Kantara Initiative.

Personally, he was a flautist and still deeply loves (both western and Japanese) ‘classical’ music, especially the 20th century and later. (Well, is that ‘classical’?) He spent six years in Kenya while he was in junior and senior high school, where he learnt to ride horses to go after giraffe, and still loves the life there.


Sunday, 12. July 2020

Just a Theory

BLM NYC

I biked down to Midtown to see the new #BlackLivesMatter street art in front of Trump Tower.

#BlackLivesMatter #NewYorkCity

I took advantage of the new City Bike dock in front of our flat to ride 80 blocks south through Central Park today to see the new #BlackLivesMatter mural outside Trump Tower. Not too many people around, perhaps a dozen taking photos and selfies. I joined them, then took a photo for a couple and their toddler in front of the mural. I couldn’t stop grinning under my mask. The mural may be symbolic, but holy cats do symbols matter.

More photos on Instagram.

Also, what a great ride this was. Thinking about quitting my job just to wander around New York and take it all in. Love this city.

More about… Justice Black Lives Matter New York City Trump Tower

Saturday, 11. July 2020

Just a Theory

Jia Tolentino on… Gestures Vaguely

A terrific interview with a fabulous writer.

Kottke’s right, this interview with Jia Tolentino in Interview is soo great. In a few cogent, direct, clear paragraphs, she covers her views on quarantine, Covid, reading, pregnancy, capitalism, Black Lives Matter, policy changes, protest, climate change, and more. An example, on white discomfort:

I’m also suspicious of the way that Not Being Racist is a project that people seem to be approaching like boot camp. To deepen your understanding of race, of this country, should make you feel like the world is opening up, like you’re dissolving into the immensity of history and the present rather than being more uncomfortably visible to yourself. Reading more Black writers isn’t like taking medicine. People ought to seek out the genuine pleasure of decentering themselves, and read fiction and history alongside these popular anti-racist manuals, and not feel like they need to calibrate their precise degree of guilt and goodness all the time.

The whole thing’s a gem, don’t miss it.

(Via Kottke)

More about… Interviews Jia Tolentino

Tuesday, 07. July 2020

Mike Jones: self-issued

Identiverse 2020 Talk: Enabling Scalable Multi-lateral Federations with OpenID Connect


My Identiverse 2020 talk Enabling Scalable Multi-lateral Federations with OpenID Connect was just broadcast and is available for viewing. The talk abstract is:

Numerous large-scale multi-lateral identity federations are in production use today, primarily in the Research and Education sector. These include national federations, such as SWAMID in Sweden and InCommon in the US, some with thousands of sites, and inter-federations among dozens of federations, such as eduGAIN. Yet these existing federations are based on SAML 2 and require the federation operator to poll the participants for their metadata, concatenating it into a huge file that is distributed to all federation participants nightly – a brittle process with significant scalability problems.

Responding to demand from the Research and Education community to migrate from SAML 2 to the simpler OpenID Connect protocol, the OpenID Connect working group has created the OpenID Connect Federation specification to enable this. The new approach incorporates lessons learned from existing SAML 2 federations – especially using a new, scalable approach to federation metadata, in which organizations host their own signed metadata and federation operators in turn sign statements about the organizations that are participants in the federation. As Shibboleth author Scott Cantor publicly said at a federation conference, “Given all my experience, if I were to redo the metadata handling today, I would do it along the lines in the OpenID Connect Federation specification”.

This presentation will describe progress implementing and deploying OpenID Connect Federation, upcoming interop events and results, and next steps to complete the specification and foster production deployments. The resulting feedback from Identiverse participants on the approach will be highly valuable.

As a late-breaking addition, data from the June 2020 Federation interop event organized by Roland Hedberg was included in the presentation.

You can also view the presentation slides as PowerPoint or PDF.
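The metadata approach described in the abstract, in which each organization signs its own metadata and the federation operator signs a statement about the organization, can be sketched as a short chain of signed JWTs. This is a simplified illustration only, not the actual OpenID Connect Federation entity-statement format; the URLs and claim names are made up:

```python
# Simplified illustration of chained signed statements; NOT the real
# OpenID Connect Federation entity-statement format. URLs/claims are made up.
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

org_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
fed_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Each organization hosts and signs its own metadata.
org_statement = jwt.encode(
    {"sub": "https://op.example.edu", "metadata": {"issuer": "https://op.example.edu"}},
    org_key, algorithm="RS256")

# The federation operator signs a statement about the organization,
# binding the org's public key to its federation membership.
org_pub_pem = org_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo).decode()
operator_statement = jwt.encode(
    {"sub": "https://op.example.edu", "org_public_key_pem": org_pub_pem},
    fed_key, algorithm="RS256")

# A relying party that trusts only the federation operator's key can now
# validate the org's self-hosted metadata, with no nightly aggregate file.
about_org = jwt.decode(operator_statement, fed_key.public_key(), algorithms=["RS256"])
metadata = jwt.decode(org_statement, about_org["org_public_key_pem"], algorithms=["RS256"])
print(metadata["metadata"])
```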

Sunday, 05. July 2020

Moxy Tongue

Serve Versus Solve

Do you serve a problem or do you solve a problem?

Serving problems is lucrative work.

Solving problems is valuable work.

Extracting the value from your service of problems is done in completely different ways than from solving problems. People who solve problems are the innovators, the entrepreneurs, the social-benefactors that move our species forward. People who serve problems benefit from the existence of problems to serve.

This is a profoundly important distinction to understand in the constitution of any Society. People do the work they are enticed to do. Work is valued in any number of ways, and markets form systems based on the repeatability of a task, or behavior, or contract model.

Contracts define the repeatability of actions. Solving problems ends the service loop provided by those people that benefit from the existence of problems to serve. All around the industrialized world, people are serving problems. Little problems that are solved with button pushes, and big problems that require emotional investment.. both are being solved by people. Solving those problems in a different manner than they are being served creates a new problem:

What will people who serve problems do if someone solves the problem that they serve?

The extinction of problems is a HUGE problem for people that are dependent on the existence of problems to serve for their own well-being. Do you see the loop? Do you sense the problem?

Meta-thinkers that can escape this loop and apply their attention to the solving of problems are the greatest agents of change possible in the Human species. All over the world, the fear of the future is taking over immature psychologies. People who fear their own ability to take care of their needs, to have enough to get by, or to prosper with adequate health, wealth and wisdom through life's required challenges, are falling victim to a very predictable mental deficiency.

The unpredictability of life is not unique, it is the status quo. Change is constant, and people are here to solve the problems that life is composed of. Serving problems is an industrial era monetization of the permanence of problems. Government think tanks of eras gone by persist this logic pattern and operational mandate in order to extract rent from the existence of problems they, and their cronies, service.

Service models that help problems persist exist definitively in 2020 as political platforms. Creating problems from the raw material of modern life, recycling problems that are rehashed from dead history, joining problems in endless propaganda cycles that induce fear, compliance, and predictable work-patterns is how problem-servers maintain their authority in Society.

People are manipulated into believing, into accepting the idea that the servants of persistent problems, and problem-cycles that never end, are here to help them. People buy into these ideas in good faith, because they think that these "good people" serving their problems are their friends helping them overcome, when in fact they are the barnacles riding shotgun on the energy of the problem that is being served for the benefit of the party being directly affected by the problem, as well as the party earning a paycheck via their service of the problem.

Never-ending cycles of oppression are the result when the servants of problems show up on the scene. In the name of social justice, of workers’ rights, of child welfare, of education, of health care, of drug enforcement, and many other problems, servants of problems loot the world of value in a never-ending cycle of servitude. Problems persist indefinitely, incremental improvements ebb and flow, and people believe in servants that become parties with platforms.. servants of problems.. VOTE DEMOCRAT!

Problem solvers are different.

Problem solvers are disruptors.

Problem solvers end service cycles, and give new capabilities to Individual people. ALWAYS to Individual people, empowerment solutions are delivered directly to the experience where people have problems. This is the critical distinction, as Individuals we begin, as Individuals we remain.. for our entire life, one person will travel with you for the duration. One.

One among many other ones, this is our species Humanity.

Solving human problems is the highest order work there is. People solving problems is as close as we will ever come to the purpose of human life. People helping people solve problems, everyday.. constantly, new problems is the raw material of enlightenment, and equality for enlightened people.

There is no other condition that intercedes as more important than being human, and solving problems.

Mob rulers need mob thinkers, to enforce their service model to problems. People will never solve a problem so long as they are held captive to the servants of problems. When you vote, vote not for the servants of Society's problems, but for the people that solve problems.

Vote for the actual, functional leader. Vote for your operational existence, vote for the solution to real problems, and look for the distinction between problems that can be solved, and problems that exist so that they can be serviced.

This will help you immensely in your life, and it will solve a major problem for all of Humanity.




Saturday, 04. July 2020

@_Nat Zone

A closer look at “Maximum-severity vulnerability in SAML authentication in Palo Alto Networks products; US Cyber Command also issues warning”


On July 1, the following article came across my feed.

From that article alone, though, you cannot tell what the vulnerability actually is, and the relevant page on the Palo Alto Networks site is no more informative. I have been too busy lately to dig into it, but since I had decided not to work this Saturday, I looked into it a little, which really just means consulting other sources.

Of those, the write-up I found most detailed and useful is Tenable's.

Preconditions for a successful attack

According to it, two preconditions must hold for this attack to succeed:

1. SAML authentication is in use.
2. The device is configured not to validate the IdP's certificate.

(Figure 1: the SAML Identity Provider Server Profile configuration screen. Source: Tenable)

You might wonder, “Does anyone really skip validating the IdP's certificate?” But this is actually quite common. It is what you do when the IdP uses a self-signed certificate; in effect, you are treating it as a raw key. According to the article above, the following products actually recommend this setting:

Okta
SecureAuth
SafeNet Trusted Access
Duo
Trusona via Azure AD
Azure AD
Centrify

All the major players are represented. And, from a security standpoint, this is perfectly fine.

So what is going on?

Now for the security hole itself, which you have been waiting for. The setting in Figure 1 is only supposed to skip checking the certificate chain and revocation status of the certificate the IdP uses for signing, right? In reality, however, it appears the product was also skipping verification of the IdP's signature over the SAML Assertion itself. It is not clear whether signature verification was skipped for every SAML Assertion or only under certain conditions, but when those conditions are met it is apparently possible to take over even administrator privileges. Hence the maximum-severity rating: a state in which you think you are protected by your VPN but are not protected at all.

When I first read the ITmedia article, it said “flawed signature verification,” so I assumed it was yet another signature bypass rooted in XML canonicalization in XML Signature, and tweeted:

XMLDSig signature verification was too much for humanity. > Maximum-severity vulnerability in SAML authentication in Palo Alto Networks products; US Cyber Command also issues warning https://t.co/1usvMmpwOX

— Nat Sakimura (=nat) (@_nat) July 2, 2020

But apparently that was not it; this turned out instead to be a “PKI was too much for humanity” problem.

Mitigation

There are two mitigations:

1. Apply the patch released by Palo Alto Networks.
2. If you cannot apply the patch right away, replace the IdP's self-signed certificate with one issued by a CA and enable validation of the IdP's certificate.

No actual attacks appear to have been observed yet, but please note that countermeasures need to be taken as quickly as possible.
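Whatever the product does internally, the essential check is that the SP verifies the IdP's XML signature over the assertion against a trusted (for example, pinned) certificate. A minimal sketch with the Python signxml library follows; the file names are placeholders, and this is not Palo Alto's implementation:

```python
# Minimal sketch: verify the IdP's XML signature on a SAML assertion
# against a pinned IdP certificate. Assumes the signxml and lxml libraries;
# file names are hypothetical.
from lxml import etree
from signxml import XMLVerifier

with open("idp-signing-cert.pem") as f:
    idp_cert_pem = f.read()

assertion_xml = etree.parse("saml-response.xml").getroot()

# Raises an exception if the signature is missing, invalid, or not made with
# the pinned certificate -- the check the vulnerable configuration skipped.
verified = XMLVerifier().verify(assertion_xml, x509_cert=idp_cert_pem)
print("Signature OK; verified element:", verified.signed_xml.tag)
```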


Thursday, 02. July 2020

Moxy Tongue

DevCare

In 2008, my new family had a 3 year old child, a young international toy design and distribution company, and a newly emerging indie educational technology non-profit to take care of and grow with all the daily intensities you can imagine. Money? Yeah, we were working our butts off to survive and prosper with ideas and implementations that had very little support to fall back on if something went wrong. Health care? Yeah, we paid for it ourselves.

Then came ObamaCare. You might have heard about it. I watched the news, wondering if our Government would do something right for a change. Then reality started to sink in. The "Individual Mandate" was a perversion of the system I knew, and was an ugly administrative overreach that looked desperate in its attempt to force participation. Every part of that program was devised to support the non-working class, or the employed-class of people. As an entrepreneur funding my own survival, and any hope of quality of care or prosperity, I was not a consideration. Months later, after its passage, this reality became solidified. My family lost the health care it was paying for independently.

I was a casualty of progress, right? A method of socialized health care was being promoted as a savior for the people of my country, when in fact, it was being experienced in just the opposite way by an actual American family. It took me many years to solve this problem, but eventually I did so. It is called DevCare... and I provide it to 9 people who work for me plus their families. I spend $60,000+ per year providing health care to the people I employ. It works, and it works really well.

ObamaCare? Yeah.. that was pure socialist BS. That was never a solution to a problem, it was a redesign of dependency, forced upon Americans by a President and Congress hell-bent on changing America from a place where an "American Dream" of self-possession, private ownership, free-enterprise and hard work paid off. Instead, we all got equalized.. as Medicare dependents.

So along the way, after years doing private toy development, and licensing under brands as well known as "Imaginarium" from Toys R Us, we folded shop. It was time to move on. Doing business in China was not going to go well for anyone anymore. The writing was on the wall. "Move along"....

My kid, in the meantime, had entered the public school system, specifically because I wanted him to have an experience of learning with the public in my country. We lived in an extremely rural location, and 60%+ of the students attending our school received federally subsidized lunches. So in order to make sure the school produced at the level I hoped, my wife and I volunteered. She became the President of the PTA, and I donated my time and resources to introduce kids to robotics, programming, and art/math/logic that can move while you learn about it. Along the way, some local-yocal parents complained that our activities were going to steal jobs from the local contractor-economy, and should be shut-down. I shit you not. The school Principal was smarter than that, and we continued volunteering.. at least until 2nd grade.

That was the end of my public school experience. In 2nd grade, we exceeded the capacity of the public school system in the United States of America. There was no other choice... move, pay for private school that "might" be better, or do it yourself... home school... or leave the religious stuff out and just "indie school". So we did so... we built a company that supports multiple families while educating our own child indie school style.

And we moved... to New York.. to Long Island, where accessing independent learning was much easier than in Virginia. Where economics was something that had velocity, and all sectors were present in everyday life... not separated by scores of miles, stored on Dulles Toll Road, or in communities trying to overcome their racist past like Charlottesville.

And along the way, we had an experience that was completely unexpected; others saw the world and the problems being created at the hands of socialist Governors as we did. Individual mandates were seen as communist over-reaches, regardless of media propaganda controlling the masses of dependents. And being an entrepreneur, building and teaching, and leading the world towards "functional literacy" and "functional identity" by sourcing everyone with the ideas behind self-Sovereign identity, starting in President Obama's own $10million effort called NSTC, started picking up steam.

But socialists do one thing well... they clone value so they can deplete it. They loot, to be more precise. And by becoming visible, and sharing problem:solution outcomes, I was exposing myself to a ravenous community of socialist looters, both in Washington DC and Silicon Valley. No one survives that onslaught, right? With a social graph documenting all the relationships and hidden beliefs of Americans, and Government making dependency the law of the land, producing functional literacy on the part of American's was becoming increasingly challenging.

"If you are not a socialist in your 20's, you have no heart. If you are not a capitalist by your 30's, you have no brain".. not sure that is an actual quote anybody ever said, but it makes sense in observable reality.

I was raised by a Teacher, a social worker who believed she could solve the worlds problems by caring enough, even if practicable financial literacy was a mystery concept. I come from a family of educators, economists, a learned people.. and I respect all their ideas and practices to help the world be a better place, and people be better citizens.

But I am not one to swallow conformity, and I am not one to wait for my dependencies to be served by a Governing system that can not accurately define them, let along respect the life, liberty or pursuit of happiness that Individuals create in my country.

So with all the "Hope" in the world for "Change I could Believe in" I decided to go a different route. I decided to solve my own problems. To stand up my own Rights. To demonstrate what a functional American is, and to mentor others to overcome the false education system based on "Credentials of Exclusion" that exists to provide employment first, and education next. Jobs, Jobs Jobs!

Sure, a "Union of Workers" seems like a decent concept within a "More Perfect Union".. but is it? The USSR (Russia) was formed for just that purpose...  a "Union of Workers". More importantly, how is a "More Perfect Union" made possible by sub-Unions, or operational cults in most cases, that demand loyalty from their participants and black-ball any and all non-believers who step out of line?

America in 2020 has a thinking problem to overcome. It has induced an operational problem wherein data slaves functioning on a data plantation believe they are "FREE", when in fact they are anything but. Labor is owned by the State, Identity is provisioned by the State, and compliance is not optional.

Is that even Constitutionally accurate? No, but then again, the Constitution does not compute in 2020. How can it ever hope to be accurate in rendering the operational format of participation for citizens when it can not even be designed effectively at root participation for American babies with any sense of accuracy? American babies with a Social Security number, after all, are the most at risk population for identity theft that there is. Sound like a good system?

Depends who you ask.

You see, the Government loves stability. Nothing propels a war machine like the stable revenue derived from employees. W2/W4 employees after all, pay the Government first, before they ever receive any money in their pay checks, having it auto-debited from their salaries. Of course, owners do it completely different, having a system designed to support them because well.. "Ownership Supremacy" on planet Earth is easily experienced. Owners pay the Government last, after all expenses have been deducted against revenue.

A Society of Owners is the only structural equality that will ever be possible. In 2020, the concept of "White Supremacy" dominates the perception of operationally illiterate people. In our country, a socio-economic war was fought, and owners were the winners on both sides of the transition from enslavement to employment, because structure yields results. It was never about black/white... it has always been about the ownership contract. Native Americans did not experience genocide, they experienced the incremental defeat of living on a planet that you do not own. Owners own the world that everyone else is living on.

Equality begins when you own yourself, and you own root participation in civil Society. That begins at identity formation. American Sovereignty was demonstrated by the founding fathers. They had no permission to dissolve their Sovereign allegiance to the King of England, and they did not have any permission to "Declare Independence" as a Sovereign Nation. But these people understood something far more basic and true. Human authority on planet Earth, and in this Universe, is self-Sovereign, or it does not exist. You may need to fight for your Human Rights, but they will never come from anyone but yourself.

No Institution, no man-made organization will ever give Humanity the direct experience that the Creator of existence has given to Humanity. Self-Sovereign Human Authority is the only authority that exists in Civil Society, or your Society is not civil.

Look around 2020 people... your Society is not civil. Fixing that starts with identity, for owners.

Own Root. It is not optional.








Wednesday, 01. July 2020

Justin Richer

XYZ: Cryptographic Binding

This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Interaction, and Compatibility.

OAuth 2 loves its bearer tokens. They’re a really useful construct because they are simple: if you have the token, you can do whatever the token is good for. It’s all the proof that you need: you can present it in exactly the format that you received it in, no cryptography involved.

This simplicity makes it easy for client developers to get it right, most of the time. You give them a magic value, they put the magic value into the right header, and suddenly the API works and they can stop thinking about this security thing.

Sender Constraints

The downside to bearer tokens, and it is a big downside, is that anyone who has a copy of the token can use it. TLS protects the token in transit over a direct hop, but it doesn’t fix the fact that the people who are allowed to see the token aren’t the same as the people who are allowed to use the token. This means that if a client sends its token to a rogue resource server, or even the wrong resource server, then that resource server can replay the token somewhere else.

To address this in the OAuth 2 world, there is active work to define how to do sender constrained access tokens. These tokens would move beyond a bearer token by requiring the client to present some type of verifiable keying material, like a mutual TLS certificate or an ephemeral key generated by the client. There was previously work on using TLS token binding as well, but that has sadly gone by the wayside as TLS token binding failed to take off. These approaches work pretty well, but there is an overwhelming prevalence of OAuth 2 code that looks for the Bearer keyword for the token type and breaks if you show it anything else. Because of this, the MTLS spec requires that you continue to use the Bearer token type, and there has even been pushback for DPoP to use Bearer as an option as well.

And on top of all of this, the token presentation is tangled up with the OAuth2 client authentication, which could be based on a secret, a key, or nothing at all (in the case of public clients).

With XYZ, we don’t have this legacy of bearer tokens and client authentication, and we were able to build a system that was more consistent from the start.

Presenting Keys

In XYZ, key presentation is at the core of all client interactions. When a client calls the AS, it identifies itself by its key. The key formats in XYZ are flexible. A client can present its key as a JWK, an X509 certificate, or potentially any number of other formats through extensions. The key can be RSA, or elliptic curve, or potentially some other exotic form. The protocol doesn’t really care, so long as there’s a method to validate it.

This key can be passed by value or by reference, and the AS can even decide to assign a reference dynamically, but the important thing is that it’s the key that identifies the software that’s making the call. This alone is an important shift from OAuth 2, because specifications like MTLS for OAuth and DPoP have shown us the value in allowing a client to bind an ephemeral key to its requests. In XYZ, the default is that all client keys are ephemeral, unless the AS has some additional set of metadata and a policy attached to the key.
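As a rough sketch, the client's initial call to the AS might present its ephemeral key by value as a JWK, along the lines of the following. The field names here are illustrative and should not be read as XYZ's exact wire format:

```python
# Illustrative shape only; not XYZ's exact JSON wire format.
import json

transaction_request = {
    "resources": [{"actions": ["read"], "locations": ["https://api.example/photos"]}],
    "keys": {
        "proof": "httpsig",        # how the client will prove possession of the key
        "jwk": {                   # the key itself, passed by value
            "kty": "EC", "crv": "P-256",
            "x": "...", "y": "...",  # public coordinates elided
        },
    },
}
print(json.dumps(transaction_request, indent=2))
```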

Whatever key the client uses for its requests to the AS, it’s reasonable that the client would be able to use that same key for requests to the RS. But in this case, the client is also going to be presenting the access token it was issued.

But unlike a bearer token, it’s not enough to present the token alone, or the key, or its identifier. The client also has to present proof that it currently holds that key to the server. It does this by performing a cryptographic operation on the request that can, in some fashion, be verified by the server and associated with the key. But how?

Agility of Presentation

Herein lies the real trick: a new delegation protocol is going to have to be flexible and agile in how the client is allowed to prove its keys. Just about every deployment is going to have its own considerations and constraints affecting everything from how keys are generated to how proofs can be validated across the layers.

In XYZ, we’ve made the key presentation modular. Instead of defining a required cryptographic container, XYZ declares that the client has to present the key in a manner bound to its request, in some fashion, and declare that proofing method. The two categories implemented to date are MTLS and HTTP-based message signatures.

For MTLS, it’s pretty straightforward. The client needs to use its client certificate during the TLS connection to the server, and the server needs to verify that the certificate in use is the same one referenced in the request — either the request body for the AS or the access token for the RS. The server does not need to do a full chain validation of the client’s certificate, though it’s free to do so in order to limit which certificates are presented.

For HTTP message signing it’s a similar outlay of effort, but the signature is presented at the HTTP layer and has to directly cover some or all of the HTTP request. There are a number of different specifications out there for doing this, and our test XYZ implementations have built out support for several of them including Cavage signatures, OAuth DPoP, and even a simple JWS-based detached body signature invented for the protocol. The most promising, to my mind, is the new signatures specification being worked on in the HTTP working group of the IETF. It’s based on long experience with several community specifications, and I think it’s really got a chance to be a unifying security layer across a lot of different applications. Eventually, we’re likely to see library and tooling support like we have with TLS.
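To make the general shape concrete, here is a minimal sketch of a client signing a request with an ephemeral key, loosely in the style of the community HTTP-signature drafts mentioned above. The header names and signing-string format are illustrative, not a compliant implementation of any one draft:

```python
# Minimal, illustrative HTTP message signature over a few request parts.
# Loosely modeled on the community HTTP-signature drafts; not spec-compliant.
import base64, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

client_key = Ed25519PrivateKey.generate()

method, path, host = "POST", "/transaction", "as.example"
body = b'{"resources": [{"actions": ["read"]}]}'
digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()

# Build a signing string covering the parts of the request we want bound to the key.
signing_string = "\n".join([
    f"(request-target): {method.lower()} {path}",
    f"host: {host}",
    f"digest: {digest}",
])
signature = base64.b64encode(client_key.sign(signing_string.encode())).decode()

headers = {
    "Host": host,
    "Digest": digest,
    "Signature": f'keyId="client-key-1",algorithm="ed25519",'
                 f'headers="(request-target) host digest",signature="{signature}"',
}
# The server recomputes the signing string from the request it received and
# verifies the signature with the client's registered or presented public key.
```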

The use cases for these are different. MTLS tends to work great for webserver-based applications, especially in a closed enterprise setup, but it falls apart for SPA’s and ephemeral apps. HTTP message signing is more complex to implement, but it can survive across TLS termination and multiple hops. There is no one answer, and there are likely other approaches that will be invented down the road that work even better.

Signatures as a Core Function

OAuth 1 had its own bespoke signing mechanism, which confused a lot of developers. OAuth 2 set out to avoid the problems that this caused by removing signatures entirely, but in so doing has pushed the needle too far away from good security practices and made it hard to add such functionality back in. With XYZ we tried to strike the balance by allowing different mechanisms but assuming that signing, of some fashion, was going to be available to every client. With today’s library and software support, this seems to be true across many platforms, and time will tell which methods work the best in the real world.


DustyCloud Brainstorms

Some updates: CapTP in progress, Datashards, chiptune experiments, etc


(Originally written as a post for Patreon donors.)

Hello... just figured I'd give a fairly brief update. Since I wrote my last post I've been working hard towards the distributed programming stuff in Goblins.

In general, this involves implementing a protocol called CapTP, which is fairly obscure... the idea is generally to apply the same "object capability security" concept that Goblins already follows but on a networked protocol level. Probably the most prominent other implementation of CapTP right now is being done by the Agoric folks, captp.js. I've been in communication with them... could we achieve interoperability between our implementations? It could be cool, but it's too early to tell. Anyway it's one of those technical areas that's so obscure that I decided to document my progress on the cap-talk mailing list, but that's becoming the length of a small novel... so I guess, beware before you try to read that whole thing. I'm far enough along where the main things work, but not quite everything (CapTP supports such wild things as distributed garbage collection...!!!!)

Anyway, in general I don't think that people get too excited by hearing "backend progress is happening"; I believe that implementing CapTP is even more important than standardizing ActivityPub was in the long run of my life work, but I also am well aware that in general people (including myself!) understand best by seeing an interesting demonstration. So, I do plan another networked demo, akin to the time-travel Terminal Phase demo, but I'm not sure just how fancy it will be (yet). I think I'll have more things to show on that front in 1-2 months.

(Speaking of Goblins and games, I'm putting together a little library called Game Goblin to make making games on top of Goblins a bit easier; it isn't quite ready yet but thought I'd mention it. It's currently going through some "user testing".)

More work is happening on the Datashards front; Serge Wroclawski (project leader for Datashards; I guess you could say I'm "technical engineer") and I have started assembling more documentation and have put together some proto-standards documents. (Warning: WIP WIP WIP!!!) We are exploring with a standards group whether or not Datashards would be a good fit there, but it's too early to talk about that since the standards group is still figuring it out themselves. Anyway, it's taken up a good chunk of time so I figured it was worth mentioning.

So, more to come, and hopefully demos not too far ahead.

But let's end on a fun note. In-between all that (and various things at home, of course), I have taken a bit of what might resemble "downtime" and I'm learning how to make ~chiptunes / "tracker music" with Milkytracker, which is just a lovely piece of software. (I've also been learning more about sound theory and have been figuring out how to compose some of my own samples/"instruments" from code.) Let me be clear, I'm not very good at it, but it's fun to learn a new thing. Here's a dollhouse piano thing (XM file), the start of a haunted video game level (XM file), a sound experiment representing someone interacting with a computer (XM file), and the mandatory demonstration that I've figured out how to do C64-like phase modulation and arpeggios (XM file). Is any of that stuff... "good"? Not really, all pretty amateurish, but maybe in a few months of off-hour experiments it won't be... so maybe some of my future demos / games won't be quite as quiet! ;)

Hope everyone's doing ok out there...

Tuesday, 30. June 2020

Identity Woman

API Days Links

Decentralized Identity Foundation http://www.identity.foundation W3C CCG https://www.w3.org/community/credentials/ Secure Data Store Working Group https://identity.foundation/working-groups/secure-data-storage.html MyData Operators https://mydata.org/wp-content/uploads/sites/5/2020/04/Understanding-Mydata-Operators-pages.pdf OpenID SOIP https://medium.com/decentralized-identity/using-openid-connec

Decentralized Identity Foundation http://www.identity.foundation
W3C CCG https://www.w3.org/community/credentials/
Secure Data Store Working Group https://identity.foundation/working-groups/secure-data-storage.html
MyData Operators https://mydata.org/wp-content/uploads/sites/5/2020/04/Understanding-Mydata-Operators-pages.pdf
OpenID SIOP https://medium.com/decentralized-identity/using-openid-connect-with-decentralized-identifiers-24733f6fa636
DID:Key https://w3c-ccg.github.io/did-method-key/
DID:Peer https://openssi.github.io/peer-did-method-spec/index.html
Aries Agent https://www.hyperledger.org/use/aries
Trinsic https://docs.trinsic.id
Trinsic API https://app.swaggerhub.com/apis-docs/Streetcred/agency/v1
ArcBlock https://www.arcblock.io/en/platform
SpacemanID https://apollo-documentation.readthedocs.io/en/latest/
Credential Handler API (CHAPI): https://w3c-ccg.github.io/credential-handler-api/
Verifiable Credential HTTP API: https://w3c-ccg.github.io/vc-issuer-http-api/ https://w3c-ccg.github.io/vc-verifier-http-api/
Encrypted Data Vaults: https://identity.foundation/secure-data-store/#data-vault-https-api

The post API Days Links appeared first on Identity Woman.


Dick Hardt

I think the Apple DEA server does that somehow.

I think the Apple DEA server does that somehow. In my opinion, spam has become a non-issue for most users. The concern on email address is more the privacy implications of having a common identifier across systems that can be used to correlate the user.



Mike Jones: self-issued

SecEvent Delivery specs sent to the RFC Editor

I’m pleased to report that the SecEvent delivery specifications are now stable, having been approved by the IESG, and will shortly become RFCs. Specifically, they have now progressed to the RFC Editor queue, meaning that the only remaining step before finalization is editorial due diligence. Thus, implementations can now utilize the draft specifications with confidence […]

I’m pleased to report that the SecEvent delivery specifications are now stable, having been approved by the IESG, and will shortly become RFCs. Specifically, they have now progressed to the RFC Editor queue, meaning that the only remaining step before finalization is editorial due diligence. Thus, implementations can now utilize the draft specifications with confidence that breaking changes will not occur as they are finalized.

The specifications are available at:

https://tools.ietf.org/html/draft-ietf-secevent-http-push-14 https://tools.ietf.org/html/draft-ietf-secevent-http-poll-12

HTML-formatted versions are also available at:

https://self-issued.info/docs/draft-ietf-secevent-http-push-14.html https://self-issued.info/docs/draft-ietf-secevent-http-poll-12.html

Sunday, 28. June 2020

Just a Theory

Harlem Park Steps

A photo from a walk in Jackie Robinson Park, Harlem, New York City.

Steps in Jackie Robinson Park, Harlem

More about… Harlem Jackie Robinson Park Photography Steps

Aaron Parecki

Redesigning my Blog Post Pages

I had a great time in the sessions at IndieWebCamp West yesterday! Today is project day, so I started the morning off listening to some chill tunes with other folks on the Zoom "hallway track" deciding what to work on.


My blog post permalinks have been bothering me for a while; I feel like they are too cluttered. I mostly like the design of my notes and photos permalinks: since the content on those pages is relatively short, I don't mind how much other stuff is on the page, and it's good for discoverability of my other content. However, on my blog posts, people often arrive from some external source, and they're really there just to read the blog post and then leave. So I wanted to clean those pages up a bit.

Here's what my blog post pages looked like before.

This is on a 13" Macbook screen with a browser as wide as possible. Here is what the blog posts look like now.

I wanted to give the blog posts a bit more space, so I increased the size of the middle column. I also took out the top bar with the clock and weather. I removed my little author avatar (it duplicated my name, and the photo wasn't really placed very well.) I also changed the timestamp to show just the date, instead of the full date and time and timezone. For notes and photos the time is possibly relevant, but it's really not important for blog posts.

I also added a custom font, which tbh I'm on the fence about using. I do feel like the slightly wider font goes well with the slightly larger column.

So that's the first of the projects for the day done!

Update: Thanks to a suggestion from Sebastian, I also increased the font size on the blog posts to keep a similar number of characters per line.


Just a Theory

Test Postgres Extensions With GitHub Actions

I finally made the jump from Travis CI to GitHub Actions for my Postgres extensions. Here’s how you can, too.

I first heard about GitHub Actions a couple years ago, but fully embraced them only in the last few weeks. Part of the challenge has been the paucity of simple but realistic examples, and quite a lot of complicated-looking JavaScript-based actions that seem like overkill. But through trial-and-error, I figured out enough to update my Postgres extensions projects to automatically test on multiple versions of Postgres, as well as to bundle and release them on PGXN. The first draft of that effort is pgxn/pgxn-tools [1], a Docker image with scripts to build and run any version of PostgreSQL between 8.4 and 12, install additional dependencies, build, test, bundle, and release an extension.

Here’s how I’ve put it to use in a GitHub workflow for semver, the Semantic Version data type:

 1  name: CI
 2  on: [push, pull_request]
 3  jobs:
 4    test:
 5      strategy:
 6        matrix:
 7          pg: [12, 11, 10, 9.6, 9.5, 9.4, 9.3, 9.2]
 8      name: 🐘 PostgreSQL ${{ matrix.pg }}
 9      runs-on: ubuntu-latest
10      container: pgxn/pgxn-tools
11      steps:
12        - run: pg-start ${{ matrix.pg }}
13        - uses: actions/checkout@v2
14        - run: pg-build-test

The important bits are in the jobs.test object. Under strategy.matrix, which defines the build matrix, the pg array defines each version to be tested. The job will run once for each version, and the version can be referenced via ${{ matrix.pg }} elsewhere in the job. Line 10 runs the job in a pgxn/pgxn-tools container, where the steps run. The steps are:

Line 12: Install and start the specified version of PostgreSQL
Line 13: Clone the semver repository
Line 14: Build and test the extension

The intent here is to cover the vast majority of cases for testing Postgres extensions, where a project uses a PGXS Makefile. The pg-build-test script does just that.

A few notes on the scripts included in pgxn/pgxn-tools:

pg-start installs, initializes, and starts the specified version of Postgres. If you need other dependencies, simply list their Debian package names after the Postgres version.

pgxn is a client for PGXN itself. You can use it to install other dependencies required to test your extension.

pg-build-test simply builds, installs, and tests a PostgreSQL extension or other code in the current directory. Effectively the equivalent of make && make install && make installcheck.

pgxn-bundle validates the PGXN META.json file, reads the distribution name and version, and bundles up the project into a zip file for release to PGXN.

pgxn-release uploads a release zip file to PGXN.
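
Putting the first three tools together, a test step inside the container might look something like this rough sketch; the Debian package names and the PGXN distribution name are purely illustrative.

# Start Postgres 12 and install extra Debian packages the build needs.
pg-start 12 libxml2-dev libxslt1-dev

# Install a dependency distribution from PGXN (name is illustrative).
pgxn install semver

# Build, install, and run the test suite for the extension in the current directory.
pg-build-test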

In short, use the first three utilities to handle dependencies and test your extension, and the last two to release it on PGXN. Simply set GitHub secrets with your PGXN credentials, pass them in environment variables named PGXN_USERNAME and PGXN_PASSWORD, and the script will handle the rest. Here’s how a release job might look:

15    release:
16      name: Release on PGXN
17      # Release on push to main when the test job succeeds.
18      needs: test
19      if: github.ref == 'refs/heads/main' && github.event_name == 'push' && needs.test.result == 'success'
20      runs-on: ubuntu-latest
21      container:
22        image: pgxn/pgxn-tools
23        env:
24          PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }}
25          PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }}
26      steps:
27        - name: Check out the repo
28          uses: actions/checkout@v2
29        - name: Bundle the Release
30          run: pgxn-bundle
31        - name: Release on PGXN
32          run: pgxn-release

Note that lines 18-19 require that the test job defined above pass, and ensure the job runs only on a push event to the main branch, where we push final releases. We set PGXN_USERNAME and PGXN_PASSWORD from the secrets of the same name, and then, in lines 27-32, check out the project, bundle it into a zip file, and release it on PGXN.

There are a few more features of the image, so read the docs for the details. As a first cut at PGXN CI/CD tools, I think it’s fairly robust. Still, as I gain experience and build and release more extensions in the coming year, I expect to work out integration with publishing GitHub releases, and perhaps build and publish relevant actions on the GitHub Marketplace.

[1] Not a great name, I know, will probably change as I learn more. ↩︎

More about… Postgres PGXN GitHub Actions Automation CI/CD

Saturday, 27. June 2020

Voidstar: blog

Jennifer Government by Max Barry

[from: Librarything]


The First Church on the Moon by Jmr Higgs

[from: Librarything]


Wednesday, 24. June 2020

Virtual Democracy

The dance of demand-sharing culture in open science

Working together in time. Open science culture promotes active belonging in the community of science [photo: Nick Ansell CC license in Flickr] “I am not saying science is a community that treats ideas as contributions; I am saying it becomes one to the degree that ideas move as gifts” (Hyde, 2009). “The specificity of [demand] sharing… … Continue reading The dance of demand-sharing culture in open science

Tuesday, 23. June 2020

Bill Wendel's Real Estate Cafe

Money-saving real estate business models — time for a mass movement?


Few people in the real estate industry know this month marks the 16th anniversary of the first convention for fee-for-service real estate agents. For consumers,…

The post Money-saving real estate business models -- time for a mass movement? first appeared on Real Estate Cafe.

Friday, 19. June 2020

Mike Jones: self-issued

Registrations for all WebAuthn algorithm identifiers completed

We wrote the specification COSE and JOSE Registrations for WebAuthn Algorithms to create and register COSE and JOSE algorithm and elliptic curve identifiers for algorithms used by WebAuthn and CTAP2 that didn’t yet exist. I’m happy to report that all these registrations are now complete and the specification has progressed to the RFC Editor. Thanks […]

We wrote the specification COSE and JOSE Registrations for WebAuthn Algorithms to create and register COSE and JOSE algorithm and elliptic curve identifiers for algorithms used by WebAuthn and CTAP2 that didn’t yet exist. I’m happy to report that all these registrations are now complete and the specification has progressed to the RFC Editor. Thanks to the COSE working group for supporting this work.

Search for WebAuthn in the IANA COSE Registry and the IANA JOSE Registry to see the registrations. These are now stable and can be used by applications, both in the WebAuthn/FIDO2 space and for other application areas, including decentralized identity (where the secp256k1 “bitcoin curve” is in widespread use).

The algorithms registered are:

RS256 – RSASSA-PKCS1-v1_5 using SHA-256 – new for COSE
RS384 – RSASSA-PKCS1-v1_5 using SHA-384 – new for COSE
RS512 – RSASSA-PKCS1-v1_5 using SHA-512 – new for COSE
RS1 – RSASSA-PKCS1-v1_5 using SHA-1 – new for COSE
ES256K – ECDSA using secp256k1 curve and SHA-256 – new for COSE and JOSE

The elliptic curves registered are:

secp256k1 – SECG secp256k1 curve – new for COSE and JOSE

Thursday, 18. June 2020

Justin Richer

XYZ: Interaction

This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Compatibility, and Cryptographic Agility. When OAuth 1 was first invented, it primarily sought to solve the problem of one website talking to another website’s API. It could also be used with native applications, but it was awkward at best due to many of the assumptions built into the OAuth 1 mo
This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Compatibility, and Cryptographic Agility.

When OAuth 1 was first invented, it primarily sought to solve the problem of one website talking to another website’s API. It could also be used with native applications, but it was awkward at best due to many of the assumptions built into the OAuth 1 model. To be fair, smartphones didn’t really exist yet, and the handful of clunky desktop apps could more or less be dealt with via a few hacks. In OAuth 2, we at least admitted that mobile applications were now A Thing and created the notion of a public client with no recognizable credentials of its own. But it turned out that wasn’t enough, and we needed additional tools and additional guidance for developers to get mobile applications right. Even later, single-page applications really came into their own, further defying previous assumptions about OAuth clients. Smart devices and the internet of things drove a world of new interaction methods and even new ideas about what logging in means. And all of this is on top of OAuth’s core set of grant types, the differences between which are largely driven by the abilities of different clients.

This kind of flexibility and adaptability is one of OAuth 2’s greatest successes, so what’s the problem? Simply put, OAuth 2 makes a lot of assumptions that the user has access to a web browser which is on the same device as the client. In today’s world of smart clients, rich applications, and constrained devices, these assumptions no longer match reality and there’s a real need for a system with more flexibility. While browsers are certainly still around, we aren’t just connecting two websites to each other anymore, and our security protocols should reflect that.

Photo by Daniel Romero on Unsplash

Getting the User Involved

In OAuth 2, the type of interaction that the client is capable of is defined by the authorization grant that’s in use. This dictates, among other things, how the user gets to the AS and how they get back from the AS to the client. In the authorization code grant, which is at the core of the OAuth 2 protocol design, the user is redirected in both directions through HTTP. The client has to choose exactly one way of dealing with the user before it starts, and this leads to awkward situations like the Device Grant letting the user interactively enter a code at a static URL or having a pre-composed URL that the user just follows. It also means that different grant types need to re-invent parts of the process used in other grant types just so that they can do things a little differently.

But what if we could decompose all of these aspects and get away from a type-based system? In XYZ, instead of choosing a grant type, a client declares all of the different parts of interaction that it supports. The AS then responds appropriately to whatever portions it supports for that request. From there, the client can take action based on whatever portions make sense. After all, the client knows how it can interact with the user. With these tools, XYZ’s interaction model allows for a very rich set of interactions.

In XYZ, we wanted to enable the traditional redirect-based interaction but not depend on it. To do that, the client tells the AS that it can handle redirecting the user to an arbitrary URL, and also signals that it can get the user back on a similar redirect. With this, we’ve got everything we need for an authorization code style interaction.

{
    "interact": {
        "redirect": true,
        "callback": {
            "uri": "https://client.foo",
            "nonce": "VJLO6A4CAYLBXHTR0KRO"
        }
    }
}

OK, but what if we’re now talking about a set-top box that can’t pop open a browser? Here we take a clue from the OAuth 2 device grant and say that we could display an arbitrary code, but at a static URL. We expect the user to go there on a secondary device, and so we don’t have a good way to get the user back at the client itself.

{
    "interact": {
        "user_code": true
    }
}

This is great, but what if we could handle either a redirect or a user code? Let’s say, for instance, that we can hand the user a code they can type in, or we can get them to scan a barcode on a secondary device and be sent to an arbitrary page. To handle that, we simply say we can do both.

{
    "interact": {
        "redirect": true,
        "user_code": true
    }
}

Notice that our client expects the user to complete the request on a separate device, and so we don’t have the callback parameter included. The AS can decide whether that’s good enough for what’s being asked, or not. An important difference between these protocols is that OAuth 2 starts with a redirect, while XYZ allows the redirect only after the initial back-channel request. This means we can mix things together in novel ways. For example, we could say that we expect the user to enter a code at a fixed URL, but we do want the user to come back to a callback webpage so that our client can continue.

{
    "interact": {
        "user_code": true,
        "callback": {
            "uri": "https://client.foo",
            "nonce": "VJLO6A4CAYLBXHTR0KRO"
        }
    }
}

What kind of client would ever need to do this? Maybe we’ve got a smart client on a device that has ties to a push notification service with a web-based component, or there’s some other deployment structure that we haven’t thought up yet. Componentizing how the user gets to the AS separately from how the user gets back allows us to mix, match, and stack capabilities as appropriate for our use case.

This setup means we can start to think about other ways to get the user back, and about securing that return as well. We need to consider capabilities like sending a response to the client application directly through an HTTP/2 Server Push, or a client being notified through some other signaling mechanism.

Moving Off the Web

Even more importantly, we need to consider interactions with the user outside of the web entirely. While XYZ’s back channel is defined completely over HTTP calls, it doesn’t have to use the front channel (redirects) to interact with the user at all. Here is where the capability-based interaction mechanism starts to get really powerful.

What if the client indicates that it can do a webauthn style signature? The server can offer its challenge to be signed in the initial response, and the client can sign it (by prompting the user to activate their key) and return the results in the follow-up request. If it’s a key the AS already knows, and there isn’t any consent to be gathered, the user doesn’t need to be redirected anywhere.

Or what if the client has access to a digital agent with ties to a non-web distributed data store, like a blockchain or other graph-based data fabric? The client can indicate that it can send and receive messages to this agent, or field queries through it. The agent and the AS might be part of the same data fabric, and therefore not need to go through the client in order to communicate. Such a system could even gather and manage consent entirely outside of a web-based protocol. All the XYZ client would need to care about is how to kick off the transaction and how to get the results.

Or what if the client wants to prompt the user directly for their credentials (cringe)? It’s generally considered bad security practice, but the UX of a login experience that stays completely native to the application is compelling, even today. By defining an interaction mode specifically for this, the AS can help manage things in a more secure fashion than the OAuth 2 Resource Owner Password Grant. It could allow some kind of clever presentation of the shared secret without exposing it directly, and it could even account for different kinds of MFA.

Or what if the AS can interact with the user through a native application? The client should be able to indicate it can support that without pretending it’s an HTTP redirect.

And if a client could do many different things, then it can set them up in the initial request and let the AS decide what is allowed according to its own policies. We should be able to prompt for a security token and call an on-device agent and redirect out to a webpage if that’s not enough. Since the client doesn’t need to pick a single method ahead of time, it doesn’t even need to discover what is possible in order for it to work. This type of engineering at the protocol level greatly enhances interoperability while simplifying code paths for both client and server.

And finally, what if we don’t need to interact most of the time? OAuth 2 forces the client to try interaction first, but if the AS knows who the client is and who the user is, then it could decide to grant access without ever doing an explicit interaction. We can even do a bootstrapping of complex protocols, such as allowing the client to present an assertion to represent the user but still allowing the AS to interact through a web page (or some other mechanism) if necessary. The user gets involved only when they need to.

Interaction is one of the core extensibility points of XYZ and one of the key places where it breaks from OAuth 2’s past assumptions. The possibilities are nearly endless, and I believe we’re going to see some really exciting advancements in this space.


Bill Wendel's Real Estate Cafe

Transitioning to #UUtopia post-Pandemic


As reflected in the juxtaposition of images above, the extended #StayAtHome has caused many to conclude they #DontFeelAtHome where they live so they’re ready to…

The post Transitioning to #UUtopia post-Pandemic first appeared on Real Estate Cafe.

Tuesday, 16. June 2020

Justin Richer

XYZ: Compatibility With OAuth 2

This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Interaction, and Cryptographic Agility. XYZ is a novel protocol, and one of its goals is to move beyond what OAuth 2 enables in any easy fashion. One of the first and most important decisions I made was to not be backwards compatible with OAuth 2. This is a bold choice: OAuth 2 is absolute
This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Interaction, and Cryptographic Agility.

XYZ is a novel protocol, and one of its goals is to move beyond what OAuth 2 enables in any easy fashion. One of the first and most important decisions I made was to not be backwards compatible with OAuth 2.

This is a bold choice: OAuth 2 is absolutely everywhere, and deliberately breaking with it will put any new protocol at a distinct deployment disadvantage. However, I think that the advantages in the flexibility, especially in the models, are worth it. There are a ton of use cases that don’t fit well into the OAuth 2 world, and even though many of these square pegs have been crammed into the round hole, we can do better than that by taking a step back and designing something new.

Working With the Legacy

For people in these new use cases, this is a great opportunity. But what about everyone out there who already has OAuth 2 systems and wants to also support something new? XYZ doesn’t exist in a vacuum, and so as a solution it needs to take OAuth 2 developers into account in its design even while addressing the new world.

To do that, we’ve taken a page from how some of the most successful backwards compatible efforts in the past have worked. Let’s look at Nintendo’s Wii console. When it was introduced, one of its most compelling features was a radically new control scheme. This had features like wireless connections, IR-based pointers, switchable accessories, motion sensitivity, and even a built-in speaker. It was unlike anything else out there, and it was a particularly radical departure from Nintendo’s previous generation game console, the GameCube. Nintendo wanted to encourage existing GameCube owners to come along to the new system, not just by making the new features compelling in their own right (since without that, why move away from the GameCube?) but also by adding ports to allow GameCube games and hardware to be used with the new system. If you had GameCube stuff, there was a place you could plug it in. It was on the side/top of the console and hidden under a flap, but it was there if you wanted to use it.

The ports are there, just hidden. Image from https://www.wikihow.com/Play-Gamecube-Games-on-Wii

Now if you wanted to play a Wii game, you needed to use the Wii controllers and hardware. And if you wanted to play a GameCube game, you could always just buy a GameCube. But what if you were interested in doing both, but didn’t want to have two systems sitting under your TV? This model of focusing on the new but allowing the old to be plugged in is a powerful one, and it was wildly successful.

Plugging OAuth 2 Into XYZ

XYZ’s core model is pretty different from OAuth 2. XYZ doesn’t have client IDs as part of the protocol, nor does it have scopes. Instead, clients are identified by their keys, and resources have a rich request language to describe them in multiple dimensions. You can even request multiple access tokens simultaneously. All of these are great for use cases like ephemeral and native clients, or rich APIs, but what about something where OAuth 2 already works ok?

Just like with the Wii, we’ve made sure that there’s a place to plug in things. In fact, this is one of the key features that polymorphic JSON brings to the party. Let’s say you’ve got an OAuth 2 client that has been assigned the client ID client1 and it asks for scopes foo and bar at its AS. This is pretty easy for a developer to plug in to the authorization endpoint (ignoring other optional parameters):

client_id=client1&scope=foo%20bar

Since we don’t have explicit client identifiers or scopes in XYZ, what are we supposed to do with these values? To understand that, we first need to take a step back and think about what these items each mean in the context of the overall protocol.

The client_id in OAuth 2 identifies the client, which should be no surprise. But why do we need to identify the client? For one, we want to make sure that when the client authenticates, it does so using the appropriate set of credentials. Since the browser is untrusted for handling shared secrets, we can’t pass those credentials in the front channel where OAuth 2 starts and we need a separate identifier. If we start in the back channel, we don’t need an identifier separate from the credential. But more fundamentally, what does it mean to identify the client at all? The AS is going to want to know which client is asking so that it can apply the appropriate policies, including gathering consent from the user, and these policies drive the rest of the decision process. We can make that same set of decisions without a separate identifier and using the credential directly. XYZ allows us to send the credential by value, but it also allows us to send a reference to that credential instead of the value. In this way, we can use our existing client_id to tell the AS which key we’re using to protect the requests by passing it in as the key handle for the request, in lieu of the key itself. The AS can associate the key, and therefore the policies, by looking up the handle.

The scope of OAuth 2 follows a similar journey. A resource in XYZ is described by a multi-dimensional and potentially complex object, with actions, locations, datatypes, and any number of API-specific components to it. But just like the client_id, we need to ask ourselves what does the scope represent in OAuth 2? Functionally, it’s a shortcut asking for a particular slice of an API, defined by the API. So in other words, it’s a predefined combination of actions, locations, datatypes, and any number of other API-specific components — it is a shorthand to a concept analogous to one of the resource request objects in XYZ. Which means, of course, that we can use a resource handle to represent that predefined object.

What this means is that our client developer has a place in the protocol to plug in the values that they already have. They don’t plug into the same place, and they don’t go through quite the same circuitry, but they’re there, and they work.

{
    "keys": "client1",
    "resources": [
        "foo",
        "bar"
    ]
}

But here’s something really cool about this: once we’re plugging in our values into the new protocol, then we can start to use the new features alongside the old ones. Since we have the new system, we can play both the new games as well as the old games. That means that our client could, for example, ask for an additional detailed resource alongside its tried and true scopes.

{
    "keys": "client1",
    "resources": [
        "foo",
        "bar",
        {
            "actions": ["read", "write"],
            "datatypes": ["accounts", "history"]
        }
    ]
}

All of this is done without putting a burden on new apps to use any of these legacy mechanisms. Ephemeral apps don’t have to pretend to get a client ID, and resource servers don’t have to wrap their APIs into simple scope values. Those functions are there for the cases where they make sense and stay out of the way when they don’t.

Staying With OAuth 2

But what if you don’t want all the new features at all? What if you’re totally fine with OAuth 2 as it is today, or as it will be in a year with all of its extensions? In those cases, just keep using it. No, really. If it’s a good solution, you should keep using it, even if something fancier is on the horizon. OAuth 2 is a good solution for a lot of things, and we’ll continue seeing it for a long time to come.

OAuth 2’s got a lot of great extensions that let it do a lot of different things, including things like PAR and RAR that start to approach a very different protocol that’s closer to XYZ than classical OAuth 2. However, these extensions have to live in the world of OAuth 2, and the fit isn’t always that great. But for many developers, the cost of smushing these new functionalities into their existing system might be less than the cost of switching to a new system, and that’s a fine and reasonable approach.

But for the rest of the world, those who aren’t using OAuth 2 because it’s not a good fit, or who want OAuth 2 but a bit more than what it offers, it’s a good time to move forward in a clear and deliberate way.


Pulasthi Mahawithana

Monitoring WSO2 Identity Server Health With Prometheus/Grafana

Monitoring WSO2 Identity Server Health with Prometheus/Grafana Monitoring the health of your application is something we must have in a production system to make sure that the application is running without any issues. In addition, monitoring may be required in performance testing, capacity planning, etc as well in the pre-production environments to see who well the application perform under diff
Monitoring WSO2 Identity Server Health with Prometheus/Grafana

Monitoring the health of your application is something we must have in a production system to make sure that the application is running without any issues. In addition, monitoring may be required in performance testing, capacity planning, etc., as well as in pre-production environments to see how well the application performs under different conditions.

WSO2 Identity Server is an IAM server that runs on a JVM. There are multiple ways to monitor the health of a JVM application, and they can be used with Identity Server as well. In this post, we’ll see how to do that with Prometheus and Grafana.

Adding the Prometheus JMX exporter

Prometheus is a pull-based monitoring application. It provides different tools and clients to extract the data out of different applications. One such tool is the JMX exporter. Let’s see how we can add this exporter to Identity Server so that it can make the Identity Server JVM metrics available to Prometheus.

First, create a config file to be used by the JMX exporter at <IS_HOME>/bin as is.yml and add the following in that file.

lowercaseOutputLabelNames: true
lowercaseOutputName: true
whitelistObjectNames: ["java.lang:type=OperatingSystem"]
blacklistObjectNames: []
rules:
  - pattern: 'java.lang<type=OperatingSystem><>(committed_virtual_memory|free_physical_memory|free_swap_space|total_physical_memory|total_swap_space)_size:'
    name: os_$1_bytes
    type: GAUGE
    attrNameSnakeCase: true
  - pattern: 'java.lang<type=OperatingSystem><>((?!process_cpu_time)\w+):'
    name: os_$1
    type: GAUGE
    attrNameSnakeCase: true

Then get the latest released version of the Prometheus JMX exporter from here. You may also need to build it to get the binary. Place that binary in <IS_HOME>/bin/.
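
If you would rather not build it yourself, a pre-built agent jar is usually published to Maven Central; the URL below is an assumption based on the project's Maven coordinates and the 0.13.0 version used in this post, so double-check it against the current release.

# Download the pre-built javaagent jar into the Identity Server bin directory.
wget -O <IS_HOME>/bin/jmx_prometheus_javaagent-0.13.0.jar \
  https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.13.0/jmx_prometheus_javaagent-0.13.0.jar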

Now, open the <IS_HOME>/bin/wso2server.sh file (or the .bat file for Windows), scroll to the bottom of the file, and add the following as a parameter,

-javaagent:$CARBON_HOME/bin/jmx_prometheus_javaagent-0.13.0.jar=8880:$CARBON_HOME/bin/is.yml \

Note that I’m using the 0.13.0 version of the agent here, which I downloaded in the previous step. If your version is different, please update the line accordingly.

Also, note that I’m using port 8880 to export the metrics. You may change this port if required.

The <IS_HOME>/bin/wso2server.sh file should look like below with the other parameters

....
-Dhttpclient.hostnameVerifier="DefaultAndLocalhost" \
-Dorg.apache.xml.security.ignoreLineBreaks=false \
-javaagent:$CARBON_HOME/bin/jmx_prometheus_javaagent-0.13.0.jar=8880:$CARBON_HOME/bin/is.yml \
org.wso2.carbon.bootstrap.Bootstrap $*
....

Start the Identity Server.

You can verify the exporter is properly configured by accessing http://localhost:8880/metrics (Use the same port you configured in the .sh file)
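
For a quick command-line check, something like the following should work; the os_ prefix comes from the rules in is.yml above, so if you changed those rules, adjust the grep accordingly.

# Fetch the metrics endpoint and show the operating-system gauges defined by is.yml.
curl -s http://localhost:8880/metrics | grep '^os_'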

Configuring Prometheus to Pull Data

Now, at Prometheus, we need to pull the data from Identity Server. To do that, create a file named is.yml at PROMETHEUS_HOME and add the following configuration there,

global:
  scrape_interval: 15s
  evaluation_interval: 15s

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  - job_name: 'WSO2 Identity Server'
    static_configs:
      - targets: ['localhost:8880']

Note that as targets we gave the hostname and port of the endpoint where Identity Server was exporting the metrics. By default, the context path will be assumed as /metrics. So we won’t need to explicitly configure it here.

Now start the Prometheus server as

./prometheus --config.file=is.yml

The Prometheus server will start at http://localhost:9090/
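
If Prometheus refuses to start or the target does not show up, it can help to validate the file first with promtool, which ships alongside the Prometheus binary.

# Validate the scrape configuration before (re)starting Prometheus.
./promtool check config is.yml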

Go to Status -> Targets from the main menu, and see if the Identity Server is properly configured. You should see something like the below if all is correct.

At Prometheus, we can run different queries and see some basic graphs. But I’ll skip that as we can get a lot better graphs with Grafana.

Configuring Grafana

Download and extract Grafana, and start it by running ./grafana-server from GRAFANA_HOME/bin.

Grafana will start on http://localhost:3000/. Use the default credentials (admin:admin) to log in (and make sure you change them later)

In Grafana, we first need to add Prometheus as a data source. To do that, go to Configuration -> Data Sources from the left navigation as below.

Click add and select ‘Prometheus’ as the type of the data source and provide the connection details as below,
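
If you prefer to script this step instead of using the UI, the same data source can be created through Grafana's HTTP API; this is a rough sketch that assumes the default admin:admin credentials and the Prometheus address used above.

# Create the Prometheus data source via Grafana's HTTP API.
curl -s -u admin:admin -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy"}'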

Now we need to create dashboards from the data we retrieve from Prometheus. Well, we don’t need to create them from scratch. Instead, we can reuse an already available dashboard by importing it. There are a few dashboards already available for JVM metrics. Out of them, I prefer the one at https://grafana.com/grafana/dashboards/8563. So, let’s use it here.

In Grafana, hover over the ‘+’ icon on the left menu and click ‘Import’. Under ‘Import via grafana.com’, give the ID of the above-mentioned dashboard, which is 8563, and click ‘Load’.

On the next page give the ‘Job’ as ‘WSO2 Identity Server’ (which is the job name we gave at Prometheus), and click Import.

Now you’ll see a nice dashboard with all the health data from the Identity Server. It will include,

Server status (Up/Down)
Uptime
CPU usage
Memory Usage
GC details
Threads
Classloading

Following are some screenshots from my local server (which was mostly idling :))

Screenshots: Server status/uptime/CPU Usage; Memory usage/GC time; Threads/Classloading/Physical Memory

That’s it! Happy monitoring!

PS. I was also interested in getting the identity data here as well (like login details, user signups, etc.), and I made some good progress with that. I will cover those in a separate blog post.

References:

https://madurangasblogs.blogspot.com/2019/02/monitoring-wso2-identity-server-with.html


Mike Jones: self-issued

SecEvent Delivery specs now unambiguously require TLS

The SecEvent delivery specifications have been revised to unambiguously require the use of TLS, while preserving descriptions of precautions needed for non-TLS use in non-normative appendices. Thanks to the Security Events and Shared Signals and Events working group members who weighed in on this decision. I believe these drafts are now ready to be scheduled […]

The SecEvent delivery specifications have been revised to unambiguously require the use of TLS, while preserving descriptions of precautions needed for non-TLS use in non-normative appendices. Thanks to the Security Events and Shared Signals and Events working group members who weighed in on this decision. I believe these drafts are now ready to be scheduled for an IESG telechat.

The updated specifications are available at:

https://tools.ietf.org/html/draft-ietf-secevent-http-push-12 https://tools.ietf.org/html/draft-ietf-secevent-http-poll-11

HTML-formatted versions are also available at:

https://self-issued.info/docs/draft-ietf-secevent-http-push-12.html https://self-issued.info/docs/draft-ietf-secevent-http-poll-11.html

Sunday, 14. June 2020

Aaron Parecki

How to Leave Facebook

There are many reasons to delete your Facebook account, so let's start with the assumption you've already made the decision. Here are a few things to know before you press the big "Delete" button.


Actually deleting your account can have some unintended consequences, so I recommend a slightly different approach instead. The problem with deleting your account is that you will essentially disappear from Facebook, and any of your friends there won't be able to find you when they do things like tag you in a photo.

Remember, even if you delete your account, that isn't going to stop other people from trying to search for you or tag you in posts.

Where this starts to become a problem is it opens up the opportunity for hackers to create a new Facebook account with your name, and start to befriend people in your network, impersonating you and infiltrating your friend network. You might think this would only be a problem for famous people, but believe it or not it can happen to anyone. (If you do see an account that is impersonating someone, you can report it here.)

Another problem is trying to track down every website where you've used "Sign in with Facebook" and remember to add another login method. Keeping your Facebook account around just to be able to sign in to things you've forgotten about later isn't a bad idea.

So instead of deleting your account entirely, a better approach is to remove everything from your profile except for a post that tells people where they can actually follow you elsewhere online.

Let's get started.

Download Your Information

You'll probably want to download all your data before deleting everything, either for posterity, or so that you can import the posts into your own website later as well.

Facebook provides an export that contains a reasonably high fidelity version of everything you've done on Facebook. You can find the link to download by going to "Settings", "Your Facebook Information" and clicking "Download Your Information", or by visiting this link: https://www.facebook.com/dyi/

It will take a while before the download link is available, possibly days, so you may need to come back to the rest of this process later.

The HTML export is a fully browsable archive of everything in your account. If you want something you can use to import your posts into your website later then you can download the JSON version instead. 

Alright, with that out of the way we can start getting into the good stuff.

Clean Up Your Facebook Groups

Groups are one of the few reasons to use Facebook at all; since groups can be made private, they're a reasonable way to have a hidden place to share things with specific people. There also aren't a lot of good alternatives right now.

So you'll want to either remove yourself from all the groups you're in, or at the very least turn off all notifications so that you aren't reminded about them constantly. 

From a group's page, click the three dots and choose "manage notifications".

Turn notifications "off", and also disable member request notifications.

I say "off" in quotes because apparently this will still notify you if you are mentioned in a group. Good enough.

You can also "unfollow" the group which will stop pushing updates from it into your timeline, but you'll still remain a member.

Go ahead and go through your list of groups and clean those up.

Facebook Pages

If you've created any Facebook Pages in the past, you'll want to decide what happens to them. You can delete pages entirely, or hand them over to other admins. 

You can find the list of pages you maintain here: https://www.facebook.com/pages

From a page, click "Page Settings" and then find the "Page Roles" option in the sidebar. If there are other admins, you can remove yourself.

Update your Timeline Settings

Next you'll want to make sure nobody can post stuff on your profile. From your profile page, find the "timeline settings" option. This link might take you there directly.

Update your Timeline and Tagging Settings to look like the below screenshot.

This will prevent anyone from being able to post stuff to your profile without your knowledge, to make sure that only your final "find me elsewhere" post will be visible.

You'll also want to update your privacy settings to the below.

This locks down everything as much as possible so that people won't be prompted to friend you by correlating your contact info.

Under the "Face Recognition" setting, prevent Facebook from recognizing you in photos by disabling this.

Under the section called "Public Posts", you'll want to make a few changes. This is one of the few places where you actually do want one of the settings opened up.

I've turned the bottom three settings to the most restricted option, but "Who Can Follow Me" is set to "Public". That's so that people who don't know you will see your last "find me elsewhere" post when looking up your profile.

Start Deleting Stuff

With these settings out of the way, we can move on to the step of actually deleting old posts. The overall goal here is to make sure people know this is you while also directing them to find your content elsewhere. That means you want to avoid giving people a reason to follow your Facebook account, and deleting your old posts is a good way to demonstrate that you aren't using Facebook anymore.

Start by visiting your profile page and choose "Activity Log". This will bring up a very long list of everything you've done on Facebook. Click "Filter" and choose "Posts". That will show only the things you've posted rather than posts you're tagged in. 

Now you can go through and remove your old posts.

Depending on how thorough you want to be, you can do the same to remove your likes or comments as well. You'll probably want to go through the list of photos as well as photos you're tagged in. You don't necessarily need to untag yourself from everything though, because one of the settings previously mentioned makes it so other people can't see things you're tagged in from your profile page.

If you get bored deleting things, then there's another option to sort of do this in bulk. There's an option to "limit past posts" to be visible to only your friends. While this isn't quite the same as deleting, it at least means people who are searching for you won't be able to see your old posts. 

You can find that option under Settings -> Privacy

If you've previously set up a tool that imports posts into Facebook such as IFTTT, then you can actually bulk-delete posts that were created by that app.

Go to the "Apps and Websites" section in Settings and find the app such as IFTTT. From there, you can click "View and Edit", and then click the "Remove" button and you'll see a prompt asking if you want to delete things that app has posted.

That should do a pretty good job of mass-deleting things. Although in my testing, it missed a few posts, and some of the apps I know I used to post things weren't showing up in the list. Still, this saved me a bunch of time deleting the things it did catch.

You might want to double check that things aren't still showing up on your profile page by visiting your page and looking through the different sections. For example in my testing, there were still some uploaded photos showing up on my photos tab that weren't showing up in the activity log.

Another place to look for more things that may have been missed is the "Albums" section under photos.

Trim Your Profile Page

The next step is to limit what shows up on your profile page. Visit your profile, then click the "More" button and choose "Manage Sections".

Disable everything that you can. Some of them can't be turned off.

You'll probably want to review the list of people and pages you're following to make sure you want to continue to essentially endorse them from your profile. I decided to unfollow everyone who wasn't a mutual friend.

Clean Up Your Profile Info

I was surprised just how much stuff was in my profile. Each of these sections under the "About" tab has content that you should review and decide whether to keep on your profile.

This is where it's a bit of a balancing act between how much data you want to delete from Facebook vs whether you want people to be able to verify that this is in fact your profile. 

I decided to remove all the "family" connections to avoid feeding their graph database but also since most people finding my Facebook page really do not need to know who I am related to.

I removed everything from the "Life Events" section like my work history. If someone wants to know that they can find my LinkedIn.

Under Contact Info I added a link to my website and my Twitter account, since those are the two primary places I want to send people to instead of Facebook.

Make a Final "Find Me Elsewhere" Post

The last thing to do is write a new post that will be the last thing you post to Facebook. That post should tell people that you won't be posting here anymore, and direct them to one or more alternative ways to contact you. Think about why someone would land on your Facebook page and make a call to action to redirect them to a better location.

I like using the "Life Event" feature on Facebook for this post. It makes it a very large post and really drives the point home. Plus you can use this cute animation of someone shining a flashlight around as the image.

Check Your Work

Once you're happy with how things are looking, it's time to see what your Facebook page looks like to other people.

Facebook has a built-in feature to view the public version of your profile. Click the little eye in the menu bar on your profile page. That will show you what someone who is not a friend will see when they visit your profile. 

Now you can poke around your profile and make sure it looks the way you want! 

It might also be helpful to ask a friend to see what your profile looks like to them, since some of your old posts may have been set to "friends-only" and wouldn't show up in the public view.

Congrats! Now enjoy a Facebook-free life!

If you feel so inclined, add a link to this post in a comment under your "I'm leaving" post so that others can follow this advice too!

Saturday, 13. June 2020

Virtual Democracy

Open science builds scholarly commons (plural) across the planet

  PLEASE NOTE: This is a draft of a bit of the Open Scientist Handbook. There are references/links to other parts of this work-in-progress that do not link here in this blog. Sorry. But you can also see what the Handbook will be offering soon. “We see a future in which scientific information and scholarly … Continue reading Open science builds scholarly commons (plural) across the planet

Wednesday, 10. June 2020

Phil Windley's Technometria

Held Hostage

Summary: We need to replace platforms that intermediate transactions with protocols built on a universal trust framework like Sovrin to avoid a future of hostage taking and retaliatory regulations. Platforms service two-sided markets. We're all familiar with platform companies like Uber, AirBnB, Monster, eBay, and many others. Visa, Mastercard, and other credit card systems are pla

Summary: We need to replace platforms that intermediate transactions with protocols built on a universal trust framework like Sovrin to avoid a future of hostage taking and retaliatory regulations.

Platforms service two-sided markets. We're all familiar with platform companies like Uber, AirBnB, Monster, eBay, and many others. Visa, Mastercard, and other credit card systems are platforms. Platforms are a popular Web 2.0 business model because they create an attractive way for the provider to extract service fees from one side, and sometimes both sides, of the market. They can have big network effects and tend to be natural monopolies.

Platforms provide several things that make them valuable to users:

They provide a means of discovering relevant service providers.
They facilitate the transaction.
They help participants make the leap over the trust gap, as Rachel Botsman puts it.

Like compound interest, small advantages in any of these roles can have huge effects over time as network effects exploit the advantage to drive participants to a particular platform. This is happening rapidly during the recent crisis in food delivery platforms.

A recent New York Times article, As Diners Flock to Delivery Apps, Restaurants Fear for Their Future, highlights the power that platforms have over their users:

But once the lockdowns began, the apps became essentially the only source of business for the barroom restaurant he ran with a partner, Charlie Greene, in Columbus, Ohio. That was when the fees to the delivery companies turned into the restaurant’s single largest cost—more than what it paid for food or labor.

Pierogi Mountain’s primary delivery company, Grubhub, took more than 40 percent from the average order, Mr. Majesky’s Grubhub statements show. That flipped his restaurant from almost breaking even to plunging deeply into the red. In late April, Pierogi Mountain shut down.

"You have no choice but to sign up, but there is no negotiating," Mr. Majesky, who has applied for unemployment, said of the delivery apps. "It almost turns into a hostage situation."

The standard response to these problems from people is more regulation. The article discusses some of the attempts cities, counties, and states have made to rein in the fees that food delivery platforms are charging.

A better response is to create systems that don't require an intermediary. Sovrin provides an identity metasystem with a universal trust framework for building identity systems that can serve as the foundation for creating markets without intermediaries. When Sovrin has a token it can even facilitate the transaction.

Defining a marketplace requires determining the protocol for interaction between participants and creating the means of discovery. The protocol might be simple or complex depending on the market and could be built on top of DIDComm, or even ride on top of a verifiable credential exchange. There might be businesses that provide discovery, but they don't intermediate; they sit to the side of the interaction providing a service. For example, I might provide a service that allows a restaurant to define its menu, create a shopping cart, and provide for discovery, but I'm easily replaceable by another provider because the trust interaction and transaction are happening via a protocol built on a universal metasystem.

Building markets without intermediaries greatly reduces the cost of participating in the market and frees participants to innovate. And it does it without introducing regulation that stifles innovation and locks in incumbents by making it difficult for new entrants to comply. I'm hopeful that Sovrin and related technology will put an end to the platform era of the Internet by supporting greater, trustworthy autonomy for all participants.

Photo Credit: BE024003 from Tullio Saba (Public Domain Mark 1.0)

Tags: sovrin identity trust platform vrm

Tuesday, 09. June 2020

Mike Jones: self-issued

CBOR Tags for Date Registered

The CBOR tags for the date representations defined by the “Concise Binary Object Representation (CBOR) Tags for Date” specification have been registered in the IANA Concise Binary Object Representation (CBOR) Tags registry. This means that they’re now ready for use by applications. In particular, the full-date tag requested for use by the ISO Mobile Driver’s […]

The CBOR tags for the date representations defined by the “Concise Binary Object Representation (CBOR) Tags for Date” specification have been registered in the IANA Concise Binary Object Representation (CBOR) Tags registry. This means that they’re now ready for use by applications. In particular, the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices for personal identification” working group is now good to go.
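
For illustration, here is how the date 2020-06-09 can be written in CBOR diagnostic notation using the two registered tags: tag 1004 wraps an RFC 3339 full-date text string, while tag 100 wraps the number of days since 1970-01-01.

1004("2020-06-09")
100(18422)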

FYI, I also updated the spec to incorporate a few editorial suggestions by Carsten Bormann. The new draft changed “positive or negative” to “unsigned or negative” and added an implementation note about the relationship to Modified Julian Dates. Thanks Carsten, for the useful feedback, as always!

It’s my sense that the spec is now ready for working group last call in the CBOR Working Group.

The specification is available at:

https://tools.ietf.org/html/draft-ietf-cbor-date-tag-01

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-cbor-date-tag-01.html

Virtual Democracy

Joy, fun, and love in open science

How much joy do you get from your research? “Science functions best when scientists are motivated by the joy of discovery and a desire to improve society rather than by wealth, recognition, and professional standing. In spite of current pressures, it is perhaps remarkable that many scientists continue to engage in selfless activities such as … Continue reading Joy, fun, and love in open science

Phil Windley's Technometria

Get the Bike that Speaks to You

Summary: Once you know the kind of bike and the price range, only one thing matters: do you love riding it? My middle son Jacob recently asked me about a number of gravel bikes in the $1000-1500 price range. In looking them over, they all had aluminum frames, carbon forks, and mechanical disk brakes. The other components differed a little from bike to bike but nothing stood out. All

Summary: Once you know the kind of bike and the price range, only one thing matters: do you love riding it?

My middle son Jacob recently asked me about a number of gravel bikes in the $1000-1500 price range. In looking them over, they all had aluminum frames, carbon forks, and mechanical disk brakes. The other components differed a little from bike to bike but nothing stood out. All were from major manufacturers.

How to choose? I've bought a number of bikes over the years (I've only got three right now). I told Jacob that one thing made all the difference in choosing: the test ride. I've often been taken in by the specs on a bike, only to ride it and find that it felt dead. The bikes I've purchased spoke to me. They felt like they were alive, a part of me, and I wanted to ride them. Bottom line: buy the bike you fall in love with.

The implication of this is to buy from a local shop since you need to test ride and they have the bikes. Establishing a relationship with the shop is useful for service and advice too. They often will discount service on bikes they sold. Most bike shops have knowledgeable staff and can help you with everything from accessories to good riding trails. You might pay a bit more, but I think it's worth it.

Tags: cycling

Monday, 08. June 2020

Justin Richer

XYZ: Handles, Passing by Reference, and Polymorphic JSON

This article is part of a series about XYZ and how it works, also including articles on Why?, Interaction, Compatibility, and Cryptographic Agility. One comment I’ve gotten from several people when reading the XYZ spec text and surrounding documentation is about one of its core innovations. Namely, what’s with all the handles everywhere? What is a Handle? The XYZ protocol works by passing
This article is part of a series about XYZ and how it works, also including articles on Why?, Interaction, Compatibility, and Cryptographic Agility.

One comment I’ve gotten from several people when reading the XYZ spec text and surrounding documentation is about one of its core innovations. Namely, what’s with all the handles everywhere?

What is a Handle?

The XYZ protocol works by passing JSON objects around that represent different parts of the request and response. These parts include the keys the client is using to protect the request, the resources the client is asking for, the user information the client wants or knows about, the display information to be used in an interaction, and other stuff. Handles are, in short, a way to pass some of the request objects in as a reference value instead of as an object.

First a quick aside: why the name “handle” for this bit? In all honesty, it was the best I could come up with that wasn’t already massively overloaded within the space. Artifact, token, aspect, reference, etc … all arguably better than handle but they could be confusing. I hope that the final version of this protocol can bikeshed a better name for this concept, but for now they’re handles.

Let’s take keys as an example. In XYZ, the client identifies itself using its key. It can always pass its public key by value:

{
  "keys": {
    "proof": "jwsd",
    "jwks": {
      "keys": [
        {
          "kty": "RSA",
          "e": "AQAB",
          "kid": "xyz-1",
          "alg": "RS256",
          "n": "kOB5rR4Jv0GMeLaY6_It_r3ORwdf8ci_JtffXyaSx8xYJCCNaOKNJn_Oz0YhdHbXTeWO5AoyspDWJbN5w_7bdWDxgpD-y6jnD1u9YhBOCWObNPFvpkTM8LC7SdXGRKx2k8Me2r_GssYlyRpqvpBlY5-ejCywKRBfctRcnhTTGNztbbDBUyDSWmFMVCHe5mXT4cL0BwrZC6S-uu-LAx06aKwQOPwYOGOslK8WPm1yGdkaA1uF_FpS6LS63WYPHi_Ap2B7_8Wbw4ttzbMS_doJvuDagW8A1Ip3fXFAHtRAcKw7rdI4_Xln66hJxFekpdfWdiPQddQ6Y1cK2U3obvUg7w"
        }
      ]
    }
  }
}

This object contains the keying material that will uniquely identify the piece of software making this request, as well as the proofing mechanism that software intends to use with that key. This is especially useful for clients that generate their keys locally: they can show up with a new key and just start working, as long as the server’s policy is OK with that kind of dynamic client. But in the OAuth 2 world, most of the clients are pre-registered, and it would seem silly and wasteful to force all developers to push all of their keying information in every request. So what the client can do instead is present a key handle in lieu of the key itself.

{
  "keys": "7C7C4AZ9KHRS6X63AJAO"
}

This handle stands in for the object shown in full above. Upon seeing this handle, the server looks up the key and proofing method that the handle refers to. Instead of passing the key by value, the client has effectively passed the key by reference using the handle construct. Since we’re talking about keying material, it’s important to note that we’re not just talking about a fingerprint or thumbprint here. The value of the handle may not have any direct relationship to the value of the key itself, just so long as the AS knows what key it stands for. The key that the key handle points to can even change over time. In addition, the XYZ protocol already has some concessions for passing fingerprints and thumbprints instead of full keys, like this one for a TLS certificate.

{
  "keys": {
    "proof": "mtls",
    "cert#S256": "bwcK0esc3ACC3DB2Y5_lESsXE8o9ltc05O89jdN-dg2"
  }
}

From the XYZ protocol perspective, this is still passing the key by value, as it contains enough cryptographic information to identify the key and allow the AS to validate the incoming connection’s signatures. A handle can still be used instead of this hashed certificate value.

But now you might be thinking, where did the client get that key handle in the first place?

Static and Dynamic Handles

As long as the AS understands what the handle stands for and the client knows how to use it, the XYZ protocol doesn’t actually care how the parties learn of this handle. However, judging from experience with OAuth 2 and in particular dynamic registration, clients will probably be getting handles in one of a couple ways.

Static Handles are assigned to represent specific objects at some point before the transaction request. For a key handle, it’s probable that a developer filled in some information into an online registration form, including their public key. The AS generates a key handle that the developer can plug into their software and get started, or the developer could use their key directly. In both cases, the client is identified by its keys.

Dynamic Handles are assigned to represent specific objects in response to a transaction request. If the client presents an object by value in the request, the AS can, at runtime, generate a new handle and return it to the client software. For a key handle, the client can now send the handle in future requests instead of repeating its keying information. This is a huge boon for mobile applications, as they would be allowed to generate their keys locally and present them directly the first time they talked to a server, but then switch to the simpler and smaller key handle for subsequent requests with the same server. This gives us the benefits of dynamic client registration without a separate registration step being required.
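
As a rough sketch (the response field name here is illustrative rather than the exact XYZ wire format), the AS could hand back a newly minted handle alongside the rest of its response:

{
  "key_handle": "7C7C4AZ9KHRS6X63AJAO"
}

On its next request, the client would simply send that value in its keys field, as in the by-reference example above.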

You’ll notice that in both cases, the AS always translates the handle to a specific object. The AS is fully in charge of the mapping between the handle and the object it represents, and so the AS can define and change that mapping as it needs to. For example, the AS can allow a developer to update a registered client’s public key without changing the key handle that the client sends. Or instead of one handle representing a single key, the handle could represent a family of certificates signed by a common root. The clients in question can always send the key itself or the handle they’ve been configured with.

What Do Handles Stand For?

In the current XYZ protocol, different parts of the incoming request can be represented by handles. Each of these handles is a little different, as is the information that they represent, but all use the same exact mechanism of a handle value standing in for an object that would otherwise be passed by value. As we’ve covered key handles in our examples above, let’s look at the other parts.

Resource Handles are simple strings that stand in for one of the potentially complex objects that the client can use to specify what it wants to access. These strings are usually going to be defined by the APIs being protected, and they stand for common access patterns available at the API. In other words, they are equivalent to OAuth 2’s scope parameter. And in XYZ, the client can simultaneously request both handle and non-handle resources, allowing for rich authorization requests without mode switching.

Claims Handles are similar to resource handles in that they stand in for a set of user claims that the client is requesting.

Display Handles stand in for information that can be displayed to the user during an authorization request. These are most useful for dynamic and ephemeral clients, where the client will present all of this once and not need to again. These should be tied to a particular key in order to prevent impersonation of a client. Here we start to get close to the functionality of OAuth 2’s client_id, but dynamic registration and per-instance information of OAuth 2 clients as used in the wild has shown us that it’s valuable for them to vary independently. But for cases where it doesn’t need to vary, you can in fact use a client_id as the key handle and display handle for a hybrid OAuth 2 and XYZ system.

User Handles can be used with trusted client software to allow that client to assert, to the AS, that the same user is still there as they were previously. This can allow the AS to decide if it wants to shortcut the interaction process — after all, if it’s the same user and the client isn’t asking for something that needs additional approval, then just let them in. This is based on the Persisted Claims Token concept from UMA2. Unlike the claims, this represents what the client already knows about the user, not what it’s asking for.

Transaction Handles allow a client to refer to a whole ongoing transaction in the future. For example, while the client is polling, or when it receives the front channel response, or when it needs to respond to a challenge from the AS, the transaction handle allows the client to refer to the ongoing transaction object. It can also be used to do step-up and refresh authorization after an initial transaction has been completed. Unlike the other handles, this does not represent a single part of the request object, but rather the overall state. Therefore it’s not quite a handle in the same way, but it does at least represent something that the client is referring to. I think it’s likely that this needs a better, different name, but for the moment it’s a different kind of handle.

How Can The Protocol Express All This?

XYZ’s protocol is not just based on JSON, but it’s based on a particular technique known as polymorphic JSON. In this, a single field could have a different data type in different circumstances. For example, the resources field can be an array when requesting a single access token, or it can be an object when requesting multiple named access tokens. The difference in type indicates a difference in processing from the server. Within the resources array, each element can be either an object, representing a rich resource request description, or a string, representing a resource handle (scope). Importantly, the array can contain both of these at the same time — and it works semantically because they ultimately both refer to the same thing. The same construct is used for all of the different handles above. As a consequence, it’s clear to a developer that the handle is a replacement for, or a reference to, the object value.
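
For example, a single request might mix a resource handle (a scope-like string) with a rich resource object in the same array; the field names below are patterned on the XYZ examples and should be treated as illustrative rather than normative:

{
  "resources": [
    "dolphin-metadata",
    {
      "actions": [ "read", "write" ],
      "locations": [ "https://server.example.net/" ],
      "datatypes": [ "metadata", "images" ]
    }
  ]
}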

While it might not be a mode that developers working in statically typed languages are used to, we have implemented this in a number of different platforms, including the famously strict Java.

In a more statically typed JSON protocol, these would need to be two different fields, each with its own type information. The fields would need to be defined as mutually exclusive at the protocol level, like jwks and jwks_uri are for OAuth clients. This type of external interlock leads to consistency errors and complex verification schemes, especially as values change. By building the exclusivity into the protocol syntax, we can bypass all of that error-prone interlock checking. The resulting protocol definition is cleaner, more concise, and more consistent.

Sunday, 07. June 2020

Arjun Govind

How Self-Sovereign Identity Can Revolutionize Insurance

How a better approach to identity management can empower customers, enable compliance and cut down costs. Continue reading on R3 Publication »

How a better approach to identity management can empower customers, enable compliance and cut down costs.

Continue reading on R3 Publication »

Saturday, 06. June 2020

Mike Jones: self-issued

SecEvent Delivery Specs Addressing Directorate Reviews

I’ve published updated SecEvent delivery specs addressing the directorate reviews received during IETF last call. Thanks to Joe Clarke, Vijay Gurbani, Mark Nottingham, Robert Sparks, and Valery Smyslov for your useful reviews. The specifications are available at: https://tools.ietf.org/html/draft-ietf-secevent-http-push-11 https://tools.ietf.org/html/draft-ietf-secevent-http-poll-10 HTML-formatted

I’ve published updated SecEvent delivery specs addressing the directorate reviews received during IETF last call. Thanks to Joe Clarke, Vijay Gurbani, Mark Nottingham, Robert Sparks, and Valery Smyslov for your useful reviews.

The specifications are available at:

https://tools.ietf.org/html/draft-ietf-secevent-http-push-11

https://tools.ietf.org/html/draft-ietf-secevent-http-poll-10

HTML-formatted versions are also available at:

https://self-issued.info/docs/draft-ietf-secevent-http-push-11.html

https://self-issued.info/docs/draft-ietf-secevent-http-poll-10.html

Thursday, 04. June 2020

Tim Bouma's Blog

Pan-Canadian Trust Framework Version 1.1 — Thematic Issues Going Forward

Pan-Canadian Trust Framework Version 1.1 — Thematic Issues Going Forward We Conjure Our Own Spirit Norval Morrisseau Disclaimer: This is posted by me and does not represent the position of my employer or the working groups of which I am a member. The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.1 is now available on GitHub. This document reflects the collective ef
Pan-Canadian Trust Framework Version 1.1 — Thematic Issues Going Forward

We Conjure Our Own Spirit, by Norval Morrisseau

Disclaimer: This is posted by me and does not represent the position of my employer or the working groups of which I am a member.

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.1 is now available on GitHub. This document reflects the collective effort of almost a year since we last posted Version 1.0 in July 2019. Since then, the public sector (federal, provincial, territorial, and municipal) has met on an almost weekly basis with 20–28 participants on each call. The result is a truly Pan-Canadian perspective.

For this post, I won’t focus on what is in the latest version (please read!). Rather, I want to list the thematic issues we have identified as a group which we need to work on together to resolve. While Version 1.1 represents a huge milestone, there is still much exciting work ahead, which we have captured in the 11 thematic issues below.

Thematic Issue 1: Digital Relationships

We need to work on expanding our modeling and discussion of digital relationships — currently, there is not much more than a definition.

Thematic Issue 2: The Evolving State of Credentials

We now find ourselves in the middle of some very interesting developments in the areas of digital credentials. There is a sea-change happening in the industry where there is a movement from ‘information-sharing’ to ‘presenting digital proofs’. There is good work on standards (W3C) relating to verifiable credentials and decentralized identifiers.

Due to these new developments, we are now seeing the possibility that traditional intermediated services (such as centralized/federated login providers) may disappear. This may not happen in the near future, but we are currently adjusting the PCTF model to incorporate the broader notion of a verifiable credential and are generalizing it to allow physical credentials (e.g., birth certificates, driver's licenses) to evolve digitally within the model.

We are not sure that we have the model completely right (yet), but nonetheless Canada seems to be moving into the lead in understanding the implications of applying these technologies at ecosystem-scale (both public and private). As such, we are getting inquiries about how the PCTF might facilitate the migration to digital ecosystems and to new standards-based digital credentials, open-standards verification systems, and international interoperability.

Thematic Issue 3: Informed Consent

Informed consent is an evolving area and we don’t think the PCTF currently captures all the issues and nuances surrounding this topic especially in relation to the public sector. We have incorporated material from the DIACC and we have adjusted this material for public sector considerations, but we feel that much more work needs to be done. In the meantime, we feel that we have enough clarity in the PCTF to proceed with assessments — but we are ready to make changes if necessary.

Thematic Issue 4: Scope of the PCTF

Some have suggested that the scope of the PCTF should be broadened to include academic qualifications, professional designations, etc. We are currently experimenting with pilots in these areas with other countries. We have anticipated extensibility through the generalization of the PCTF model and the potential addition of new atomic processes. Keep in mind, however, that digital identity is a very specific but hugely important use case that we need to get right first. We are not yet ready to entertain a broadened scope for the PCTF into other areas, but soon we will.

Thematic Issue 5: Additional Detail

Many questions have been asked about the current version of this document in regard to the specific application of the PCTF. While we have a good idea, we still don’t have all of the answers. Much of this detail will be derived from the actual application of the PCTF (as was done with Alberta and British Columbia). The PCTF will be supplemented with detailed guidance in a separate document.

Thematic Issue 6: Unregistered Organizations

Currently, the scope of PCTF includes “all organizations registered in Canada (including inactive organizations) for which an identity has been established in Canada”. There are also many kinds of unregistered organizations operating in Canada such as sole proprietorships, trade unions, co-ops, NGOs, unregistered charities, and trusts. An analysis of these unregistered organizations in relation to the PCTF needs to be undertaken.

Thematic Issue 7: Assessing Outsourced Atomic Processes

Section 2.4.3 states that:

by design, the PCTF does not assume that a single provider is solely responsible for all of the atomic processes. Therefore, several bodies might be involved in the PCTF assessment process, focusing on different atomic processes, or different aspects (e.g., security, privacy, service delivery). Consideration must be given as to how to coordinate several bodies that might need to work together to yield an overall PCTF assessment. The organization being assessed is accountable for all parties within the scope of the assessment. The organization may decide that this is not feasible, nonetheless, the organization remains accountable. Such cases will be noted in the assessment.

The Issuer in this model is the authority ultimately accountable. Although an Issuer may choose to outsource or delegate the responsibility of the Credential Issuance atomic process to another body, the accountability remains with the Issuer.

We need to determine how multi-actor assessments will be conducted. It has been suggested that the organization being assessed should have the authority to speak to how well other organizations perform atomic processes on its behalf.

Thematic Issue 8: The Identity Continuity Atomic Process

The Identity Continuity atomic process is defined as:

the process of dynamically confirming that the Subject has a continuous existence over time (i.e., “genuine presence”). This process can be used to ensure that there is no malicious or fraudulent activity (past or present) and to address identity spoofing concerns.

It has been noted that there are privacy concerns with the notion of “dynamically confirming” the continuous existence of a Subject over time. We need to come up with a more precise and privacy-respecting definition of the Identity Continuity atomic process.

Thematic Issue 9: Signature

Appendix A defines signature as:

an electronic representation where at a minimum: the person signing the data can be associated with the electronic representation, it is clear that the person intended to sign, the reason or purpose for signing is conveyed, and the data integrity of the signed transaction is maintained, including the original.

We need to explore how the concept of signature is to be applied in the context of the PCTF.

Thematic Issue 10: Foundation Name, Primary Name, Legal Name

Appendix A has definitions for Foundation Name, Primary Name, and Legal Name.

The three terms more or less mean the same thing. We need to pick the preferred term and be consistent in its usage.

Thematic Issue 11: Review of the Appendices

At some point, we should undertake a full review of the current appendices.

For each appendix, we need to evaluate its utility, applicability, and appropriateness, and determine if it should continue to be included in the PCTF document. Some appendices will remain; some may be moved to a guidelines document; while others might be discarded outright. Some of the appendices that remain may need to be amended.

In closing, we remain on a journey of defining trust in a new way — not only for each institution or program but for the digital ecosystem as a whole. We are looking for ways to further broaden this perspective and I will keep you posted on new developments.


Mike Jones: self-issued

Successful OpenID Foundation Virtual Workshop

I was pleased by the quality of the discussions and participation at the first OpenID Foundation Virtual Workshop. There were over 50 participants, with useful conversations happening both on the audio channel and in the chat. Topics included current work in the working groups, such as eKYC-IDA, FAPI, MODRNA, FastFed, Shared Signals and Events, and […]

I was pleased by the quality of the discussions and participation at the first OpenID Foundation Virtual Workshop. There were over 50 participants, with useful conversations happening both on the audio channel and in the chat. Topics included current work in the working groups (such as eKYC-IDA, FAPI, MODRNA, FastFed, Shared Signals and Events, and OpenID Connect), OpenID Certification, and a discussion on interactions between browser privacy developments and federated login. Thanks to all who participated!

Here’s my presentation on the OpenID Connect working group and OpenID Certification: (PowerPoint) (PDF).

Update: The presentations from the workshop are available at OIDF Virtual Workshop – May 21, 2020.

Wednesday, 03. June 2020

Phil Windley's Technometria

Using Sovrin for Age Verification

Summary: How can I use Sovrin to perform age verification to minimize the personal data I have to keep? This blog post discusses the basics. Kamea, a member of the Sovrin Telegram channel asked this question: I’m looking for our users to sign up with a Sovrin account, validate their identity, then use it to verify age and passport country on our network using a zk proof so we do

Summary: How can I use Sovrin to perform age verification to minimize the personal data I have to keep? This blog post discusses the basics.

Kamea, a member of the Sovrin Telegram channel asked this question:

I’m looking for our users to sign up with a Sovrin account, validate their identity, then use it to verify age and passport country on our network using a zk proof so we don’t have to handle any personal data.

There are actually two different things here and it's useful to separate them.

First, there's no such thing as a "Sovrin account". SSI doesn't have accounts; it has relationships and credentials. But your company will have accounts. The way that works is you set up an enterprise agency on your side to create a relationship (technically a peer DID exchange) with anyone who wants to create an account with you. After they establish that relationship, you can begin to collect the attributes you need to provide service. Some of those will be self-asserted, and for others you might require credentials, depending on your use case.

The second thing is determining which credentials you want to require. If your requirement is an attested proof of age, then there could be several choices. In an ideal world, people would have a number of those (e.g., national ID, driver's license, passport, bank KYC). In the current world, you might have to partner with someone like Onfido or another KYC company to give people a way to get their age verified and a credential to use in proving it. Once people have this credential, however, it gives them the means of proving their age (and likely other attributes) anywhere, not just to you.

Kamea mentioned the desire to avoid handling personal data, and this is a good solution for that. The proof would show that the person is over 18 and their country of residence, without you seeing or recording any other information.
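
As a minimal sketch of how that might look in practice, here is a proof request in the Hyperledger Indy anoncreds style used by Sovrin-based agents; the attribute names and the credential-definition restriction are hypothetical:

{
  "name": "age-and-country-check",
  "version": "1.0",
  "nonce": "123432421212",
  "requested_attributes": {
    "country_attr": {
      "name": "country",
      "restrictions": [ { "cred_def_id": "<age-credential-definition-id>" } ]
    }
  },
  "requested_predicates": {
    "over_18": {
      "name": "age",
      "p_type": ">=",
      "p_value": 18,
      "restrictions": [ { "cred_def_id": "<age-credential-definition-id>" } ]
    }
  }
}

The holder answers with a zero-knowledge proof that satisfies the predicate, so the verifier learns only that the age is at least 18, plus the disclosed country value, and never sees the birthdate itself.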

I recommend running through the demo at Connect.me to see how all this could work. There are a number of vendors who can supply help with enterprise agencies and consulting on how to set this up in your organization.

Photo Credit: Under 25? Please be prepared to show proof of age when buying alcohol from Gordon Joly (CC BY-SA 2.0)

Tags: sovrin sovrin+use+cases


Justin Richer

XYZ: Why?

This article is part of a series about XYZ and how it works, also including articles on Handles, Interaction, Compatibility, and Cryptographic Agility. It’s been about a year and a half since I started in earnest on the XYZ project. I’ve talked with a variety of different people and companies about how it fits and doesn’t fit their use cases, I’ve presented it at a number of different conferences
This article is part of a series about XYZ and how it works, also including articles on Handles, Interaction, Compatibility, and Cryptographic Agility.

It’s been about a year and a half since I started in earnest on the XYZ project. I’ve talked with a variety of different people and companies about how it fits and doesn’t fit their use cases, I’ve presented it at a number of different conferences, and I’ve submitted that work into the IETF in the hopes of starting a new standards working group based on what we’ve built and learned.

With that in mind, I wanted to take some time here to discuss a number of the core aspects of XYZ: why it makes the engineering decisions it does and what effects those decisions have on the resulting system. But I wanted to start this with a look at a more fundamental question: Why am I doing this?

Why Not Extend OAuth 2?

I’ve covered OAuth 2’s shortcomings in the past, and overcoming these shortcomings was a key motivation for starting this project. Many of these shortcomings can be overcome by patching the existing framework with new functions, new restrictions, and new techniques. OAuth 2 has a long and storied history of doing this, and I’ve been the editor for three such specifications myself along with a whole litany of proposed extensions, profiles, and other work. OAuth 2 is not going away, and in fact I do think that patching it up makes sense for those that want to or need to keep using it. I know I’m going to keep using it to solve problems, and I expect to keep teaching classes and doing engineering work on it for a long time yet.

But I still think we’ve come to the point where we’ve patched the framework so much that it’s becoming more fragile and less flexible. We need to start looking to the future before the future surprises us. We already have needs that OAuth 2 doesn’t solve very well, even with all its extensions. People are off inventing their own point solutions right now, but that doesn’t have to be the case forever. We stand a much better chance of success if we learn from the evolution of OAuth 2 and build from that, throwing away things that didn’t work out or didn’t age well.

Why Build XYZ?

I started XYZ as an exercise in stepping back from OAuth 2’s model of the world, and the assumptions that come with it, to see what the world could look like without those constraints. XYZ was my attempt at answering the question, “What if OAuth had been made today with what we know now?”

Some have described XYZ as “OAuth 2, if we knew better,” and I take that as a high compliment. But it’s important to know that it’s not a completely radical departure, and in fact XYZ has been designed to be layered into OAuth 2 based platforms and re-use many concepts in a new and consistent way.

Why Make a Standard?

This is always a hard question. The standards process is fraught with stress and peril, and you don’t always get the best results from it. If I had kept XYZ as a simple standalone project, life would almost certainly be easier. The handful of groups that are implementing the draft protocol would still be able to do so, and as an open source project it would continue to grow and be available to anyone who wanted to use it.

And I would retain complete control. But therein lies a problem: there are a LOT of really smart people out there, and I think this idea is so much bigger than me. And so I set out to start the process of proposing a new working group in the IETF to both bring in smart people and provide a mixing bowl for all their good ideas, along with mine. If this is going to be robust and powerful, like I think it can be, then it needs a strong and diverse community to build it.

What’s In a Name?

The name XYZ is somewhat of an accident, since I bought the domain oauth.xyz simply because I saw it was available (and on sale!), and I didn’t have any plans for it right away. It became a place to host my writeup of “what-comes-after-OAuth-2” and therefore lent its name to the result. Since this wasn’t really OAuth, it became known as XYZ.

I’m the first to admit that XYZ is not a great name for a project. It’s not unique, it’s hard to search for, it doesn’t stand for anything in particular, and more importantly it’s a common way to say something is a stand-in. But that’s just it: it was always meant to be a functional placeholder for what would come next. An experiment in progress, one with real code and real systems working together, but far from a final product. You can deploy XYZ as a protocol today, but it’s going to change going forward as the work continues. I still think we could call the next big thing that we’re working on OAuth 3, but that’s for the wider OAuth community to decide when the time is right.

I think it’s likely that XYZ will live on as an implementation of what comes out of the standards process, even as that changes.

What’s In It For You?

I don’t usually like to get personal here on my professional blog, but some people have been positing why I’ve been doing this work myself. I’ve been accused of trying to subvert the OAuth 2 world and drive OAuth 2 implementors out of business by causing market confusion. This doesn’t make any sense to me considering several of my clients are OAuth 2 implementors. I want them to continue to succeed, and I think the fear of something new is unfounded and bad business sense.

I’ve also been accused of doing this to get famous. To that I say I genuinely doubt that being a part of the creation process for a security protocol that most people won’t ever see is going to make me more well-known than I already am, at least in the circles that care about such things. At the moment, I’m lucky enough that my consulting company is fully booked — if not more than full — and I’m having to divert some incoming projects to others.

I’ve also been accused of having some secret project that I’m trying to push onto the world, or a stealth start up that I’m trying to drive in a first-to-market situation. Simply put, I don’t have anything like that. All the prototype code and specifications are open source under liberal licenses. So no, I’m not doing this to get rich and/or famous.

The real reason is a lot more mundane and yet it still confuses people: I want the internet to be better, and I see this as an opportunity to make a difference. I started XYZ as a concept and prototype before any companies talked to me about building it. It was after I had shown that there was something real to this idea that people started to find me because it was a solution that fit the problems that they’re seeing. The feedback and experience of the last year and a half have radically changed the details of XYZ, but the core concepts are still there and still feel sound.

So what’s in it for me? The same thing that’s in it for you: more security, easier development, wider use cases, and a better internet. I don’t know why some think that this isn’t motivation enough, but I promise I’m not hiding ulterior motives.

What’s Next?

I hope that at this point it’s more clear why I started XYZ and why I am trying to start the TxAuth group at IETF. In the next few blog posts, I plan to go over some of the key features of the XYZ protocol and explain why they are important and what they bring that OAuth 2 was lacking. Things like pass-by-reference handles, proof of possession, interaction models, and extensibility all have key roles, and they’ve been explicitly designed into XYZ over the last year and a half. While all of this is covered in some depth on the XYZ website, I think it’s time to push the conversation even further.

Monday, 01. June 2020

Arjun Govind

Hi Philip, thank you for sharing this fascinating read!

Hi Philip, thank you for sharing this fascinating read! Amongst the several points you make in the piece, I think one that I most strongly endorse is the fact that "self-sovereign ID" is not the best name for the DID + VC approach to identity. I remember attending a fascinating discussion at the latest IIW session entitled "Must We Call it Self-Sovereign ID? Hopefully Not". Meeting notes here if y

Hi Philip, thank you for sharing this fascinating read! Amongst the several points you make in the piece, I think one that I most strongly endorse is the fact that "self-sovereign ID" is not the best name for the DID + VC approach to identity. I remember attending a fascinating discussion at the latest IIW session entitled "Must We Call it Self-Sovereign ID? Hopefully Not". Meeting notes here if you're interested: https://iiw.idcommons.net/Must_we_call_it_%22Self-Sovereign_Identity%22%3F_(hopefully_not)

Thanks again for writing!


Eddie Kago

Kweli - Pan-African Self-sovereign Identity Framework

Introduction: This article is a brief on the essence of a Pan-African approach towards technological innovation, resilience, and the possibilities of a robust software approach in problem solving for Africa. Pan-Africanism has been hailed for years by greats like Toussaint Louverture of the Haitian revolution in the 18th century to Kwame Nkrumah in the 20th century as the ideal that individu

Introduction: This article is a brief on the essence of a Pan-African approach towards technological innovation, resilience, and the possibilities of a robust software approach in problem solving for Africa.

Pan-Africanism has been hailed for years by greats from Toussaint Louverture of the Haitian revolution in the 18th century to Kwame Nkrumah in the 20th century as the ideal that individuals of African descent, on the continent and in the diaspora, ought to contribute towards a collective self-reliance to fulfill the potential of the continent. Fast forward to the present day and the African Union, constituted to help achieve the Pan-African agenda and now propelled by Agenda 2063, the blueprint and master plan for transforming Africa into the global powerhouse of the future.

the Black Star Gate, Accra

Agenda 2063 is a mission of the AU to ensure that Africa occupies powerhouse status by the year 2063. Its most visible component yet is the decision to treat the continent as a single trade bloc allowing free movement of goods and services across borders, thus championing intra-continental trade. This is happening under the African Continental Free Trade Area Agreement, which makes Africa the largest free trade area in the world by number of participating countries, second only to the World Trade Organization in membership. For starters, this translates to a combined gross domestic product (GDP) of more than $3.4 trillion ($3,400,000,000,000) and a combined population of 1.4 billion people when the AfCFTA comes into effect on July 1, 2020.

The free trade area will result in significant reductions in trade tariffs and quotas, along with free movement of people, goods, and services within the single continental market. Gradually, the vision of the founding fathers of the Pan-African movement to achieve unity, self-determination, freedom, progress, and collective prosperity of the African people will become a reality. However, a Pan-African technological approach is the best shot at fully achieving Agenda 2063.

A Pan-African Technological Approach

The Pan-African technological approach to employ is a combination of bits and atoms: hardware-based innovations in harmony with software solutions designed to adapt to the African way of life. As such, this demands resourcefulness, multiple areas of application for a given innovation, and adaptability over time for the widest range of Africans on the continent and in the diaspora. This would be the engine that powers the legal tools, the ideas, and earlier policy frameworks, enabling implementation at scale and in the shortest time possible.

According to the UN World Population Prospects Report of 2017, Africa will have the largest youth population in the world by 2050. With the quality of life set to improve and with growing numeracy and literacy levels, such a youthful population gives the continent a massive “leap-frogging” advantage in multiple sectors of her economy. It is this population that will constitute a significant portion of the global labour force, and with the increasing integration of technology into daily life and the workplace, it should come as no surprise when creative approaches emerge to solving age-old problems that have faced us as a people.

Comparison of Youthful Population by Region between 2017 and 2050

2020 and Beyond

Digital financial services (DFS) continue to be the way most Africans entering the digital economy are introduced to information and communication technology and mobile phone usage. Hence DFS solutions maintain their position as the “Gateway” to Fourth Industrial Revolution (4IR) innovations. EdTech and Entertainment will also be significant gateways in the emerging post-Covid world. Specific areas gaining traction are the use of Artificial Intelligence to build smarter systems and inform decision making, and the adoption of Blockchain Technologies to build resilient data, financial, and verification systems, as is evident in Kenya, Nigeria, Tunisia, and Rwanda.

From Kenya, I co-created Blockchain software for Digital Identity with a group of Pan-African technologists as part of the Identity Working Group of the African Digital Asset Framework. The project, called Kweli (Kiswahili for Truth), is based on a research paper authored from early 2019 and released in January 2020. Kweli proposes a Digital Trust model to enable secure digital transactions in a privacy-preserving way, pegged on the identities of Africans locally and in the diaspora. The most efficient way we identified to achieve a system like Kweli is to build it as a blockchain identity network that allows for flexible control over one's own identity and associated data. This approach is generally termed Decentralized Identity or Self-sovereign Identity.

A Pan-African Technological Approach will only be efficient if driven by projects built as Open Source and adopting Open Technology Standards. An Open Source approach seeds innovations based on the initial project, either by improving on it or by inspiring future, more complex projects. Consequently, it births a strong community of contributors and supporters all working towards the success of the project. We built Kweli as Free and Open Source Software with this in mind.

It will be exciting to observe similar projects cropping up in fields such as Robotics, Internet of Things powered solutions, Artificial Intelligence, and 3D Printing that will provide creative solutions for Africans. While a solution like Kweli may have barely scratched the surface, it is a bold step worthy of the visions of Pan-Africanists, from the founding days to the present-day Pan-Africanists chugging along with their own unique contributions.

Links to References

Agenda 2063 - https://au.int/en/agenda2063

African Continental Free Trade Area - https://au.int/en/ti/cfta/about

UN Population Report - https://esa.un.org/unpd/wpp/Publications/Files/WPP2017_KeyFindings.pdf

The Fourth Industrial Revolution in Africa — https://www.brookings.edu/research/the-fourth-industrial-revolution-and-digitization-will-transform-africa-into-a-global-powerhouse

Kweli: Pan-African Self-sovereign Identity - https://kweli.adaf.io

Kweli - Pan-African Self-sovereign Identity Framework was originally published in Vibranium ID on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 31. May 2020

Mike Jones: self-issued

secp256k1 curve and algorithm registered for JOSE use

IANA has registered the “secp256k1” elliptic curve in the JSON Web Key Elliptic Curve registry and the corresponding “ES256K” signing algorithm in the JSON Web Signature and Encryption Algorithms registry. This curve is widely used among blockchain and decentralized identity implementations. The registrations were specified by the COSE and JOSE Registrations for WebAuthn Algorithms specification, [

IANA has registered the “secp256k1” elliptic curve in the JSON Web Key Elliptic Curve registry and the corresponding “ES256K” signing algorithm in the JSON Web Signature and Encryption Algorithms registry. This curve is widely used among blockchain and decentralized identity implementations.

The registrations were specified by the COSE and JOSE Registrations for WebAuthn Algorithms specification, which was created by the W3C Web Authentication working group and the IETF COSE working group because WebAuthn also allows the use of secp256k1. This specification is now in IETF Last Call. The corresponding COSE registrations will occur after the specification becomes an RFC.
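
For illustration, a JWK using the newly registered curve and a JWS protected header using the new algorithm look roughly like the following (the coordinate values are placeholders, not a real key):

{
  "kty": "EC",
  "crv": "secp256k1",
  "x": "<base64url-encoded x coordinate>",
  "y": "<base64url-encoded y coordinate>"
}

{
  "alg": "ES256K"
}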


Aaron Parecki

The Real Cause of the Sign In with Apple Zero-Day

The zero-day bug in Sign In with Apple actually had nothing to do with the OAuth or OpenID Connect part of the Sign In with Apple exchange, and very little to do even with JWTs. Let's take a closer look to see what actually happened.

Last week, a security researcher discovered and disclosed a zero-day bug in Sign In with Apple, and collected a $100,000 bounty.

Sign In with Apple is similar to OAuth and OpenID Connect, with Apple’s own spin on it. While there were some critical bugs due to Apple’s initial poor implementation of OpenID Connect, as documented by the OpenID Foundation, those bugs were fixed relatively quickly.

The zero-day bug that was recently discovered actually had nothing to do with the OAuth or OpenID Connect part of the Sign In with Apple exchange, and very little to do even with JWTs. Let’s take a closer look to see what actually happened.

Disclaimer: I do not have any inside knowledge about Apple's systems. The information in this post is based on the original writeup by Bhavuk Jain and my own observations using Apple's API.

The original writeup heavily mentions JWTs and emphasizes the OAuth exchange, and I’ve seen many reactions suggesting that the problem was in the JWT creation or validation, or some poor implementation of OpenID Connect. But instead, the problem was actually much simpler than that.

Breaking down the OAuth Flow

In an OAuth or OpenID Connect flow, the specs talk about the communication from the client requesting the token to the server generating the token. The steps where the user authenticates with the server are intentionally left out of the specs, since those are considered implementation details of the server.

The application initiates an OAuth request by sending the user to the authorization server. This part is the “authorization request” step in the specs. The user authenticates, and may see a consent screen where they approve the request from the application. This part is “out of scope” of the specs. The authorization server generates an authorization code or ID token in the response and sends the user back to the application. This is the “au