Last Update 3:53 PM April 15, 2021 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Wednesday, 14. April 2021

Bill Wendel's Real Estate Cafe

#LetUsDream: Do we dwell together to make money or is this a community?


What is the meaning of this city?
Do you huddle together because you
love each other?
What will you answer? “We all dwell together to make money
from each other”?…

The post #LetUsDream: Do we dwell together to make money or is this a community? first appeared on Real Estate Cafe.


Mike Jones: self-issued

Second Version of W3C Web Authentication (WebAuthn) Now a Standard


The World Wide Web Consortium (W3C) has published this Recommendation for the Web Authentication (WebAuthn) Level 2 specification, meaning that it is now a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them user verification enhancements, manageability, enterprise features, and an Apple attestation format. The companion second FIDO2 Client to Authenticator Protocol (CTAP) specification is also approaching becoming a completed standard.

See the W3C announcement of this achievement. Also, see Tim Cappalli’s summary of the changes in the second versions of WebAuthn and FIDO2.


Simon Willison

Why you shouldn't use ENV variables for secret data


Why you shouldn't use ENV variables for secret data

I do this all the time, but this article provides a good set of reasons that secrets in environment variables are a bad pattern - even when you know there's no multi-user access to the host you are deploying to. The biggest problem is that they often get captured by error handling scripts, which may not have the right code in place to redact them. This article suggests using Docker secrets instead, but I'd love to see a comprehensive write-up of other recommended patterns for this that go beyond applications running in Docker.

Via The environ-config tutorial
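To make the alternative concrete, here is a rough Python sketch (not from the original post) of the file-based pattern Docker secrets uses, where each secret is mounted as a file under /run/secrets/. The read_secret helper and its environment-variable fallback are illustrative assumptions, not a recommendation from the article.

import os
from pathlib import Path

def read_secret(name, default=None):
    # Docker secrets convention: each secret is mounted as a file
    # under /run/secrets/<name>, readable only inside the container.
    secret_file = Path("/run/secrets") / name
    if secret_file.exists():
        return secret_file.read_text().strip()
    # Fallback to an environment variable for local development only;
    # this is exactly the pattern the article warns about in production.
    return os.environ.get(name.upper(), default)

db_password = read_secret("db_password")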


Karyl Fowler

Takeaways from the Suez Canal Crisis

An Appeal for Supply Chain Agility — Powered by Verifiable Credentials

Ever Given — Wikimedia Commons

The Suez Canal debacle had a massive impact on global supply chains — estimated at >$9B in financial hits each day the Ever Given was stuck, totaling nearly $54B in losses in stalled cargo shipments alone. And it’s no secret that the canal, which sees nearly 12% of global trade move through it annually, dealt an especially brutal blow to the oil and gas industry while blocked (given it represents the primary shipping channel for nearly 10% of oil and 8% of natural gas).

While the Ever Given itself was a container ship, likely loaded with finished goods rather than raw materials or commodities, the situation has already had — and will continue to have — a massive negative impact on totally unrelated industries…for months to come. Here’s an example of the resulting impact on steel and aluminum prices; this in turn affects oil and gas (steel pipes carry oil) as well as infrastructure and…finished goods (like cars). And the costs continue to climb as the drama continues, with port authorities and insurers battling over what’s owed to whom.

Transmute is a software company — a verifiable credentials as a service company, to be exact — and we’ve been focused specifically on the credentials involved in moving steel assets around the globe alongside our customers at DHS SVIP and CBP for the last couple of years now. Now, there’s no “silver bullet” for mitigating the fiscal impact of the Ever Given on global trade, and ships that arrived the day it got stuck or shortly after certainly faced a tough decision — sail around the Cape of Africa for up to ~$800K [fuel costs alone] and ~26 extra days in transit, or wait it out at up to $30K per day in demurrage expense [without knowing it’d only be stuck for 6 days, or ~$180,000].

So what if you’re a shipping manager and you can make this decision faster? Or, make the call before your ship arrives at the canal? [Some did make this decision, by the way]. What if your goods are stuck on the Ever Given — do you wait it out? Switching suppliers is costly, and you’ve likely got existing contracts in place for much of the cargo. Even if you could fulfill existing contracts and demand on time with a new supplier, what do you do with the delayed cargo expense? What if you’re unsure whether you can sell the duplicate and delayed goods when they reach their originally intended destination?

Well, verifiable credentials — a special kind of digital document that’s cryptographically provable, timestamped and anchored to an immutable ledger at the very moment in time it’s created — can give companies the kind of data needed to make these sorts of decisions. With use over time for trade data, verifiable credentials build a natural reputation for all the things the trade documents are about: suppliers, products, contracts, ports, regulations, tariffs, time between supply chain handoff points, etc.

This type of structured data is of such high integrity that supply chain operators can rely on it and feel empowered to make decisions based on it.

What I’m hoping comes from this global trade disaster is a change in the way supply chain operators make critical decisions. Supply chains of the future will be powered by verifiable credentials, which seamlessly bridge all the data silos that exist today — whether software-created silos or even the paper-based manual, offline silos.

Today, it’s possible to move from a static, critical-chain style of management, where we often find ourselves in a reactive position, to supply chains that look more like an octopus. High integrity data about suppliers and products enables proactive, dynamic decision making in anticipation of, and in real-time response to, shifts in the market — ultimately capturing more revenue opportunities and mitigating risk at the same time.

Takeaways from the Suez Canal Crisis was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aaron Parecki

How to Sign Users In with IndieAuth


This post will show you step by step how you can let people log in to your website with their own IndieAuth website so you don't need to worry about user accounts or passwords.

What is IndieAuth?

IndieAuth is an extension of OAuth 2.0 that enables an individual website like someone's WordPress, Gitea or OwnCast instance to become its own identity provider. This means you can use your own website to sign in to other websites that support IndieAuth.

You can learn more about the differences between IndieAuth and OAuth by reading OAuth for the Open Web.

What You'll Need

You'll need a few tools and libraries to sign users in with IndieAuth.

An HTTP client.
A URL parsing library.
A hashing library that supports SHA256.
A library to find <link> tags in HTML.
The ability to show an HTML form to the user.

IndieAuth Flow Summary

Here is a summary of the steps to let people sign in to your website with IndieAuth. We'll dive deeper into each step later in this post.

1. Present a sign-in form asking the user to enter their server address.
2. Fetch the URL to discover their IndieAuth server.
3. Redirect them to their IndieAuth server with the details of your sign-in request in the query string.
4. Wait for the user to be redirected back to your website with an authorization code in the query string.
5. Exchange the authorization code for the user's profile information by making an HTTP request to their IndieAuth server.

Step by Step

Let's dive into the details of each step of the flow. While this is meant to be an approachable guide to IndieAuth, eventually you'll want to read the spec to make sure you're handling all the edge cases you might encounter properly.

Show the Sign-In Form

First you'll need to ask the user to enter their server address. You should show a form with a single HTML field, <input type="url">. You need to know at least the server name of the user's website.

To improve the user experience, you should add some JavaScript to automatically add the https:// scheme if the user doesn't type it in.

The form should submit to a route on your website that will start the flow. Here's a complete example of an IndieAuth sign-in form.

<form action="/indieauth/start" method="post">
  <input type="url" name="url" placeholder="example.com">
  <br>
  <input type="submit" value="Sign In">
</form>

When the user submits this form, you'll start with the URL they enter and you're ready to begin the IndieAuth flow.

Discover the IndieAuth Server

There are potentially two URLs you'll need to find at the URL the user entered in order to complete the flow: the authorization endpoint and token endpoint.

The authorization endpoint is where you'll redirect the user to so they can sign in and approve the request. Eventually they'll be redirected back to your app with an authorization code in the query string. You can take that authorization code and exchange it for their profile information. If your app wanted to read or write additional data from their website, such as when creating posts using Micropub, it could exchange that code at the second endpoint (the token endpoint) to get an access token.

To find these endpoints, you'll fetch the URL the user entered (after validating and normalizing it first) and look for <link> tags on the web page. Specifically, you'll be looking for <link rel="authorization_endpoint" href="..."> and <link rel="token_endpoint" href="..."> to find the endpoints you need for the flow. You'll want to use an HTML parser or a link rel parser library to find these URLs.
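As a rough illustration (not code from the post), a discovery helper in Python might look like the sketch below, assuming the requests and beautifulsoup4 libraries; a real client should also validate and normalize the user-supplied URL before fetching it.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def discover_endpoints(url):
    # Fetch the user's page and pull out the authorization and token
    # endpoints advertised via <link rel="..."> tags.
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    endpoints = {}
    for link in soup.find_all("link", href=True):
        rels = link.get("rel") or []
        for rel in ("authorization_endpoint", "token_endpoint"):
            if rel in rels and rel not in endpoints:
                # hrefs may be relative, so resolve them against the final URL
                endpoints[rel] = urljoin(response.url, link["href"])
    return endpoints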

Start the Flow by Redirecting the User

Now you're ready to send the user to their IndieAuth server to have them log in and approve your request.

You'll need to take the authorization endpoint you discovered in the previous request and add a bunch of parameters to the query string, then redirect the user to that URL. Here is the list of parameters to add to the query string:

response_type=code - This tells the server you are doing an IndieAuth authorization code flow.
client_id= - Set this value to the home page of your website the user is signing in to.
redirect_uri= - This is the URL where you want the user to be returned to after they log in and approve the request. It should have the same domain name as the client_id value.
state= - Before starting this step, you should generate a random value for the state parameter and store it in a session and include it in the request. This is for CSRF protection for your app.
code_challenge= - This is the base64-urlencoded SHA256 hash of a random string you will generate. We'll cover this in more detail below.
code_challenge_method=S256 - This tells the server which hashing method you used, which will be SHA256 or S256 for short.
me= - (optional) You can provide the URL the user entered in your sign-in form as a parameter here which can be a hint to some IndieAuth servers that support multiple users per server.
scope=profile - (optional) If you want to request the user's profile information such as their name, photo, or email, include the scope parameter in the request. The value of the scope parameter can be either profile or profile email. (Make sure to URL-encode the value when including it in a URL, so it will end up as profile+email or profile%20email.)

Calculating the Code Challenge

The Code Challenge is a hash of a secret (called the Code Verifier) that you generate before redirecting the user. This lets the server know that the thing that will later make the request for the user's profile information is the same thing that started the flow. You can see the full details of how to create this parameter in the spec, but the summary is:

1. Create a random string (called the Code Verifier) between 43-128 characters long
2. Calculate the SHA256 hash of the string
3. Base64-URL encode the hash to create the Code Challenge

The part that people most often make a mistake with is the Base64-URL encoding. Make sure you are encoding the raw hash value, not a hex representation of the hash like some hashing libraries will return.
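Here is a minimal Python sketch of that calculation (an illustration, not code from the post); note that the raw digest bytes are encoded, not a hex string, and the base64url padding is stripped.

import base64
import hashlib
import secrets

def make_code_verifier():
    # 64 hex characters, comfortably inside the 43-128 character range
    return secrets.token_hex(32)

def make_code_challenge(code_verifier):
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")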

Once you're ready with all these values, add them all to the query string of the authorization endpoint you previously discovered. For example if the user's authorization endpoint is https://indieauth.rocks/authorize because their website is https://indieauth.rocks, then you'd add these parameters to the query string to create a URL like:

https://indieauth.rocks/authorize?response_type=code
  &client_id=https://example-app.com
  &redirect_uri=https://example-app.com/redirect
  &state=a46a0b27e67c0cb53
  &code_challenge=eBKnGb9SEoqsi0RGBv00dsvFDzJNQOyomi6LE87RVSc
  &code_challenge_method=S256
  &me=https://indieauth.rocks
  &scope=profile
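For illustration, assembling that URL in Python could look like the sketch below (a hypothetical helper, not from the post); urlencode takes care of URL-encoding values such as a "profile email" scope.

from urllib.parse import urlencode

def build_authorization_url(authorization_endpoint, client_id, redirect_uri,
                            state, code_challenge, me=None, scope="profile"):
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "code_challenge": code_challenge,
        "code_challenge_method": "S256",
        "scope": scope,
    }
    if me:
        params["me"] = me  # optional hint for multi-user servers
    return authorization_endpoint + "?" + urlencode(params)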

Note: The user's authorization endpoint might not be on the same domain as the URL they entered. That's okay! That just means they have delegated their IndieAuth handling to an external service.

Now you can redirect the user to this URL so that they can approve this request at their own IndieAuth server.

Handle the Redirect Back

You won't see the user again until after they've logged in to their website and approved the request. Eventually the IndieAuth server will redirect the user back to the redirect_uri you provided in the authorization request. The authorization server will add two query parameters to the redirect: code and state. For example:

https://example-app.com/redirect?code=af79b83817b317afc9aa
  &state=a46a0b27e67c0cb53

First you need to double check that the state value in the redirect matches the state value that you included in the initial request. This is a CSRF protection mechanism. Assuming they match, you're ready to exchange the authorization code for the user's profile information.

Exchange the Authorization Code for the User's Profile Info

Now you'll need to make a POST request to exchange the authorization code for the user's profile information. Since this code was returned in a redirect, the IndieAuth server needs an extra confirmation that it was sent back to the right thing, which is what the Code Verifier and Code Challenge are for. You'll make a POST request to the authorization endpoint with the following parameters:

grant_type=authorization_code
code= - The authorization code as received in the redirect.
client_id= - The same client_id as was used in the original request.
redirect_uri= - The same redirect_uri as was used in the original request.
code_verifier= - The original random string you generated when calculating the Code Challenge.

This is described in additional detail in the spec.
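A minimal sketch of that request in Python, assuming the requests library (illustrative only; the parameter values come from the earlier steps):

import requests

def exchange_code(authorization_endpoint, code, client_id, redirect_uri, code_verifier):
    response = requests.post(
        authorization_endpoint,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "code_verifier": code_verifier,
        },
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    # The JSON body contains "me" and, if the profile scope was granted, "profile"
    return response.json()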

Assuming everything checks out, the IndieAuth server will respond with the full URL of the user, as well as their stated profile information if requested. The response will look like the below:

{ "me": "https://indieauth.rocks/", "profile": { "name": "IndieAuth Rocks", "url": https://indieauth.rocks/" "photo": "https://indieauth.rocks/profile.jpg" } }

Wait! We're not done yet! Just because you get information in this response doesn't necessarily mean you can trust it yet! There are two important points here:

1. The information under the profile object must ALWAYS be treated as user-supplied data, not treated as canonical or authoritative in any way. This means for example not de-duping users based on the profile.url field or profile.email field.
2. If the me URL is not an exact match of the URL the user initially entered, you need to re-discover the authorization endpoint of the me URL returned in this response and make sure it matches exactly the authorization server you found in the initial discovery step.

You can perform the same discovery step as in the beginning, but this time using the me URL returned in the authorization code response. If that authorization endpoint matches the same authorization endpoint that you used when you started the flow, everything is fine and you can treat this response as valid.

This last validation step is critical, since without it, anyone could set up an authorization endpoint claiming to be anyone else's server. More details are available in the spec.
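Continuing the hypothetical sketches above, the check can be as simple as re-running discovery on the returned me URL and comparing endpoints:

def verify_me_url(me_url, original_authorization_endpoint):
    # Re-discover the authorization endpoint for the returned "me" URL and
    # confirm it matches the endpoint used at the start of the flow.
    endpoints = discover_endpoints(me_url)
    return endpoints.get("authorization_endpoint") == original_authorization_endpoint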

Now you're done!

The me URL is the value you should use as the canonical and stable identifier for this user. You can use the information in the profile object to augment this user account with information like the user's name or profile information. If the user logs in again later, look up the user from their me URL and update their name/photo/email with the most recent values in the profile object to keep their profile up to date.

Testing Your IndieAuth Client

To test your IndieAuth client, you'll need to find a handful of IndieAuth providers in the wild you can use to sign in to it. Here are some to get you started:

Micro.blog - All micro.blog accounts are IndieAuth identities as well. You can use a free account for testing.
WordPress - With the IndieAuth plugin installed, a WordPress site can be its own IndieAuth server as well.
Drupal - The IndieWeb module for Drupal will let a Drupal instance be its own IndieAuth server.
Selfauth - Selfauth is a single PHP file that acts as an IndieAuth server.

Eventually I will get around to finishing the test suite at indieauth.rocks so that you have a testing tool readily available, but in the meantime the options above should be enough to get you started.

Getting Help

If you get stuck or need help, feel free to drop by the IndieWeb chat to ask questions! Myself and many others are there all the time and happy to help troubleshoot new IndieAuth implementations!

Tuesday, 13. April 2021

John Philpin : Lifestream

“Patience is more than simply learning to wait. It is havi



“Patience is more than simply learning to wait. It is having learned what is worth your time.”

JMStorm


MyDigitalFootprint

What superpowers does a CDO need?


Below are essential characteristics any CDO needs, ideal for a job description. After the list, I want to expand on one new superpower all CDOs need, where, oddly, less data is more powerful.

Image Source: https://openpolicy.blog.gov.uk/2020/01/17/lab-long-read-human-centred-policy-blending-big-data-and-thick-data-in-national-policy/

Day 0 a CDO must:

BE a champion of fact-based, data-driven decision making. However, complex decisions based on experience, gut instinct, leadership and opinions still play a role, but most decisions can now be underpinned with a firmer foundation.
BE curious about how the business operates and makes money, and about its drivers of cost, revenue, and customer satisfaction, through the lens of data and analytical models.
BE an ambassador of change. Data uncovers assumptions that unpack previous political decisions and moves power. Data does not create change but will create conflict — how this is managed is a critical CDO skill.
BE a great storyteller.
KNOW who is the smartest data scientist in the company, where the most sophisticated models are, and understand and appreciate what those data teams do and how they do it. Managing and getting the best from these teams is a skill everyone needs.
FIGURE out and articulate the value your team can deliver to the business in the next week, month, and quarter. As the CDO, what is the value you bring to your peers and shareholders in the next 5 years?
IMPROVE decision making using data, day to day: how to reduce risk and how to inform the company on achieving and adapting the company's strategy.
BUILD relationships to source data both within your business and the wider ecosystem. This is both to determine the quality of the data and to be able to better use data and/or roll out solutions that improve quality and decision-making.
KNOW what technical questions to ask and be able to live with the complexity involved in the delivery.

Decision making is a complex affair, and as CDOs we are there to support it. Decisions are perceived to be easier when there is lots of data and the signal is big, loud and really clear. Big data has a place, but we must not forget small signals from ethnographic data sources. Leadership often does not know what to do with critical and challenging small data, especially when it challenges easy assumptions that big data justifies.

A CDO superpower is to shine a light on all data, without bias

Our superpower is to shine a light on all data, without bias, and help strategic thinkers, who often put a higher value on quantitative data. They often don't know how to handle data that isn't easily measurable or doesn't show up in existing paid-for reports. Ethnographic work has a serious perception problem in a data-driven decision world. A key role of the CDO is to uncover all data and its value, not to bias towards a bigger data set — that is just lazy. I love this image from @triciawang, where the idea of a critical small data set is represented as “thick data.” Do follow her work https://www.triciawang.com/ or that of Genevieve Bell, Kate Crawford and danah boyd (@zephoria).

Source: Nokia’s experience of ignoring small data

Note to the CEO

Digital transformation has built a dependence on data, and the bigger the data set, the more weight it is assumed to have. Often, there is a dangerous assumption made that the risk in a decision is reduced because of the data set's size. It may be true for operational issues and automated decision making but not necessarily for strategy.

As the CEO, you need to determine the half-life of the data used to justify or solidify a decision. Half-life in science is when more than 50 per cent of a substance has undergone a radical change; in business terms, this is when half the value of the data is lost, or the error has doubled. The bigger the data set, the quicker (shorter) the half-life will be. Indeed, some data's half-life is less than the time it took to collect and store it: it is big, but it really has no value. For small data sets, such as ethnographic data, the half-life can be longer than a 3-to-5-year strategic planning cycle. Since some data might be small and could be a signal to your future, supporting a CDO who puts equal weight on all data is critical to success.


John Philpin : Lifestream

Howard Dean Pushes Biden to Oppose Generic Covid-19 Vaccines


Howard Dean Pushes Biden to Oppose Generic Covid-19 Vaccines for Developing Countries.

Apparently pharma IP is protected because of the billions they invested.

Except ‘they’ didn’t invest billions … others did.


Apple Just Gave Millions Of iPad, iPhone Users A Reason To L


Apple Just Gave Millions Of iPad, iPhone Users A Reason To Leave

No they didn’t.

Forbes is the biggest click bait machine out there.


Howard Dean Pushes Biden to Oppose Generic Covid-19 Vaccines


Howard Dean Pushes Biden to Oppose Generic Covid-19 Vaccines for Developing Countries

Apparently pharma IP is protected because of the billions they invested.

Except ‘they’ didn’t invest billions … others did.

Monday, 12. April 2021

Phil Windley's Technometria

The Politics of Vaccination Passports


Summary: The societal ramifications of Covid-19 passports are not easy to parse. Ultimately, I believe they are inevitable, so the questions for us are when, where, and how they should be used.

On December 2, 1942, Enrico Fermi and his team at the University of Chicago initiated the first human-made, self-sustaining nuclear chain reaction in history beneath the viewing stands of Stagg Field. Once humans knew how nuclear chain reactions work and how to initiate them, an atomic bomb was inevitable. Someone would build one.

What was not inevitable was when, where, and how nuclear weapons would be used. Global geopolitical events of the last half of the 20th century and many of the international questions of our day deal with the when, where, and how of that particular technology.

A similar, and perhaps just as impactful, discussion is happening now around technologies like artificial intelligence, surveillance, and digital identity. I’d like to focus on just one small facet of the digital identity debate: vaccination passports.

In Vaccination Passports, Devon Loffreto has strong words about the effort to create vaccination passports, writing:

The vaccination passport represents the introduction of the CCP social credit system to America, transmuting people into sub-human data points lasting lifetimes. From Vaccination Passports
Referenced 2021-04-12T11:13:58-0600

Devon’s larger point is that once we get used to having to present a vaccination passport to travel, for example, it could quickly spread. Presenting an ID could become the default with bars, restaurants, churches, stores, and every other public place saying “papers, please!” before allowing entry.

This is a stark contrast to how people have traditionally gone about their lives. Asking for ID is, by social convention and practicality, limited mostly to places where it’s required by law or regulation. We expect to get carded when we buy cigarettes, but not milk. A vaccination passport could change all that and that’s Devon’s point.

Devon specifically calls out the Good Health Pass collaborative as "supporting the administration of people as cattle, as fearful beings 'trusting' their leaders with their compliance."

For their part, participants of the Good Health Pass collaborative argue that they are working to create a “safe path to restore international travel and restart the global economy.” Their principles declare that they are building health-pass systems that are privacy protecting, user-controlled, interoperable, and widely accepted.

I’m sympathetic to Devon’s argument. Once such a passport is in place for travel, there’s nothing stopping it from being used everywhere, moving society from free and open to more regulated and closed. Nothing that is, unless we put something in place.

Like the direct line from Fermi’s atomic pile to an atomic bomb, the path from nearly ubiquitous smartphone use to some kind of digital vaccination passport is likely inevitable. The question for us isn’t whether or not it will exist, but where, how, and when passports will be used.

For example, I’d prefer a vaccination passport that is built according to principles of the Good Health Pass collaborative than, say, one built by Facebook, Google, Apple, or Amazon. Social convention, and regulation where necessary, can limit where such a passport is used. It’s an imperfect system, but social systems are.

As I said, I’m sympathetic to Devon’s arguments. The sheer ease of presenting digital credentials removes some of the practicality barrier that paper IDs naturally have. Consequently, digital IDs are likely to be used more often than paper. I don’t want to live in a society where I’m carded at every turn—whether for proof of vaccination or anything else. But I’m also persuaded that organizations like the Good Health Pass collaborative aren’t the bad guys. They’re just folks who see the inevitability of a vaccination credential and are determined to at least see that it’s done right, in ways that respect individual choice and personal privacy as much as possible.

The societal questions remain regardless.

Photo Credit: COVID-19 Vaccination record card from Jernej Furman (CC BY 2.0)

Tags: verifiable+credentials identity covid


Simon Willison

Porting VaccinateCA to Django


As I mentioned back in February, I've been working with the VaccinateCA project to try to bring the pandemic to an end a little earlier by helping gather as accurate a model as possible of where the Covid vaccine is available in California and how people can get it.

The key activity at VaccinateCA is calling places to check on their availability and eligibility criteria. Up until last night this was powered by a heavily customized Airtable instance, accompanied by a custom JavaScript app for the callers that communicated with the Airtable API via some Netlify functions.

Today, the flow is powered by a new custom Django backend, running on top of PostgreSQL.

The thing you should never do

Here's one that took me fifteen years to learn: "let's build a new thing and replace this" is hideously dangerous: 90% of the time you won't fully replace the old thing, and now you have two problems!

- Simon Willison (@simonw) June 29, 2019

Replacing an existing system with a from-scratch rewrite is risky. Replacing a system that is built on something as flexible as Airtable that is evolving on a daily basis is positively terrifying!

Airtable served us extremely well, but unfortunately there are hard limits to the number of rows Airtable can handle and we've already bounced up against them and had to archive some of our data. To keep scaling the organization we needed to migrate away.

We needed to build a matching relational database with a comprehensive, permission-controlled interface for editing it, plus APIs to drive our website and application. And we needed to do it using the most boring technology possible, so we could focus on solving problems directly rather than researching anything new.

It will never cease to surprise me that Django has attained boring technology status! VaccineCA sits firmly in Django's sweet-spot. So we used that to build our replacement.

The new Django-based system is called VIAL, for "Vaccine Information Archive and Library" - a neat Jesse Vincent bacronym.

We switched things over to VIAL last night, but we still have activity in Airtable as well. I expect we'll keep using Airtable for the lifetime of the organization - there are plenty of ad-hoc data projects for which it's a perfect fit.

The most important thing here is to have a trusted single point of truth for any piece of information. I'm not quite ready to declare victory on that point just yet, but hopefully we'll get there once things settle down over the next few days.

Data synchronization patterns

The first challenge, before even writing any code, was how to get stuff out of Airtable. I built a tool for this a while ago called airtable-export, and it turned out the VaccinateCA team were using it already before I joined!

airtable-export was already running several times an hour, backing up the data in JSON format to a GitHub repository (a form of Git scraping). This gave us a detailed history of changes to the Airtable data, which occasionally proved extremely useful for answering questions about when a specific record was changed or deleted.

Having the data in a GitHub repository was also useful because it gave us somewhere to pull data from that wasn't governed by Airtable's rate limits.

I iterated through a number of different approaches for writing importers for the data.

Each Airtable table ended up as a single JSON file in our GitHub repository, containing an array of objects - those files got pretty big, topping out at about 80MB.

I started out with Django management commands, which could be passed a file or a URL. A neat thing about using GitHub for this is that you can use the "raw data" link to obtain a URL with a short-lived token, which grants access to that file. So I could create a short-term URL and paste it directly to my import tool.

I don't have a good pattern for running Django management commands on Google Cloud Run, so I started moving to API-based import scripts instead.

The pattern that ended up working best was to provide a /api/importRecords API endpoint which accepts a JSON array of items.

The API expects the input to have a unique primary key in each record - airtable_id in our case. It then uses Django's update_or_create() ORM method to create new records if they were missing, and update existing records otherwise.
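As a rough sketch of that pattern (hypothetical model and field names, omitting authentication and validation; this is not VIAL's actual code):

import json

from django.http import JsonResponse

from .models import Report  # assumed model with airtable_id and import_json fields

def import_records(request):
    records = json.loads(request.body)
    created = updated = 0
    for record in records:
        _, was_created = Report.objects.update_or_create(
            airtable_id=record["airtable_id"],
            defaults={
                "name": record.get("Name", ""),
                "import_json": record,  # keep the original Airtable JSON around
            },
        )
        if was_created:
            created += 1
        else:
            updated += 1
    return JsonResponse({"created": created, "updated": updated})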

One remaining challenge: posting 80MB of JSON to an API in one go would likely run into resource limits. I needed a way to break that input up into smaller batches.

I ended up building a new tool for this called json-post. It has an extremely specific use-case: it's for when you want to POST a big JSON array to an API endpoint but you want to first break it up into batches!

Here's how to break up the JSON in Reports.json into 50 item arrays and send them to that API as separate POSTs:

json-post Reports.json \
  "https://example.com/api/importReports" \
  --batch-size 50

Here are some more complex options. In this example we need to pass an Authorization: Bearer XXXtokenXXX API key header, run the array in reverse, record our progress (the JSON responses from the API as newline-delimited JSON) to a log file, set a longer HTTP read timeout and filter for just specific items:

% json-post Reports.json \
  "https://example.com/api/importReports" \
  -h Authorization 'Bearer XXXtokenXXX' \
  --batch-size 50 \
  --reverse \
  --log /tmp/progress.txt \
  --http-read-timeout 20 \
  --filter 'item.get("is_soft_deleted")'

The --filter option proved particularly useful. As we kicked the tires on VIAL we would spot new bugs - things like the import script failing to correctly record the is_soft_deleted field we were using in Airtable. Being able to filter that input file with a command-line flag meant we could easily re-run the import just for a subset of reports that were affected by a particular bug.

--filter takes a Python expression that gets compiled into a function and passed item as the current item in the list. I borrowed the pattern from my sqlite-transform tool.
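The rough idea, sketched in Python (this is an illustration of the pattern, not json-post's actual implementation):

def build_filter(expression):
    # Compile a user-supplied expression once, then evaluate it per item,
    # with the current item bound to the name "item".
    code = compile(expression, "<filter>", "eval")
    return lambda item: eval(code, {}, {"item": item})

keep = build_filter('item.get("is_soft_deleted")')
rows = [{"id": 1, "is_soft_deleted": True}, {"id": 2}]
print([row for row in rows if keep(row)])  # [{'id': 1, 'is_soft_deleted': True}]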

The value of API logs

VaccineCA's JavaScript caller application used to send data to Airtable via a Netlify function, which allowed additional authentication, built using Auth0, to be added.

Back in February, the team had the bright idea to log the API traffic to that function to a separate base in Airtable - including full request and response bodies.

This proved invaluable for debugging. It also meant that when I started building VIAL's alternative implementation of the "submit a call report" API I could replay historic API traffic that had been recorded in that table, giving me a powerful way to exercise the new API with real-world traffic.

This meant that when we turned on VIAL we could switch our existing JavaScript SPA over to talking to it using a fully tested clone of the existing Airtable-backed API.

VIAL implements this logging pattern again, this time using Django and PostgreSQL.

Given that the writable APIs will receive in the low thousands of requests a day, keeping them in a database table works great. The table has grown to 90MB so far. I'm hoping that the pandemic will be over before we have to worry about logging capacity!

We're using PostgreSQL jsonb columns to store the incoming and returned JSON, via Django's JSONField. This means we can do in-depth API analysis using PostgreSQL's JSON SQL functions! Being able to examine returned JSON error messages or aggregate across incoming request bodies helped enormously when debugging problems with the API import scripts.
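For example, queries along these lines are possible against a hypothetical log model with JSONField columns (illustrative names, not VIAL's schema):

from django.db.models import Count

from .models import ApiLog  # assumed: request_json and response_json are JSONFields

# All log rows whose response body contained an "error" key
errors = ApiLog.objects.filter(response_json__has_key="error")

# Count incoming requests grouped by a key inside the request body
by_source = (
    ApiLog.objects
    .values("request_json__source")
    .annotate(n=Count("id"))
    .order_by("-n")
)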

Storing the original JSON

Today, almost all of the data stored in VIAL originated in Airtable. One trick that has really helped build the system is that each of the tables that might contain imported data has both an airtable_id nullable column and an import_json JSON field.

Any time we import a record from Airtable, we record both the ID and the full, original Airtable JSON that we used for the import.

This is another powerful tool for debugging: we can view the original Airtable JSON directly in the Django admin interface for a record, and confirm that it matches the ORM fields that we set from it.

I came up with a simple pattern for Pretty-printing all read-only JSON in the Django admin that helps with this too.

Staying as flexible as possible

The thing that worried me most about replacing Airtable with Django was Airtable's incredible flexibility. In the organization's short life it has already solved so many problems by adding new columns in Airtable, or building new views.

Is it possible to switch to custom software without losing that huge cultural advantage?

This is the same reason it's so hard for custom software to compete with spreadsheets.

We've only just made the switch, so we won't know for a while how well we've done at handling this. I have a few mechanisms in place that I'm hoping will help.

The first is django-sql-dashboard. I wrote about this project in previous weeknotes here and here - the goal is to bring some of the ideas from Datasette into the Django/PostgreSQL world, by providing a read-only mechanism for constructing SQL queries, bookmarking and saving the results and outputting simple SQL-driven visualizations.

We have a lot of SQL knowledge at VaccinateCA, so my hope is that people with SQL will be able to solve their own problems, and people who don't know SQL yet will have no trouble finding someone who can help them.

In the boring technology model of things, django-sql-dashboard counts as the main innovation token I'm spending for this project. I'm optimistic that it will pay off.

I'm also leaning heavily on Django's migration system, with the aim of making database migrations common and boring, rather than their usual default of being rare and exciting. We're up to 77 migrations already, in a codebase that is just over two months old!

I think a culture that evolves the database schema quickly and with as little drama as possible is crucial to maintaining the agility that this kind of organization needs.

Aside from the Django Admin providing the editing interface, everything that comes into and goes out of VIAL happens through APIs. These are fully documented: I want people to be able to build against the APIs independently, especially for things like data import.

After seeing significant success with PostgreSQL JSON already, I'm considering using it to add even more API-driven flexibility to VIAL in the future. Allowing our client developers to start collecting a new piece of data from our volunteers in an existing JSON field, then migrating that into a separate column once it has proven its value, is very tempting indeed.

Open source tools we are using

An incomplete list of open source packages we are using for VIAL so far:

pydantic - as a validation layer for some of the API endpoints
social-auth-app-django - to integrate with Auth0
django-cors-headers
python-jose - for JWTs, which were already in use by our Airtable caller app
django-reversion and django-reversion-compare - to provide a diffable, revertable history of some of our core models
django-admin-tools - which adds a handy customizable menu to the admin, good for linking to additional custom tools
django-migration-linter - to help avoid accidentally shipping migrations that could cause downtime during a deploy
pytest-django, time-machine and pytest-httpx - for our unit tests
sentry-sdk, honeycomb-beeline and prometheus-client - for error logging and observability

Want to help out?

VaccinateCA is hiring! It's an interesting gig, because the ultimate goal is to end the pandemic and put this non-profit permanently out of business. So if you want to help end things faster, get in touch.

VaccinateCA is hiring a handful of engineers to help scale our data ingestion and display by more than an order of magnitude.

If you'd like to register interest: https://t.co/BSvi40sW1M

Generalists welcome. Three subprojects; Python backend/pedestrian front-end JS.

- Patrick McKenzie (@patio11) April 7, 2021

TIL this week

Language-specific indentation settings in VS Code
Efficient bulk deletions in Django
Using unnest() to use a comma-separated string as the input to an IN query

Releases this week

json-post: 0.2 - (3 total releases) - 2021-04-11
Tool for posting JSON to an API, broken into pages

airtable-export: 0.7.1 - (10 total releases) - 2021-04-09
Export Airtable data to YAML, JSON or SQLite files on disk

django-sql-dashboard: 0.6a0 - (13 total releases) - 2021-04-09
Django app for building dashboards using raw SQL queries

Damien Bod

Securing Blazor Web assembly using Cookies and Auth0


The article shows how an ASP.NET Core Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure cookie and anti-forgery tokens to protect against CSRF. This architecture is also known as the Backends for Frontends (BFF) Pattern.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing Blazor Web assembly using Cookies and Azure AD Securing Blazor Web assembly using Cookies and Auth0

The application was built as described in the previous blog in this series. Please refer to that blog for implementation details about the WASM application, user session and anti-forgery tokens. Setting up the Auth0 authentication and the differences are described in this blog.

An Auth0 account is required and a Regular Web Application was setup for this. This is not an SPA application and must always be deployed with a backend which can keep a secret. The WASM client can only use the APIs on the same domain and uses cookies. All application authentication is implemented in the trusted backend and the secure data is encrypted in the cookie.

The Microsoft.AspNetCore.Authentication.OpenIdConnect Nuget package is used to add the authentication to the ASP.NET Core application. User secrets are used for the configuration values which contain the Auth0 sensitive data.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <WebProject_DirectoryAccessLevelKey>1</WebProject_DirectoryAccessLevelKey>
    <UserSecretsId>de0b7f31-65d4-46d6-8382-30c94073cf4a</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\Client\BlazorAuth0Bff.Client.csproj" />
    <ProjectReference Include="..\Shared\BlazorAuth0Bff.Shared.csproj" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.Server" Version="5.0.5" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="5.0.5" NoWarn="NU1605" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.OpenIdConnect" Version="5.0.5" NoWarn="NU1605" />
    <PackageReference Include="IdentityModel" Version="5.1.0" />
    <PackageReference Include="IdentityModel.AspNetCore" Version="3.0.0" />
  </ItemGroup>

</Project>

The ConfigureServices method in the Startup class of the ASP.NET Core Blazor server application is used to add the authentication. The Open ID Connect code flow with PKCE and a client secret is used for the default challenge and a cookie is used to persist the tokens if authenticated. The Blazor client WASM uses the cookie to access the API.

The Open ID Connect client is configured to match the Auth0 settings. A client secret is required and used to authenticate the application. The PKCE option is set explicitly to use PKCE with the client configuration. The required scopes are set so that the profile and the email are returned. These are OIDC standard scopes. The user profile API is used to return the profile data and so keep the id_token small. The tokens are persisted. If successful, the data is persisted to an identity cookie. The logout client is configured as documented by Auth0 in its example.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-BlazorServer";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = $"https://{Configuration["Auth0:Domain"]}";
    options.ClientId = Configuration["Auth0:ClientId"];
    options.ClientSecret = Configuration["Auth0:ClientSecret"];
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.CallbackPath = new PathString("/signin-oidc");
    options.ClaimsIssuer = "Auth0";
    options.SaveTokens = true;
    options.UsePkce = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters.NameClaimType = "name";
    options.Events = new OpenIdConnectEvents
    {
        // handle the logout redirection
        OnRedirectToIdentityProviderForSignOut = (context) =>
        {
            var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}";

            var postLogoutUri = context.Properties.RedirectUri;
            if (!string.IsNullOrEmpty(postLogoutUri))
            {
                if (postLogoutUri.StartsWith("/"))
                {
                    // transform to absolute
                    var request = context.Request;
                    postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri;
                }
                logoutUri += $"&returnTo={ Uri.EscapeDataString(postLogoutUri)}";
            }

            context.Response.Redirect(logoutUri);
            context.HandleResponse();

            return Task.CompletedTask;
        }
    };
});

The Configure method is implemented to require authentication. The UseAuthentication extension method is required. Our endpoints are added like in the previous blog.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // IdentityModelEventSource.ShowPII = true;
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

The Auth0 configuration can be downloaded in the sample application, or you can configure this directly in the Auth0 UI and copy the values. Three properties are required. I added these to the user secrets during application development. If I deployed this to Azure, I would add these to an Azure Key Vault and could then use managed identities to access the secrets.

"Auth0": { "Domain": "your-domain-in-auth0", "ClientId": "--in-secrets--", "ClientSecret": "--in-secrets--" }

Now everything runs and you can use ASP.NET Core Blazor BFF with Auth0. We don’t need any access tokens in the browser. This was really simple to configure and only ASP.NET Core standard Nuget packages are used. Security best practices are supported by Auth0 and it is really easy to set up. In production I would force MFA and FIDO2 if possible.

Links

Securing Blazor Web assembly using Cookies and Azure AD

https://auth0.com/

https://docs.microsoft.com/en-us/aspnet/core/blazor/components/prerendering-and-integration?view=aspnetcore-5.0&pivots=webassembly#configuration

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/additional-scenarios

Sunday, 11. April 2021

Hyperonomy Digital Identity Lab

Trusted Digital Web Glossary (TDW Glossary): All-In View (latest version)


TDW Glossary: All-In View (latest version)

Simon Willison

Quoting Jacques Chester


In general, relying only on natural keys is a nightmare. Double nightmare if it's PII. Natural keys only work if you are flawlessly omniscient about the domain. And you aren't.

Jacques Chester


Virtual Democracy

On Science Preprints: academic publishing takes a quantum leap into the present

Academic journals are becoming the vacuum tubes of the Academy 2.0 enterprise; they are already described and defined more by their limitations than by their advantages. In their early decades, they served us well, until they didn’t. After the transition to an academy-internal publication economy, powered by ePrint services hosted across the planet, journals will not be missed. That individual academic libraries should need to continue to pony up for thousands of journal subscriptions for decades to come is now an idea only in the Xeroxed business models of for-profit publishers. Everyone else is looking for a way out; and the internet awaits.

John Philpin : Lifestream

Filed in things you don’t often over hear in ‘normal’ life


Filed in things you don’t often overhear in ‘normal’ life … this one between two doctors strolling along a hospital corridor.

”Yeah … Another transplant redo”

Saturday, 10. April 2021

Bill Wendel's Real Estate Cafe

Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes


Anyone reading the Boston Globe’s Spring House Hunt articles this week online or in print this weekend?  To put them into context, sharing my comment…

The post Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes first appeared on Real Estate Cafe.


Ben Werdmüller

I just want a computer that works, man


I have a persistent, infuriating problem with typing.

The only laptop I own with a functional keyboard is my iPad Pro - the one device in this form factor that doesn’t actually ship with a keyboard. Even my work laptop suffers from the notorious broken butterfly keyboard problem.

Keys stick. They misfire. They double-type. Hitting the space bar once results in two spaces, which my computer turns into a period. Other keys have lost sensitivity. It's inconsistent.

The thing is, it creeps up on you. When your keyboard is iffy, you’re less likely to open your laptop to hack on something, or to use it to write. When you’re writing anything - blog posts, fiction, source code, documentation, even emails and Slack messages - keys that double-press or don’t fire at all can be catastrophic. It’s led to me writing less, coding less, and getting far less use out of my computers. Which, given what I do for a living, is not great. And considering what I paid for my Macs, it’s outrageous.

My laptop needs to feel like an extension of me. My outboard brain; my reliable toolkit. It can’t fail.

My personal laptop is five years old, which is beyond my usual threshold for upgrading, so I don’t feel horrible about replacing it with a newer model. But as much as I’d love to acquire a new M1 device, I’m not certain I want to give that money to Apple. I like Macs, but I feel burned.

So what might it look like to jump ship and find something else?

I want: a keyboard that works; excellent battery life; speed; a relatively lightweight form factor; privacy.

I’d also like: a low environmental footprint; repairable hardware; openness; a chassis that will last me at least 4-5 years; a default operating system that isn’t Windows.

Is that even a product that’s on the market?

My living depends on computers, so I’m willing to pay a premium for something that checks all the boxes. But in a world / industry where the default is Mac, I don’t even know where to begin.

What’s worked for you? Does this exist? Or should I just sit tight and wait for the new M1 Macs and be done with it?


A while back, Ma said she wanted to explore Monterey. So for her birthday, I booked a house by the water. The journey was hard for her, but she’s glad she’s here, and so am I.


John Philpin : Lifestream

🎶The Meaning of ‘The Carpet Crawlers Lyrics’ best comment


🎶The Meaning of ‘The Carpet Crawlers Lyrics’

best comment

Anyone who’s ever owned the album The Lamb Lies Down On Broadway knows the story of Rael which Peter Gabriel wrote on the inner sleeve. While the story and lyrics are somewhat surreal, they describe the adventures of a character named Rael. The carpet crawlers was a group of monks that Rael encounters in his abstract journey through NYC. Yall need to ease up on the weed and learn to READ, you are seriously trippin…


Ben Werdmüller

Received an extraordinarily shady email asking me ...


Received an extraordinarily shady email asking me to vote to remove my name from the letter calling for the removal of RMS from the FSF board.

No chance.

RMS needs to be removed, and the open letter was good and proper.

Friday, 09. April 2021

John Philpin : Lifestream

”The political arm of House Republicans is deploying a pre


”The political arm of House Republicans is deploying a prechecked box to enroll donors into repeating monthly donations—and using ominous language to warn them of the consequences if they opt out: “If you UNCHECK this box, we will have to tell Trump you’re a DEFECTOR.”


Ben Werdmüller

"[Amazon's] decisive victory deals a crushing blow ...

"[Amazon's] decisive victory deals a crushing blow to organized labor, which had hoped the time was ripe to start making inroads." Disappointing: https://www.nytimes.com/2021/04/09/technology/amazon-defeats-union.html

"[Amazon's] decisive victory deals a crushing blow to organized labor, which had hoped the time was ripe to start making inroads." Disappointing: https://www.nytimes.com/2021/04/09/technology/amazon-defeats-union.html


Good to be reminded of the one ...


Good to be reminded of the one situation where I can accurately be described as a Republican.

Also, that I need to catch up with The Crown.


Turning off syndication to Twitter, at least ...


Turning off syndication to Twitter, at least for today. If you're reading this, it's via my indieweb feed on my Known site, or on Micro.blog.


John Philpin : Lifestream

ATS systems should just be banned.


They are singularly the most inhuman invention you might imagine. But they aren’t half as bad as humans.

I went through a process about three months ago. You have resumes and cover letters stored on your hard drive. You even have a linkedIN profile that contains a wealth of information. But none of that is good enough. No. You need to fill in the forms that you are presented with and then add pdfs of the docs to the application.

You just know that the form filling is the filter mechanism that the ATS will use to decide how it is all prioritised. I got it all in within the time specified and then connected with the current incumbent AND the hiring manager (knowing them both) to let them know my hat was in the ring. A little later - the application deadline was extended. No reason provided - but I was told - even though my application was in.

I could only think of two reasons.

1] There weren’t enough candidates - it’s a pretty niche role.
2] The candidate they wanted hadn’t applied - so they told him to - extended the deadline and waited.

Other than that single communication …

Nothing … nothing … nothing … except alerts from the ATS saying that I won’t hear more from it - since the application was ‘now in the hands of the recruiting organisation’.

Nothing … nothing … nothing … and then

In their occasional newsletter I read that the position has been filled … by someone I also happen to know. Good chap actually.

Still heard nothing from the ‘recruiting organisation’. At minimum I might expect a pro forma - ‘thanks for applying’ at the start - and then a ‘sorry you didn’t get it’ pro forma at the end.

At best, I look to how I used to operate.

I would meet everyone on candidate short lists. I would insist that my recruiting managers connect with every candidate personally - not the HR wonk - the line manager.

Maybe I am odd. I am certainly an exception, because this story is not new. It happens every single day to people all over the planet.

When you read how tech has lost its humanity - people blame the machines.

I don’t. I blame the people.

The good news - if this is how they operate - glad I am not part of it.



US: Which vaccine is best? The one you can get first, experts say

Reminds me of something I was told a few years ago when there were debates about which camera was the best.

”The one you have with you.”

Chase Jarvis

Wednesday, 07. April 2021

Ben Werdmüller

I bought an ebook from Gumroad yesterday ...


I bought an ebook from Gumroad yesterday after seeing a recommendation on Twitter.

It was garbage. Short and written badly. I would have been embarrassed to publish it and charge what the author did.

Who’s doing this *well*? Which indie ebooks are high quality?

Tuesday, 06. April 2021

Ben Werdmüller

Every media organization needs to own its ...


Every media organization needs to own its own website, distribution, and revenue model.

Every independent journalist needs to own their own website, distribution, and revenue model.

Use the platforms - but do it on your terms.

Don't let them own you.


The Dingle Group

SSI in IoT, The SOFIE Project


Decentralized Identifiers and Verifiable Credentials are starting to make their way into the world of IoT. There are many ongoing research projects funded by EU and private sector organizations as well as an increasing number of DLT based IoT projects that are including DIDs and VCs as a core component of their solutions.

For the 22nd Vienna Digital Identity Meetup* we hosted three of the lead researchers from the EU H2020 funded The SOFIE Project. The SOFIE Project wrapped up at the end of last year; a key part of this research focused on the use of SSI concepts in three IoT sectors (energy, supply chain, and mixed reality gaming), targeting the integration of SSI without requiring changes to the existing IoT systems.

Our three presenters were from two different European research universities, Aalto University (Dr. Dmitrij Lagutin and Dr. Yki Kortesniemi) and Athens University of Economics and Business (Dr. Nikos Fotiou).

The presentation covered four areas of interest for SSI in the IoT sector:

DIDs and VCs on constrained devices

Access control using the W3C Web of Things (WoT) Thing Description

did:self method

Ephemeral DIDs and Ring signatures

Each of these research areas is integrated into real-world use cases and connected to the sectors that were part of the SOFIE project’s mandate.

(Note: There were some ‘technical issues’ at the start of the event and the introduction part of the presentation has been truncated, but the good news is that all of our presenters’ content is there.)

To listen to a recording of the event please check out the link: https://vimeo.com/530442817

Time markers:

0:00:00 - SOFIE Project Introduction, (Dr. Dmitrij Lagutin)

0:02:33 - DIDs and VCs on constrained devices

0:14:00 - Access Control for WoT using VCs (Dr. Nikos Fotiou)

0:33:23 - did:self method

0:46:00 - Ephemeral DIDs and Ring Signatures (Dr. Yki Kortesniemi)

1:07:29 - Wrap-up & Upcoming Events


Resources

The SOFIE Project Slide deck: download

And as a reminder, we continue to have online only events.

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology professionals on the new opportunities that arise with a high-assurance digital identity created by reduced risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Ben Werdmüller

I really want a laptop with a ...


I really want a laptop with a working keyboard. It's getting nuts. Should I:

1. Wait for the new M1 MBPs later this year
2. Get an existing M1 Mac
3. Go back to Windows with a Surface / Lenovo
4. Get a rugged Linux laptop with great battery life
5. Something else?


John Philpin : Lifestream


Uber may stop letting drivers see destinations and name prices

Introduced to prove their drivers are ‘Independent’. Now that they have won the argument (in California at least) … they are abandoning the program.

#Winning


Ben Werdmüller

On the eve of immunity, 10 reflections


1: I get my first vaccine jab tomorrow. Pfizer. I’m excited: by my reckoning that makes me about five weeks out from being immune. I’m privileged in that the pandemic has been inconvenient at most, but I miss hanging out with my friends and extended family. I can see the light at the end of the tunnel. It feels good.

2: The Moderna and Pfizer vaccines are the first production uses of mRNA techniques for vaccination. Although they received emergency authorization from the FDA, the research began 30 years ago; already it looks like an HIV vaccine based on similar technology looks promising. The AstraZeneca and Johnson & Johnson vaccines in particular were based on techniques originally developed to treat HIV. Money is now flowing into mRNA research.

3: I have complicated feelings about vaccine passports. Dr. Fauci says the US federal government won’t introduce them.

On one hand, I think this is right. An internal COVID passport system is effectively akin to an ID card, which can have real knock-on effects on civil liberties. Here’s a thought experiment: what happens when it becomes easier to get a vaccine passport in one location than another? What do we know about provision of services in predominantly white neighborhoods vs in predominantly black neighborhoods?

I see a vaccine passport to travel between countries as less problematic; those already exist. But internal checkpoints to travel or use services are not great and can open the door to other forms of required ID that can perpetuate inequities.

On the other, it seems reasonable that private businesses will start requiring proof of vaccination to enter. You’ll need to show you’ve been vaccinated to go to bars, sports games, schools, and so on. Given the inevitability of this private ecosystem, these proofs of vaccination will need to be regulated. So should we get ahead of them? How can we solve those issues of inequity and avoid mass surveillance while also keeping everyone safe?

Is it worse than a driving license? Does the analogy fit? It’s complicated.

4: At least 40,000 children in the US alone have lost a parent to COVID-19. The loss seems unfathomable.

5: It’s been weird watching people I grew up with turn into anti-mask COVID-deniers. I’m not sure what happened, but it’s surreal to find people I consider friends sharing FUD posts from the executive editor of Breitbart UK (also a climate denier!) while opining, “why is nobody thinking critically about this?”

Some of these same friends were also “jet fuel doesn’t melt steel beams” people, and in that light, I suppose the signs were always there. But I find it confronting to say the least to see this happen to people I trusted. I don’t know what happens to those friendships - and I’m fully aware that this post can’t exactly help - but it feels like disinformation that should have been squarely in the realm of the “out there” has become invasive.

It’s a smaller loss than many have endured, but I feel it, and I’m mystified by it.

6: All my immediate loved ones will have been vaccinated by Wednesday. This gives me a lot of peace.

7: My mother continues to decline, completely independently to the pandemic. It’s been a silver lining of this whole situation that I’ve been able to spend time with my parents and support them while this has been happening. She’s nine years out from her double lung transplant and continues to fight hard; an inspiration to all of us in both spirit and action. She resents her decline and dearly wants to be healthy. I wish I could wave a magic wand and make it so.

Pulmonary fibrosis treatment techniques may improve outcomes in patients with long covid damage to their lungs. It’s possible that mRNA techniques may also improve outcomes in patients with dyskeratosis congenita by correcting telomerase production. It’s all connected, but it’s going to be too late for my mother, my aunt, my grandmother, and my cousin.

8: Poorer countries may not be vaccinated until 2024. As a direct result, the pandemic could last for half a decade. One of the reasons Oxford University chose to work with AstraZeneca instead of Merck was fear that working with a US company would prevent the vaccine from being equitably distributed.

How can we help with this?

I don’t have a satisfying answer, but I appreciate Janet Yellen’s calls for increased aid. I feel like the US should contribute more directly, not least because of its vaccine hoarding. We can and should do better. (That doesn’t mean we will.)

9: Locking down was important. According to the LSE, the stronger government interventions at an early stage were, the more effective they proved to be in slowing down or reversing the growth rate of deaths. We were repeatedly told by skeptics that we’d lose lives to suicide due to isolation; as it turns out, loss of life to suicide in 2020 was lower than the preceding three years. Lockdown was a public good that saved lives.

I note that conservatives who oppose lockdown are less vocal about blanket infringements on the right to protest. I’m much more concerned about these: in particular, 2020 saw important protests for racial equality that should not be impeded. Black Lives Matter, and the pandemic should not be used as an excuse to squash this movement.

10: I’ve said this before, but I hope we don’t “go back to normal” after pandemic. We need to move forward. So much change has been shown to be possible, from workplaces to societal inclusion to scientific endeavor. We’ve shown that we can come together as communities rather than isolated individuals. As the light at the end of the tunnel gets brighter, I see so many possibilities for growth. Let’s embrace them.


John Philpin : Lifestream


Clubhouse Is Partnering With Stripe To Offer Direct Payments, and Creators Will Get 100% of the Profits

Profits? Or Revenue?

Also, Stephen Hackett:

”Unwritten: Apple will take nothing. I wonder how this will play out.”

Monday, 05. April 2021

John Philpin : Lifestream


Noam makes some very good points

… and the US gets so pissed off when other countries try to shape the US narrative.


Simon Willison


Behind GitHub’s new authentication token formats

This is a really smart design. GitHub's new tokens use a type prefix of "ghp_" or "gho_" or a few others depending on the type of token, to help support mechanisms that scan for accidental token publication. A further twist is that the last six characters of the tokens are a checksum, which means token scanners can reliably distinguish a real token from a coincidental string without needing to check back with the GitHub database. "One other neat thing about _ is it will reliably select the whole token when you double click on it" - what a useful detail!

Via Hacker News
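To make the scanning idea concrete, here is a minimal Python sketch of the approach described above: flag candidate strings by their type prefix, then use a checksum over the random portion to rule out coincidental matches. The prefix set, base62 alphabet and CRC32 choice below are illustrative assumptions for the sketch, not GitHub's actual implementation.

import re
import string
from zlib import crc32

# Hypothetical scanner sketch; GitHub's real checksum details may differ.
TOKEN_RE = re.compile(r"(gh[pousr])_([0-9A-Za-z]+)([0-9A-Za-z]{6})")

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # base62

def base62(n, width=6):
    chars = []
    while n:
        n, r = divmod(n, 62)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars)).rjust(width, "0")

def looks_like_token(candidate):
    match = TOKEN_RE.fullmatch(candidate)
    if not match:
        return False
    prefix, body, checksum = match.groups()
    # Assumed scheme: CRC32 over the random body, base62-encoded into the last 6 characters.
    return base62(crc32(body.encode())) == checksum

A scanner built this way can discard most false positives locally and only report strings whose checksum lines up, without calling back to the token database.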


Damien Bod

Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic


This article shows how verifiable credentials can be created in ASP.NET Core for decentralized identities using the Trinsic platform which is a Self-sovereign identity implementation with APIs to integrate. The verifiable credentials can be downloaded to your digital wallet if you have access and can be used in separate application which understands the Trinsic APIs.

Code: https://github.com/swiss-ssi-group/TrinsicAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI
Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic
Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic

Setup

We want to implement the flow shown in the following figure. The National Driving License application is responsible for issuing driver licenses and administering licenses for users who have authenticated correctly. The user can see his or her driver license and a verifiable credential displayed as a QR code which can be used to add the credential to a digital wallet. When the application generates the credential, it adds the credential DID to the blockchain ledger with the cryptographic proof of the issuer and the document. When you scan the QR code, the DID will get validated and will be added to the wallet along with the requested claims. The digital wallet must be able to find the DID and the schema on the correct network and needs to search for the ledger in the correct blockchain. A good wallet should take care of this for you. The schema is required so that the data in the DID document can be understood.

Trinsic Setup

Trinsic is used to connect to the blockchain and create the DIDs and credentials in this example. Trinsic provides good getting-started docs.

In Trinsic, you need to create an organisation for the Issuer application.

Click on the details of the organisation to get the API key. This is required for the application. This API key cannot be replaced or updated, so if you make a mistake and lose it, or commit it to source code, you would have to create a new organisation. It is also important to note the network. This is where you can find the DID for the credentials produced by this issuer.

To issue credentials, you need to create a template or schema with the claims which are issued in the credential using the template. The issuer application provides the values for the claims.

Implementing the ASP.NET Core Issuer

The verifiable credentials issuer is implemented in an ASP.NET Core application using Razor Pages and Identity. This application needs to authenticate the users before issuing a verifiable credential for them. FIDO2 with the correct authentication flow would be a good choice, as this would protect against phishing. You could use credentials as well, if the users of the application had a trusted ID; you would still have to protect against phishing. The quality of the credentials issued depends on the security of the issuing application. If the application has weak user authentication, then the credentials cannot be trusted. For banks, government IDs or driving licenses, a high level of security is required. OpenID Connect FAPI with FIDO2 would make a good solution to authenticate the user, or a user with a trusted government-issued credential together with FIDO2 would also be good.

The ASP.NET Core application initializes the services and adds the Trinsic client using the API key from the organisation which issues the credentials. The Trinsic.ServiceClients NuGet package is used for the Trinsic integration. ASP.NET Core Identity is used to add and remove users and to add driving licenses for the users in the administration part of the application. MFA should be set up, but as this is a demo, I have not forced this.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<TrinsicCredentialsService>();
    services.AddScoped<DriverLicenseService>();

    services.AddTrinsicClient(options =>
    {
        // For CredentialsClient and WalletClient
        // API key of National Driving License (Organisation which does the verification)
        options.AccessToken = Configuration["Trinsic:ApiKey"];
        // For ProviderClient
        // options.ProviderKey = providerKey;
    });

    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DefaultConnection")));

    services.AddDatabaseDeveloperPageExceptionFilter();

    services.AddIdentity<IdentityUser, IdentityRole>(
            options => options.SignIn.RequireConfirmedAccount = false)
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders();

    services.AddSingleton<IEmailSender, EmailSender>();
    services.AddScoped<IUserClaimsPrincipalFactory<IdentityUser>, AdditionalUserClaimsPrincipalFactory>();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("TwoFactorEnabled", x => x.RequireClaim("amr", "mfa"));
    });

    services.AddRazorPages();
}

User secrets are used to store the secrets required by the application in development. The secrets can be added to the JSON secrets file rather than to the source code. If deploying this to Azure, the secrets would be read from Azure Key Vault. The application requires the Trinsic API key and the credential template definition ID created in Trinsic Studio.

{ "ConnectionStrings": { "DefaultConnection": "--db-connection-string--" }, "Trinsic": { "ApiKey": "--your-api-key-organisation--", "CredentialTemplateDefinitionId": "--Template-credential-definition-id--" } }

The driving license service is responsible for creating a driver license for each user. This is just example logic and is not related to SSI.

using Microsoft.EntityFrameworkCore;
using NationalDrivingLicense.Data;
using System.Threading.Tasks;

namespace NationalDrivingLicense
{
    public class DriverLicenseService
    {
        private readonly ApplicationDbContext _applicationDbContext;

        public DriverLicenseService(ApplicationDbContext applicationDbContext)
        {
            _applicationDbContext = applicationDbContext;
        }

        public async Task<bool> HasIdentityDriverLicense(string username)
        {
            if (!string.IsNullOrEmpty(username))
            {
                var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync(
                    dl => dl.UserName == username && dl.Valid == true);

                if (driverLicense != null)
                {
                    return true;
                }
            }

            return false;
        }

        public async Task<DriverLicense> GetDriverLicense(string username)
        {
            var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync(
                dl => dl.UserName == username && dl.Valid == true);

            return driverLicense;
        }

        public async Task UpdateDriverLicense(DriverLicense driverLicense)
        {
            _applicationDbContext.DriverLicenses.Update(driverLicense);
            await _applicationDbContext.SaveChangesAsync();
        }
    }
}

The Trinsic credentials service is responsible for creating the verifiable credentials. It uses the user’s driver license and creates a new credential through the Trinsic client API using the CreateCredentialAsync method. The claims must match the template created in the studio. A Trinsic-specific URL is returned, which can be used to create a QR code that can be scanned from a Trinsic digital wallet.

public class TrinsicCredentialsService
{
    private readonly ICredentialsServiceClient _credentialServiceClient;
    private readonly IConfiguration _configuration;
    private readonly DriverLicenseService _driverLicenseService;

    public TrinsicCredentialsService(ICredentialsServiceClient credentialServiceClient,
        IConfiguration configuration, DriverLicenseService driverLicenseService)
    {
        _credentialServiceClient = credentialServiceClient;
        _configuration = configuration;
        _driverLicenseService = driverLicenseService;
    }

    public async Task<string> GetDriverLicenseCredential(string username)
    {
        if (!await _driverLicenseService.HasIdentityDriverLicense(username))
        {
            throw new ArgumentException("user has no valid driver license");
        }

        var driverLicense = await _driverLicenseService.GetDriverLicense(username);

        if (!string.IsNullOrEmpty(driverLicense.DriverLicenseCredentials))
        {
            return driverLicense.DriverLicenseCredentials;
        }

        string connectionId = null; // Can be null | <connection identifier>
        bool automaticIssuance = false;
        IDictionary<string, string> credentialValues = new Dictionary<String, String>() {
            {"Issued At", driverLicense.IssuedAt.ToString()},
            {"Name", driverLicense.Name},
            {"First Name", driverLicense.FirstName},
            {"Date of Birth", driverLicense.DateOfBirth.Date.ToString()},
            {"License Type", driverLicense.LicenseType}
        };

        CredentialContract credential = await _credentialServiceClient
            .CreateCredentialAsync(new CredentialOfferParameters
            {
                DefinitionId = _configuration["Trinsic:CredentialTemplateDefinitionId"],
                ConnectionId = connectionId,
                AutomaticIssuance = automaticIssuance,
                CredentialValues = credentialValues
            });

        driverLicense.DriverLicenseCredentials = credential.OfferUrl;
        await _driverLicenseService.UpdateDriverLicense(driverLicense);

        return credential.OfferUrl;
    }
}

The DriverLicenseCredentials Razor page uses the Trinsic service and returns the credentials URL if the user has a valid driver license.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;
using NationalDrivingLicense.Data;

namespace NationalDrivingLicense.Pages
{
    public class DriverLicenseCredentialsModel : PageModel
    {
        private readonly TrinsicCredentialsService _trinsicCredentialsService;
        private readonly DriverLicenseService _driverLicenseService;

        public string DriverLicenseMessage { get; set; } = "Loading credentials";
        public bool HasDriverLicense { get; set; } = false;
        public DriverLicense DriverLicense { get; set; }
        public string CredentialOfferUrl { get; set; }

        public DriverLicenseCredentialsModel(TrinsicCredentialsService trinsicCredentialsService,
            DriverLicenseService driverLicenseService)
        {
            _trinsicCredentialsService = trinsicCredentialsService;
            _driverLicenseService = driverLicenseService;
        }

        public async Task OnGetAsync()
        {
            DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name);
            if (DriverLicense != null)
            {
                var offerUrl = await _trinsicCredentialsService
                    .GetDriverLicenseCredential(HttpContext.User.Identity.Name);
                DriverLicenseMessage = "Add your driver license credentials to your wallet";
                CredentialOfferUrl = offerUrl;
                HasDriverLicense = true;
            }
            else
            {
                DriverLicenseMessage = "You have no valid driver license";
            }
        }
    }
}

The Razor page template displays the QR code and information about the driver license issued to the logged in user.

@page
@model NationalDrivingLicense.Pages.DriverLicenseCredentialsModel
@{ }

<h3>@Model.DriverLicenseMessage</h3>
<br />
<br />
@if (Model.HasDriverLicense)
{
    <div class="container-fluid">
        <div class="row">
            <div class="col-sm">
                <div class="qr" id="qrCode"></div>
            </div>
            <div class="col-sm">
                <div>
                    <img src="~/ndl_car_01.png" width="200" alt="Driver License">
                    <div>
                        <b>Driver Licence: @Html.DisplayFor(model => model.DriverLicense.UserName)</b>
                        <hr />
                        <dl class="row">
                            <dt class="col-sm-4">Issued</dt>
                            <dd class="col-sm-8">
                                @Model.DriverLicense.IssuedAt.ToString("MM/dd/yyyy")
                            </dd>
                            <dt class="col-sm-4">
                                @Html.DisplayNameFor(model => model.DriverLicense.Name)
                            </dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Name)
                            </dd>
                            <dt class="col-sm-4">First Name</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.FirstName)
                            </dd>
                            <dt class="col-sm-4">License Type</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.LicenseType)
                            </dd>
                            <dt class="col-sm-4">Date of Birth</dt>
                            <dd class="col-sm-8">
                                @Model.DriverLicense.DateOfBirth.ToString("MM/dd/yyyy")
                            </dd>
                            <dt class="col-sm-4">Issued by</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Issuedby)
                            </dd>
                            <dt class="col-sm-4">
                                @Html.DisplayNameFor(model => model.DriverLicense.Valid)
                            </dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Valid)
                            </dd>
                        </dl>
                    </div>
                </div>
            </div>
        </div>
    </div>
}

@section scripts {
    <script src="~/js/qrcode.min.js"></script>
    <script type="text/javascript">
        new QRCode(document.getElementById("qrCode"), {
            text: "@Html.Raw(Model.CredentialOfferUrl)",
            width: 300,
            height: 300
        });
        $(document).ready(() => {
            document.getElementById('begin_token_check').click();
        });
    </script>
}

When the application is started, you can register and create a new license in the license administration.

Add licences as required. The credentials will not be created here, only when you try to get a driver license as a user.

The QR code of the license is displayed which can be scanned and added to your Trinsic digital wallet.

Notes

This works fairly well but has a number of problems. The digital wallets are vendor specific, and the QR code credential links depend on the product used to create them. The wallet implementations and the URLs created for the credentials are all product specific and rely on the goodwill of the different vendors. This would require an RFC-style specification if SSI is to become easy to use and mainstream. Without this, users would require n wallets for all the different applications and would also have problems using credentials between different systems.

Another problem is the organisation API keys used to represent the issuer or verifier applications. If these API keys get leaked, which they will, the keys are hard to replace.

Using the wallet, the user also needs to know which network to use to load the credentials, or to log in to your product. A typical user will not know where to find the required DID.

If signing in using the wallet credentials, the application does not protect against phishing. This is not good enough for high-security authentication. FIDO2 and WebAuthn should be used when handling such sensitive data, as they are designed for this.

Self-sovereign identity is at a very early stage but holds lots of potential. A lot will depend on how easy it is to use and how easy it is to implement and share credentials between systems. The quality of the credential will depend on the quality of the application issuing it.

In a follow up blog to this one, Matteo will use the verifiable credentials added to the digital wallet and verify them in a second application.

Links

https://studio.trinsic.id/

https://www.youtube.com/watch?v=mvF5gfMG9ps

https://github.com/trinsic-id/verifier-reference-app

https://docs.trinsic.id/docs/tutorial

Self sovereign identity

https://techcommunity.microsoft.com/t5/identity-standards-blog/ion-we-have-liftoff/ba-p/1441555


John Philpin : Lifestream


Quick question for the ‘Wordpress cognoscenti’

Is there an easy way in Wordpress that when someone tweets your article - the post excerpt is sent - not just the title and the link?

My thanks in anticipation.


Simon Willison


Render single selected county on a map

Another experiment at the intersection of Datasette and Observable notebooks. This one imports a full Datasette table (3,200 US counties) using streaming CSV and loads that into Observable's new Search and Table filter widgets. Once you select a single county, a second Datasette SQL query (this time returning JSON) fetches a GeoJSON representation of that county, which is then rendered as SVG using D3.

Via @simonw

Sunday, 04. April 2021

John Philpin : Lifestream


I thought the cabling behind my TV was bad …



🎼



Wow


Simon Willison


Spatialite Speed Test

Part of an excellent series of posts about SpatiaLite from 2012 - here John C. Zastrow reports on running polygon intersection queries against a 1.9GB database file in 40 seconds without an index and 0.186 seconds using the SpatialIndex virtual table mechanism.
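For anyone who wants to try the same trick, here is a rough Python sketch of the SpatialIndex pattern described above, using sqlite3 with the SpatiaLite extension loaded. The table and column names are invented for illustration, and the extension name/path varies by platform.

import sqlite3

conn = sqlite3.connect("spatial.db")
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")  # extension name/path differs per platform

# Hypothetical tables: the point is the SpatialIndex sub-select,
# which lets the query use the R*Tree index instead of scanning every geometry.
sql = """
SELECT c.name, t.name
FROM counties AS c, towns AS t
WHERE Intersects(c.geometry, t.geometry)
  AND t.ROWID IN (
    SELECT ROWID FROM SpatialIndex
    WHERE f_table_name = 'towns'
      AND search_frame = c.geometry
  )
"""
for row in conn.execute(sql):
    print(row)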


Animated choropleth of vaccinations by US county


Last week I mentioned that I've recently started scraping and storing the CDC's per-county vaccination numbers in my cdc-vaccination-history GitHub repository. This week I used an Observable notebook and d3's TopoJSON support to render those numbers on an animated choropleth map.

The full code is available at https://observablehq.com/@simonw/us-county-vaccinations-choropleth-map

From scraper to Datasette

My scraper for this data is a single line in a GitHub Actions workflow:

curl https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_county_condensed_data \ | jq . > counties.json

I pipe the data through jq to pretty-print it, just to get nicer diffs.

My build_database.py script then iterates over the accumulated git history of that counties.json file and uses sqlite-utils to build a SQLite table:

for i, (when, hash, content) in enumerate(
    iterate_file_versions(".", ("counties.json",))
):
    try:
        counties = json.loads(content)["vaccination_county_condensed_data"]
    except ValueError:
        # Bad JSON
        continue
    for county in counties:
        id = county["FIPS"] + "-" + county["Date"]
        db["daily_reports_counties"].insert(
            dict(county, id=id), pk="id", alter=True, replace=True
        )

The resulting table can be seen at cdc/daily_reports_counties.

From Datasette to Observable

Observable notebooks are my absolute favourite tool for prototyping new visualizations. There are examples of pretty much anything you could possibly want to create, and the Observable ecosystem actively encourages forking and sharing new patterns.

Loading data from Datasette into Observable is easy, using Datasette's various HTTP APIs. For this visualization I needed to pull two separate things from Datasette.

Firstly, for any given date I need the full per-county vaccination data. Here's the full table filtered for April 2nd for example.

Since that's 3,221 rows Datasette's JSON export would need to be paginated... but Datasette's CSV export can stream all 3,000+ rows in a single request. So I'm using that, fetched using the d3.csv() function:

county_data = await d3.csv( `https://cdc-vaccination-history.datasette.io/cdc/daily_reports_counties.csv?_stream=on&Date=${county_date}&_size=max` );

In order to animate the different dates, I need a list of available dates. I can get those with a SQL query:

select distinct Date from daily_reports_counties order by Date

Datasette's JSON API has a ?_shape=arrayfirst option which will return a single JSON array of the first values in each row, which means I can do this:

https://cdc-vaccination-history.datasette.io/cdc.json?sql=select%20distinct%20Date%20from%20daily_reports_counties%20order%20by%20Date&_shape=arrayfirst

And get back just the dates as an array:

[ "2021-03-26", "2021-03-27", "2021-03-28", "2021-03-29", "2021-03-30", "2021-03-31", "2021-04-01", "2021-04-02", "2021-04-03" ]

Mike Bostock has a handy Scrubber implementation which can provide a slider with the ability to play and stop iterating through values. In the notebook that can be used like so:

viewof county_date = Scrubber(county_dates, {
  delay: 500,
  autoplay: false
})

county_dates = (await fetch(
  "https://cdc-vaccination-history.datasette.io/cdc.json?sql=select%20distinct%20Date%20from%20daily_reports_counties%20order%20by%20Date&_shape=arrayfirst"
)).json()

import { Scrubber } from "@mbostock/scrubber"

Drawing the map

The map itself is rendered using TopoJSON, an extension to GeoJSON that efficiently encodes topology.

Consider the map of 3,200 counties in the USA: since counties border each other, most of those border polygons end up duplicating each other to a certain extent.

TopoJSON only stores each shared boundary once, but still knows how they relate to each other which means the data can be used to draw shapes filled with colours.
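As a rough illustration of how that sharing works (this is not the notebook's code, which uses d3 in JavaScript): a quantized TopoJSON topology stores each arc once as delta-encoded integer pairs plus a transform, and geometries reference arcs by index, with a negative (ones'-complement) index meaning the arc is traversed in reverse. A minimal Python sketch of decoding a shared arc:

def decode_arc(topology, arc_index):
    # Quantized arcs are delta-encoded [dx, dy] pairs, scaled back via the transform.
    transform = topology.get("transform", {"scale": [1, 1], "translate": [0, 0]})
    sx, sy = transform["scale"]
    tx, ty = transform["translate"]
    x = y = 0
    points = []
    for dx, dy in topology["arcs"][arc_index]:
        x += dx
        y += dy
        points.append((x * sx + tx, y * sy + ty))
    return points

def arc_coordinates(topology, index):
    # Geometries reference shared arcs by index; ~index (negative) means reversed.
    points = decode_arc(topology, index if index >= 0 else ~index)
    return points if index >= 0 else points[::-1]

Two neighbouring counties simply reference the same arc index for their shared border, which is what keeps the file small.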

I'm using the https://d3js.org/us-10m.v1.json TopoJSON file built and published with d3. Here's my JavaScript for rendering that into an SVG map:

{
  const svg = d3
    .create("svg")
    .attr("viewBox", [0, 0, width, 700])
    .style("width", "100%")
    .style("height", "auto");
  svg
    .append("g")
    .selectAll("path")
    .data(
      topojson.feature(topojson_data, topojson_data.objects.counties).features
    )
    .enter()
    .append("path")
    .attr("fill", function(d) {
      if (!county_data[d.id]) {
        return 'white';
      }
      let v = county_data[d.id].Series_Complete_65PlusPop_Pct;
      return d3.interpolate("white", "green")(v / 100);
    })
    .attr("d", path)
    .append("title") // Tooltip
    .text(function(d) {
      if (!county_data[d.id]) {
        return '';
      }
      return `${
        county_data[d.id].Series_Complete_65PlusPop_Pct
      }% of the 65+ population in ${county_data[d.id].County}, ${county_data[d.id].StateAbbr.trim()} have had the complete vaccination`;
    });
  return svg.node();
}

Next step: a plugin

Now that I have a working map, my next goal is to package this up as a Datasette plugin. I'm hoping to create a generic choropleth plugin which bundles TopoJSON for some common maps - probably world countries, US states and US counties to start off with - but also allows custom maps to be supported as easily as possible.

Datasette 0.56

Also this week, I shipped Datasette 0.56. It's a relatively small release - mostly documentation improvements and bug fixes, but I've also bundled SpatiaLite 5 with the official Datasette Docker image.

TIL this week

Useful Markdown extensions in Python

Releases this week

airtable-export: 0.6 - (8 total releases) - 2021-04-02
Export Airtable data to YAML, JSON or SQLite files on disk

datasette: 0.56 - (85 total releases) - 2021-03-29
An open source multi-tool for exploring and publishing data

John Philpin : Lifestream

If you are watching NFT auctions - Metakovan is biffing for


If you are watching NFT auctions - Metakovan is bidding for everything … (s)he’s pumping the market I think.


Just listened to the latest Core Intuition. For me the topic


Just listened to the latest Core Intuition. For me the topic wasn’t interesting BUT the laughter, humour, fun, pleasure of @manton and @danielpunkass kept me there - it sounded that this show was something they enjoyed in a way they haven’t for a while.

Just Me?


Identity Woman

Quoted In: Everything You Need to Know About “Vaccine Passports”


Earlier this week I spoke to Molly, who wrote this article about so-called “vaccine passports”. We don’t call them that, though (only governments issue passports). Digital Vaccination Certificates would be more accurate. Early on, when the Covid-19 Credentials Initiative was founded, I joined to help. In December the initiative joined LFPH and I became […]

The post Quoted In: Everything You Need to Know About “Vaccine Passports” appeared first on Identity Woman.


Article: CoinTelegraph, Women Changing Face of Enterprise Blockchain


This article is about what it says it is and quotes me. CoinTelegraph, Women Changing Face of Enterprise Blockchain

The post Article: CoinTelegraph, Women Changing Face of Enterprise Blockchain appeared first on Identity Woman.


John Philpin : Lifestream

”Patient thought, patient labor, and firmness of purpose a


”Patient thought, patient labor, and firmness of purpose are almost omnipotent.”

More at Brain Pickings



LexisNexis to Provide Giant Database of Personal Data to ICE

You gots to make your data pay for itself ..



Uber will pay a blind woman $1.1 million after drivers stranded her 14 times - The Verge

Not enough. To them it’s just a cost of doing business.


Graffiti art defaced by spectators at South Korea gallery

How could they tell?


This river in Canada is now a ‘legal person’

I wonder how it does its taxes.



‘Kill the bill’: Hundreds in UK protest against new crime law.

No social distancing there then 🤷‍♀️

But it’s important. BoJo needs to go if he doesn’t understand why this is wrong.

Meanwhile, J Pie’s take on the matter.



How Trump Steered Supporters Into Unwitting Donations.

How is it possible to keep being surprised by how low this excuse for a human being can go?

Thursday, 01. April 2021

Simon Willison

Quoting Aaron Straup Cope


If you measure things by foot traffic we [the SFO Museum] are one of the busiest museums in the world. If that is the case we are also one of the busiest museums in the world that no one knows about. Nothing in modern life really prepares you for the idea that a museum should be part of an airport. San Francisco, as I've mentioned, is funny that way.

Aaron Straup Cope


Bill Wendel's Real Estate Cafe

Use LAUGHtivism or HACKtivism to protect homebuyers from overpaying in BLIND bidding wars?


It’s April Fool’s Day again, a favorite opportunity to poke fun at #GamesREAgentsPlay and irrational exuberance in real estate.  Each year we ask, how can…

The post Use LAUGHtivism or HACKtivism to protect homebuyers from overpaying in BLIND bidding wars? first appeared on Real Estate Cafe.

Wednesday, 31. March 2021

Mike Jones: self-issued

Second Version of FIDO2 Client to Authenticator Protocol (CTAP) advanced to Public Review Draft


The FIDO Alliance has published this Public Review Draft for the FIDO2 Client to Authenticator Protocol (CTAP) specification, bringing the second version of FIDO2 one step closer to becoming a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them for user verification enhancements, manageability, enterprise features, and an Apple attestation format.

This parallels the similar progress of the closely related second version of the W3C Web Authentication (WebAuthn) specification, which recently achieved Proposed Recommendation (PR) status.


DustyCloud Brainstorms

The hurt of this moment, hopes for the future


Of the deeper thoughts I might give to this moment, I have given them elsewhere. For this blogpost, I just want to speak of feelings... feelings of hurt and hope.

I am reaching out, collecting the feelings of those I see around me, writing them in my mind's journal. Though I hold clear positions in this moment, there are few roots of feeling and emotion about the moment I feel I haven't steeped in myself at some time. Sometimes I tell this to friends, and they think maybe I am drifting from a mutual position, and this is painful for them. Perhaps they fear this could constitute or signal some kind of betrayal. I don't know what to say: I've been here too long to feel just one thing, even if I can commit to one position.

So I open my journal of feelings, and here I share some of the pages collecting the pain I see around me:

The irony of a movement wanting to be so logical and above feelings being drowned in them.

The feelings of those who found a comfortable and welcoming home in a world of loneliness, and the split between despondence and outrage for that unraveling.

The feelings of those who wanted to join that home too, but did not feel welcome.

The pent up feelings of those unheard for so long, uncorked and flowing.

The weight and shadow of a central person who seems to feel things so strongly but cannot, and does not care to learn to, understand the feelings of those around them.

I flip a few pages ahead. The pages are blank, and I interpret this as new chapters for us to write, together.

I hope we might re-discover the heart of our movement.

I hope we can find a place past the pain of the present, healing to build the future.

I hope we can build a new home, strong enough to serve us and keep us safe, but without the walls, moat, and throne of a fortress.

I hope we can be a movement that lives up to our claims: of justice, of freedom, of human rights, to bring these to everyone, especially those we haven't reached.


Simon Willison

Quoting Corey Quinn


This teaches us that—when it’s a big enough deal—Amazon will lie to us. And coming from the company that runs the production infrastructure for our companies, stores our data, and has been granted an outsized position of trust based upon having earned it over 15 years, this is a nightmare.

Corey Quinn

Tuesday, 30. March 2021

Simon Willison


ifconfig.co

I really like this: "curl ifconfig.co" gives you your IP address as plain text, "curl ifconfig.co/city" tells you your city according to MaxMind GeoLite2, "curl ifconfig.co/json" gives you all sorts of useful extra data. Suggested rate limit is one per minute, but the code is open source Go that you can run yourself.

Via Hacker News
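As a small illustration of those endpoints, here is a Python sketch. It assumes ifconfig.co returns plain text to curl-style clients (as it does on the command line) and should respect the suggested one-request-per-minute limit.

import urllib.request

def ifconfig(path=""):
    # Endpoints as described above: "", "city" or "json".
    url = "https://ifconfig.co/" + path
    req = urllib.request.Request(url, headers={"User-Agent": "curl/8.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

print(ifconfig())        # plain-text IP address
print(ifconfig("city"))  # city according to MaxMind GeoLite2
print(ifconfig("json"))  # full JSON payload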


Hyperonomy Digital Identity Lab

Why is a Glossary like a Network of Balls connected by Elastics?


Why is it good to think of a Glossary as a Network of Balls connected by Elastics?

From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Sent: March 24, 2021 8:47 AM
To: Leonard Rosenthol <lrosenth@adobe.com>; David Waite <dwaite@pingidentity.com>; Jim St.Clair <jim.stclair@lumedic.io>
Cc: Drummond Reed <drummond.reed@evernym.com>; sankarshan <sankarshan@dhiway.com>; W3C Credentials CG (Public List) <public-credentials@w3.org>; Hardman, Daniel <Daniel.Hardman@sicpa.com>
Subject: TDW Glossary [was: The “self-sovereign” problem (was: The SSI protocols challenge)]

RE: First and foremost, without a definition/clarification of “Verifiable”, both of your statements are ambiguous. 

Leonard, I don’t disagree with your feedback. What I have been rolling out to the community is selected neighborhoods of closely related terms from the TDW Glossary project.

[A Glossary is] like a bunch of balls explicitly connected by elastics: add a new ball to the model and all of the existing balls in the neighborhood need to adjust themselves. The more balls you have in the network, the more stable the network becomes. So it is with visual glossaries of terms, definitions, and relationships.

Michael Herman, March 2021

Verifiable and Verifiable Data Registry are in the model but currently, they don’t have specific verified definitions.

The TDW Glossary is a huge, visual, highly interrelated, multi-disciplinary, multi-standard, 6-domain, semantic network model (https://hyperonomy.com/2021/03/10/tdw-glossary-management-platform-gmp-initial-results/) that includes:

Common English Language concepts – various dictionaries and reference web sites
Sovrin Glossary concepts – https://sovrin.org/library/glossary/ and https://mwherman2000.github.io/sovrin-arm/
Enterprise Architecture concepts – https://pubs.opengroup.org/architecture/archimate3-doc/
HyperLedger Indy Architecture Reference Model concepts – https://mwherman2000.github.io/indy-arm/
HyperLedger Aries Architecture Reference Model concepts – https://mwherman2000.github.io/indy-arm/
Did-core concepts – https://w3c.github.io/did-core/
VC concepts – https://w3c.github.io/vc-data-model/
Others?

All new and updated terms, their definitions, metadata, and relationships, are automatically being published here: https://github.com/mwherman2000/tdw-glossary-1 (e.g. https://github.com/mwherman2000/tdw-glossary-1/blob/e4b96a0a21dd352f67b6bd93fdac66a1599ed35f/model/motivation/Principle_id-72c83ae5b01346b7892e6d2a076e787f.xml)

Other references:

https://hyperonomy.com/?s=tdw+glossary
https://hyperonomy.com/2016/04/06/definitions-controlled-vocabulary-dictionary-glossary-taxonomy-folksonomy-ontology-semantic-network/

Here’s a snapshot of what the TDW Glossary “all-in” view looks like today (aka the “network of balls connected by elastics”). The TDW Glossary has (or will very soon have) more than 500 terms and definitions plus associated metadata and relationships.

Thank you for the feedback, Leonard. Keep it coming.

Cheers,
Michael

Figure 1. TDW Glossary: “All In” View

MyDigitalFootprint

Why framing “data” as an asset or liability is dangerous


If there is one thing that can change finance’s power and dominance as a decision-making tool, it is the rest of the data. According to Google (2020), 3% of company data is finance data when considered part of an entire company’s data lake. McKinsey reports that 90% of company decisions are based on finance data alone, the same 3% of data.  

If you are in accounting, audit or finance shoes, how would you play the game to retain control when something more powerful comes on the scene? You ensure that data is within your domain: you bring out the big guns and declare that data is just another asset or liability whose rightful position is on the balance sheet. We get to value it as part of the business. If we reflect on it, finance has been shoring up its position for a while. HR, tech, processes, methods, branding, IP, legal and culture have all become subservient to and controlled by finance. In the finance control game, we are all just assets or liabilities with a budget set for us! In the context of control, power and how to make better decisions, as a CDO your friends and partners are human resources, tech, legal, operations, marketing, sales and strategy; your threat and enemy is finance.

A critical inquiry you will have on day 0 is: what weight do we, as an organisation, put on the aspects of good decision making? How do we order, and with what authority, data, finance, team, processes/methods, justifications, regulation, culture/brand/reputation, compliance/oversight/governance, reporting and stewardship? What is more important to us as a team: the trustworthiness or truthfulness of a decision? The quality of a decision? The explainability of a decision? The ability to communicate a decision, or diversity in a decision? If data is controlled by finance and seen as an asset or liability, how does that affect your decision-making capability?

As the CDO, if you determine that the axis of control remains with the CEO/CFO, it may be time to align your skills with a CEO who gets data.

Note to the CEO

It is your choice, but your new CDO is your new CFO in terms of decision-making power, which means there will be a swing in the power game. Your existing CEO/CFO axis is under threat, and we know that underhanded and political games emerge in every change of power. You will lead the choice about how you want this to play out. Given that all CEOs need to find their next role every 5 to 7 years, and that those roles will only ever require more interaction with the CDO and data, defending the CFO's power plays will not bring favour to your next role. The CFO remains critical, but the existing two-way axis (CEO/CFO) has to become a three-way game that enables the CDO to shine until a new power balance is reached.



Hyperonomy Digital Identity Lab

TDW Glossary: Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

To download this user scenario white paper, click here: Trusted Digital Web Glossary (TDW Glossary): Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

To download this user scenario white paper, click here:

Trusted Digital Web Glossary (TDW Glossary): Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

Figure 1. Trusted Digital Web Glossary (TDW Glossary): Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

Monday, 29. March 2021

Simon Willison

Hello, HPy

Hello, HPy HPy provides a new way to write C extensions for Python in a way that is compatible with multiple Python implementations at once, including PyPy. Via @antocuni

Hello, HPy

HPy provides a new way to write C extensions for Python in a way that is compatible with multiple Python implementations at once, including PyPy.

Via @antocuni


Damien Bod

Getting started with Self Sovereign Identity SSI

The blog is my getting started with Self Sovereign identity. I plan to explore developing solutions using Self Sovereign Identities, the different services and evaluate some of the user cases in the next couple of blogs. Some of the definitions are explained, but mainly it is a list of resources, links for getting started. I’m […]

This blog is my getting started with Self Sovereign Identity. I plan to explore developing solutions using Self Sovereign Identities and the different services, and to evaluate some of the use cases in the next couple of blogs. Some of the definitions are explained, but mainly it is a list of resources and links for getting started. I’m developing this blog series together with Matteo, and we will create several repos and blogs together.

Blogs in this series

Getting started with Self Sovereign Identity SSI Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic

What is Self Sovereign Identity SSI?

Self-sovereign identity is an emerging solution, built on blockchain technology, for digital identities which gives the management of identities to the users rather than to organisations. It makes it possible to solve consent and data privacy for your data, and to authenticate or revoke your identity data across organisations. It does not solve the process of authenticating users in applications. You can authenticate into your application using credentials from any trusted issuer, but this is vulnerable to phishing attacks. FIDO2 would be a better solution for this, together with an OIDC flow suited to the application type; or, if you could use your credentials together with a FIDO2 key registered for the application, this would work. The user data is stored in a digital wallet, which is usually kept on your mobile phone. Recovery of this wallet does not seem so clear, but a lot of work is going on here which should result in good solutions. The credential DIDs are stored on a blockchain, and to verify the credentials you need to search the same blockchain network.

What are the players?

Digital Identity, Decentralized identifiers (DIDs)

A digital identity can be expressed as a universal identifier which can be owned and publicly shared. A digital identity provides a way of identifying a subject (user, organisation, thing), a way of exchanging credentials with other identities and a way to verify the identity without storing data on a shared server. This can all be done across organisational boundaries. A digital identity can be found using a decentralized identifier (DID), and a working group standard specifying DIDs is in progress. The DIDs are saved to a blockchain network, from which they can be resolved.

https://w3c.github.io/did-core/

The DIDs representing identities are published to a blockchain network.
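To make the shape of a DID and its DID document concrete, here is a minimal, illustrative sketch expressed as a Python dictionary, loosely following the property names in the did-core draft; the did:example method, key material and service endpoint are placeholders, not values taken from this post.

```python
# Minimal, illustrative DID document loosely following https://w3c.github.io/did-core/.
# The did:example method, key value and service endpoint are placeholders.
did = "did:example:123456789abcdefghi"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#keys-1",
        "type": "Ed25519VerificationKey2018",
        "controller": did,
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",  # placeholder key
    }],
    "authentication": [f"{did}#keys-1"],  # how the subject proves control of the DID
    "service": [{
        "id": f"{did}#agent",
        "type": "DIDCommMessaging",
        "serviceEndpoint": "https://agent.example.com/endpoint",  # placeholder endpoint
    }],
}
```

A resolver takes the DID and returns a document shaped roughly like this; the verification methods and service endpoints are what wallets and verifiers actually use.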

Digital wallet

A digital wallet is a database which stores all the verifiable credentials you have added. This wallet is usually stored on your mobile phone and needs encryption. You want to prevent all third-party access to this wallet. Some type of recovery process is required if you use a digital wallet. A user can add or revoke credentials in the wallet. When you own a wallet, you publish a public key to a blockchain network. A DID is returned representing the digital identity for this wallet, and this public DID, saved to the network, can be used to authenticate anything interacting with the wallet. Digital wallets seem to be vendor-locked at the moment, which will be problematic for mainstream adoption.

Credentials, Verifiable credentials

https://www.w3.org/TR/vc-data-model/

A verifiable credential is an immutable set of claims created by an issuer which can be verified. A verifiable credential has claims, metadata and proof to validate the credential. A credential can be saved to a digital wallet, so no data is persisted anywhere apart from the issuer and the digital wallet.

This credential can then be used anywhere.

The credential is created by the issuer for the holder of the credential. This credential is presented to the verifier by the holder from a digital wallet and the verifier can validate the credential using the issuer DID which can be resolved from the blockchain network.
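As a rough sketch of what such a credential looks like on the wire, here is a minimal example modelled on the examples in the W3C Verifiable Credentials data model, written as a Python dictionary; the issuer, subject and proof values are placeholders rather than real signed data.

```python
# Minimal, illustrative verifiable credential modelled on the examples in
# https://www.w3.org/TR/vc-data-model/. Issuer, subject and proof values are placeholders.
verifiable_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "id": "http://example.edu/credentials/1872",
    "type": ["VerifiableCredential", "AlumniCredential"],
    "issuer": "did:example:issuer123",          # resolvable issuer DID
    "issuanceDate": "2021-03-01T19:23:24Z",
    "credentialSubject": {                      # the claims about the holder
        "id": "did:example:holder456",
        "alumniOf": "Example University",
    },
    "proof": {                                  # metadata plus signature used for verification
        "type": "Ed25519Signature2018",
        "created": "2021-03-01T19:23:24Z",
        "proofPurpose": "assertionMethod",
        "verificationMethod": "did:example:issuer123#keys-1",
        "jws": "eyJhbGciOiJFZERTQSJ9..placeholder-signature",
    },
}
```

The verifier resolves the issuer DID named in the proof, fetches the matching public key from the issuer's DID document, and checks the signature over the credential.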

Networks

The networks are different distributed blockchains with verifiable data registries using DIDs. You need to know how to resolve each DID (including the issuer DID) to verify or use a credential, and so you need to know where to find the network on which the DID is persisted. The networks are really just persisted, distributed databases. Sovrin or other blockchains can be used as a network. The blockchain holds public key DIDs, DID documents, credential definitions and schemas.
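As a hedged sketch of what resolving a DID can look like in practice, the DIF Universal Resolver project exposes resolution over HTTP; the code below assumes its public dev instance and the commonly documented endpoint path, both of which may change.

```python
# Illustrative only: resolve a DID over HTTP via a public Universal Resolver instance.
# The dev.uniresolver.io host and /1.0/identifiers/ path are assumptions that may change.
import json
import urllib.request

def resolve_did(did: str) -> dict:
    url = f"https://dev.uniresolver.io/1.0/identifiers/{did}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Example (needs network access and a DID on a supported network):
# result = resolve_did("did:sov:WRfXPg8dantKVubE3HX8pw")
# print(json.dumps(result.get("didDocument"), indent=2))
```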

Energy consumption

This is something I would like to evaluate: if this technology were to become widespread, how much energy would it cost? I have no answers to this at the moment.

Youtube videos, channels

An introduction to decentralized identities | Azure Friday

SSI Meetup

An introduction to Self-Sovereign Identity

Intro to SSI for Developers: Architecting Software Using Verifiable Credentials

SSI Ambassador

Decentralized identity explained

Evernym channel

Books, Blogs, articles, info

Self-Sovereign Identity: The Ultimate Beginners Guide!

Decentralized Identity Foundation

SELF-SOVEREIGN IDENTITY PDF by Marcos Allende Lopez

https://en.wikipedia.org/wiki/Self-sovereign_identity

https://decentralized-id.com/

https://github.com/animo/awesome-self-sovereign-identity

Organisations

https://identity.foundation/

https://github.com/decentralized-identity

sovrin

People

Drummond Reed @drummondreed
Rieks Joosten
Oskar van Deventer
Alex Preukschat @AlexPreukschat
Danny Strockis @dStrockis
Tomislav Markovski @tmarkovski
Riley Hughes @rileyphughes
Michael Boyd @michael_boyd_
Marcos Allende Lope @MarcosAllendeL
Adrian Doerk @doerkadrian
Mathieu Glaude @mathieu_glaude
Markus Sabadello @peacekeeper
Ankur Patel @_AnkurPatel
Daniel Ƀrrr @csuwildcat
Matthijs Hoekstra @mahoekst
Kaliya-Identity Woman @IdentityWoman

Products

https://docs.trinsic.id/docs

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

Companies

https://tykn.tech/

https://trinsic.id/

Microsoft Azure AD

evernym

northernblock.io

Specs

https://w3c.github.io/did-core/

https://w3c.github.io/vc-data-model/

https://www.w3.org/TR/vc-data-model/

Links

https://github.com/swiss-ssi-group

https://www.hyperledger.org/use/aries

sovrin

https://github.com/evernym

what-is-self-sovereign-identity

https://techcommunity.microsoft.com/t5/identity-standards-blog/ion-we-have-liftoff/ba-p/1441555

Sunday, 28. March 2021

Hyperonomy Digital Identity Lab

Healthcare Claim Processing User Scenario

Related References Here in Alberta, a province-wide digital ID (MyAlberta Digital ID) is being rolled to to each citizen (https://account.alberta.ca/available-services). The MyAlberta Digital ID will used to access and manage all Alberta Health Services (https://myhealth.alberta.ca/myhealthrecords) including vaccination records. MyAHS Connect … Continue reading →
Figure 1. Healthcare Claim Data Flow

Related References

Here in Alberta, a province-wide digital ID (MyAlberta Digital ID) is being rolled out to each citizen (https://account.alberta.ca/available-services). The MyAlberta Digital ID will be used to access and manage all Alberta Health Services (https://myhealth.alberta.ca/myhealthrecords), including vaccination records.
MyAHS Connect Frequently Asked Questions (https://myahsconnect.albertahealthservices.ca/MyChartPRD/Authentication/Login?mode=stdfile&option=faq)

Jon Udell

Acknowledgement of uncertainty

In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories. Here are examples of such signals: – Authors cite expert sources (positive) – Title is clickbaity (negative) And my favorite: – Authors acknowledge uncertainty (positive) Will the … Continue reading Acknowledgement of uncertainty

In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories.

Here are examples of such signals:

– Authors cite expert sources (positive)

– Title is clickbaity (negative)

And my favorite:

– Authors acknowledge uncertainty (positive)

Will the news ecosystem ever be able to label stories automatically based on automatic detection of such signals, and if so, should it? These are open questions. The best way to improve news literacy may be the SIFT method advocated by Mike Caulfield, which shifts attention away from intrinsic properties of individual news stories and advises readers to:

– Stop

– Investigate the source

– Find better coverage

– Trace claims, quotes, and media to original context

“The goal of SIFT,” writes Charlie Warzel in Don’t Go Down the Rabbit Hole, “isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not.”

SIFT favors extrinsic signals over the intrinsic ones that were the focus of the W3C Credible Web Community Group. But intrinsic signals may yet play an important role, if not as part of a large-scale automated labeling effort then at least as another kind of news literacy reflex.

This morning, in How public health officials can convince those reluctant to get the COVID-19 vaccine, I read the following:

What made these Trump supporters shift their views on vaccines? Science — offered straight-up and with a dash of humility.

The unlikely change agent was Dr. Tom Frieden, who headed the Centers for Disease Control and Prevention during the Obama administration. Frieden appealed to facts, not his credentials. He noted that the theory behind the vaccine was backed by 20 years of research, that tens of thousands of people had participated in well-controlled clinical trials, and that the overwhelming share of doctors have opted for the shots.

He leavened those facts with an acknowledgment of uncertainty. He conceded that the vaccine’s potential long-term risks were unknown. He pointed out that the virus’s long-term effects were also uncertain.

“He’s just honest with us and telling us nothing is 100% here, people,” one participant noted.

Here’s evidence that acknowledgement of uncertainty really is a powerful signal of credibility. Maybe machines will be able to detect it and label it; maybe those labels will matter to people. Meanwhile, it’s something people can detect and do care about. Teaching students to value sources that acknowledge uncertainty, and discount ones that don’t, ought to be part of any strategy to improve news literacy.


Hyperonomy Digital Identity Lab

DIF SDS/CS WG: CS Refactoring Proposal 0.2 – March 24, 2021

Contents Latest Version of the Proposal (0.2 – March 24, 2021) Agent-Hub-EDV Architecture Reference Model (AHE-ARM) 0.1 Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call OSI Stack Proposal for Confidential Storage Specification 1. Latest Version … Continue reading →
Contents

1. Latest Version of the Proposal (0.2 – March 24, 2021)
2. Agent-Hub-EDV Architecture Reference Model (AHE-ARM) 0.1
3. Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call
4. OSI Stack Proposal for Confidential Storage Specification

1. Latest Version of the Proposal (0.2 – March 24, 2021)

From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Sent: March 24, 2021 4:14 PM
To: sds-wg@lists.identity.foundation; Adam Stallard <adam.stallard@gmail.com>; Daniel Buchner (Personal) (danieljb2@gmail.com) <danieljb2@gmail.com>; Manu Sporny (msporny@digitalbazaar.com) <msporny@digitalbazaar.com>; Dmitri Zagidulin (dzagidulin@gmail.com) <dzagidulin@gmail.com>
Cc: sds-wg@dif.groups.io; Credentials Community Group <public-credentials@w3.org>; Daniel Buchner <daniel.buchner@microsoft.com>; Chris Were (chris@verida.io) <chris@verida.io>; Orie Steele (orie@transmute.industries) <orie@transmute.industries>
Subject: PROPOSAL: Confidential Storage Specification Refactoring 0.2 – March 24, 2021 – updated from version 0.1

PROPOSAL: Confidential Storage Specification Refactoring 0.2 – March 24, 2021

Based on the March 11 Zoom discussion where we worked hard to discern the differences between Agents, Hubs, and EDVs (and I believe we were largely successful, IMO), I’d like to propose to the SDS/CS WG that we refactor the current Confidential Storage specification into 3 separable parts/specifications. I also present a high-level roadmap (simple ordering) for how the WG might proceed if this refactoring is accepted (or at least, if the first part/first new specification is accepted).

Version 0.2 adds some comments about inter-specification and specification version dependencies.

Separable Part 1: Factor the current EDV-related components of the current Confidential Specification into its own specification document. This document would be a ZCAP/HTTP-specific specification document for EDVs. I also propose that the title of this specification document clearly reflect that orientation.  For example, the proposed title for this specification document is: EDV Specification 1.0: ZCAP/HTTP Data Vault Storage.

Separable Part 2: Factor the Hub-related components of the current Confidential Specification into its own specification document. This document would define the Hub components that an Agent or App can talk to as well as describe how a Hub “sits on top of an EDV service instance”. I also propose that the title of this specification document clearly reflect that orientation.  For example, the proposed title for this specification document is: Data Hub Specification 1.0: Federated (or Aggregated) Personal Data Access (or something like that).

Separable Part 3: Develop a specification for the Layer A Trusted Content Storage Kernel as its own specification document (see the diagram below). This document would define a public lower-level interface for directly interacting with local-device hosted/attached EDVs without needing or requiring a higher-level remote access protocol (e.g. HTTP). I also propose that the title of this specification document clearly reflect that orientation. For example, the proposed title for this specification document is: EDV Kernel Specification 1.0: Layer A Trusted Content Storage Kernel. This is in support of apps like the Fully Decentralized Dewitter scenario.

Roadmap: The scope of the above specifications and a high-level roadmap (simple ordering) for these specifications is illustrated below.

Figure 1. CS Specification Refactoring Proposal 0.2

Dependencies

EDV Specification 1.0: ZCAP/HTTP Data Vault Storage. The intent is for this specification to be fast-tracked based on the 3 existing prototype/PoC implementations. This specification would neither have nor take any dependencies on either of the 2 specifications below. In particular, this specification would neither have nor take any dependencies on the EDV Kernel Specification. A future version or variation of the EDV Specification may take a dependency against whatever is the current version of the EDV Kernel Specification – if the WG chooses to.

Data Hub Specification 1.0: Federated (or Aggregated) Personal Data Access. This specification would likely take a dependency against whatever is the current version of the EDV Specification (likely EDV Specification 1.0) – if the WG chooses to.

EDV Kernel Specification 1.0: Layer A Trusted Content Storage Kernel. This specification would not have nor take any hard dependencies against either of the above specifications. The EDV Kernel Specification would be guided by the needs/requirements of the prevailing EDV Specification 1.0: ZCAP/HTTP Data Vault Storage implementations in addition to the Fully Decentralized Twitter (Dewitter) user scenario. Ideally, Layer A of one of the prevailing implementations may act as the reference implementation for the EDV Kernel Specification (assuming its source code and documentation are freely licensed and open-sourced).

Best regards,
Michael Herman
Far Left Self-Sovereignist
Self-Sovereign Blockchain Architect
Trusted Digital Web
Hyperonomy Digital Identity Lab
Parallelspace Corporation


2. Agent-Hub-EDV Architecture Reference Model (AHE-ARM) 0.1

From: Michael Herman (Trusted Digital Web)
Sent: March 24, 2021 8:03 AM
To: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>; sds-wg@lists.identity.foundation; sds-wg@dif.groups.io; Credentials Community Group <public-credentials@w3.org>; Daniel Buchner <daniel.buchner@microsoft.com>
Subject: (Updated) Agent-Hub-EDV Architecture Reference Model (AHE-ARM) 0.1

After relistening to the March 11 recording with more intent and building the partial transcription (see my previous email), I’ve come up with an updated architecture reference model (ARM) for this Agent-Hub-EDV stack that is emerging.  Here’s a snapshot as of a few minutes ago…

Figure 2. Agent-Hub-EDV Architecture Reference Model (AHE-ARM) 0.1

Michael

3. Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call

From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Sent: March 24, 2021 7:38 AM
To: sds-wg@lists.identity.foundation; sds-wg@dif.groups.io; Credentials Community Group <public-credentials@w3.org>; Daniel Buchner <daniel.buchner@microsoft.com>
Subject: Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call: Hub and EDV Discussion featuring Daniel Buchner’s Description of a Hub

Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call:
Hub and EDV Discussion featuring Daniel Buchner’s Description of a Hub

I’ve posted this partial transcription here: https://hyperonomy.com/2021/03/24/transcription-of-selected-parts-of-the-dif-sds-cs-march-11-2021-zoom-call-hub-and-edv-discussion-featuring-daniel-buchners-description-of-a-hub/

Context

This is a transcription of selected parts of the EDV-Hub conversation during the DIF SDS/CS Thursday weekly Zoom call on March 11, 2021. This is the call where Daniel Buchner described (verbally) several aspects about what is and what is not a Hub.

This partial transcription focuses primarily on Daniel’s comments as they relate to the question “what is a Hub?”.

Have a great day, afternoon, or evening,

Michael

4. OSI Stack Proposal for Confidential Storage Specification

From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Sent: March 24, 2021 7:10 AM
To: sds-wg@lists.identity.foundation; sds-wg@dif.groups.io; Credentials Community Group <public-credentials@w3.org>; Daniel Buchner <daniel.buchner@microsoft.com>
Subject: RE: Is there an equivalent to the “OSI Network Stack” but for storage and storage access?

I tweaked (twerked?) up a version of https://commons.wikimedia.org/wiki/File:Osi-model-jb.svg to produce this …just an idea. It follows from a transcription of DanielB’s March 11 description of a Hub and where it sits between an Agent and an EDV.

Your thoughts? …maybe this becomes a key aspect/contribution in our CS specifications?

Figure 4. OSI Stack Proposal for Confidential Storage Specification

Best regards,
Michael Herman
Far Left Self-Sovereignist
Self-Sovereign Blockchain Architect
Trusted Digital Web
Hyperonomy Digital Identity Lab
Parallelspace Corporation


Simon Willison

Weeknotes: SpatiaLite 5, Datasette on Azure, more CDC vaccination history

This week I got SpatiaLite 5 working in the Datasette Docker image, improved the CDC vaccination history git scraper, figured out Datasette on Azure and we closed on a new home! SpatiaLite 5 for Datasette SpatiaLite 5 came out earlier this year with a bunch of exciting improvements, most notably an implementation of KNN (K-nearest neighbours) - a way to efficiently answer the question "what a

This week I got SpatiaLite 5 working in the Datasette Docker image, improved the CDC vaccination history git scraper, figured out Datasette on Azure and we closed on a new home!

SpatiaLite 5 for Datasette

SpatiaLite 5 came out earlier this year with a bunch of exciting improvements, most notably an implementation of KNN (K-nearest neighbours) - a way to efficiently answer the question "what are the 10 closest rows to this latitude/longitude point".

I love building X near me websites so I expect I'll be using this a lot in the future.

I spent a bunch of time this week figuring out how best to install it into a Docker container for use with Datasette. I finally cracked it in issue 1249 and the Dockerfile in the Datasette repository now builds with the SpatiaLite 5.0 extension, using a pattern I figured out for installing Debian unstable packages into a Debian stable base container.

When Datasette 0.56 is released the official Datasette Docker image will bundle SpatiaLite 5.0.
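Not part of the original weeknotes, but a quick way to sanity-check an install like this is to load the extension from Python's built-in sqlite3 module and ask it for its version; the "mod_spatialite" name assumes the extension is on the default library path, which may differ outside the Docker image.

```python
# Sanity check: load the SpatiaLite extension and confirm it reports a 5.x version.
# "mod_spatialite" assumes the extension is on the default library search path.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")   # may need a full path on some platforms
print(conn.execute("SELECT spatialite_version()").fetchone()[0])  # e.g. "5.0.1"
```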

CDC vaccination history in Datasette

I'm tracking the CDC's per-state vaccination numbers in my cdc-vaccination-history repository, as described in my Git scraping lightning talk.

Scraping data into a git repository to track changes to it over time is easy. What's harder is extracting that data back out of the commit history in order to analyze and visualize it later.

To demonstrate how this can work I added a build_database.py script to that repository which iterates through the git history and uses it to build a SQLite database containing daily state reports. I also added steps to the GitHub Actions workflow to publish that SQLite database using Datasette and Vercel.
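The real build_database.py lives in that repository; the sketch below shows the general pattern it follows, with assumed file names, JSON keys and table names, using git via subprocess plus sqlite-utils to write the rows.

```python
# General pattern for extracting scraped data back out of git history (a sketch).
# The file name ("data.json"), the "vaccination_data" key and the table name are
# assumptions, not the exact layout of the cdc-vaccination-history repository.
import json
import subprocess
import sqlite_utils

def commits_touching(path):
    """Yield (sha, author_date) for each commit that modified path, oldest first."""
    out = subprocess.check_output(
        ["git", "log", "--reverse", "--format=%H %aI", "--", path], text=True
    )
    for line in out.splitlines():
        sha, date = line.split(" ", 1)
        yield sha, date

def file_at_commit(sha, path):
    """Return the contents of path as it existed at the given commit."""
    return subprocess.check_output(["git", "show", f"{sha}:{path}"], text=True)

db = sqlite_utils.Database("cdc.db")
for sha, date in commits_touching("data.json"):
    snapshot = json.loads(file_at_commit(sha, "data.json"))
    rows = [
        dict(report, commit=sha, captured_at=date)
        for report in snapshot["vaccination_data"]  # assumed key name
    ]
    db["daily_reports"].insert_all(rows, alter=True)  # alter=True tolerates new columns
```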

I installed the datasette-vega visualization plugin there too. Here's a chart showing the number of doses administered over time in California.

This morning I started capturing the CDC's per-county data too, but I've not yet written code to load that into Datasette. [UPDATE: that table is now available: cdc/daily_reports_counties]

Datasette on Azure

I'm keen to make Datasette easy to deploy in as many places as possible. I already have mechanisms for publishing to Heroku, Cloud Run, Vercel and Fly.io - today I worked out the recipe needed for Azure Functions.

I haven't bundled it into a datasette-publish-azure plugin yet but that's the next step. In the meantime the azure-functions-datasette repo has a working example with instructions on how to deploy it.

Thanks go to Anthony Shaw for building out the ASGI wrapper needed to run ASGI applications like Datasette on Azure Functions.

iam-to-sqlite

I spend way too much time whinging about IAM on Twitter. I'm certain that properly learning IAM will unlock the entire world of AWS, but I have so far been unable to overcome my discomfort with it long enough to actually figure it out.

After yet another unproductive whinge this week I guilted myself into putting in some effort, and it's already started to pay off: I figured out how to dump out all existing IAM data (users, groups, roles and policies) as JSON using the aws iam get-account-authorization-details command, and got so excited about it that I built iam-to-sqlite as a wrapper around that command that writes the results into SQLite so I can browse them using Datasette!
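A stripped-down sketch of that wrapper idea (not the actual iam-to-sqlite source) is below: shell out to the AWS CLI, parse the JSON it returns, and write one table per top-level list so the data can be browsed in Datasette. The table names are my own choices and the CLI must already be configured with credentials.

```python
# Sketch of the iam-to-sqlite idea, not its actual implementation.
# Assumes the AWS CLI is installed and configured; table names are arbitrary choices.
import json
import subprocess
import sqlite_utils

raw = subprocess.check_output(
    ["aws", "iam", "get-account-authorization-details", "--output", "json"],
    text=True,
)
details = json.loads(raw)

db = sqlite_utils.Database("iam.db")
for key, table in [
    ("UserDetailList", "users"),
    ("GroupDetailList", "groups"),
    ("RoleDetailList", "roles"),
    ("Policies", "policies"),
]:
    # Nested structures (policy documents etc.) are stored as JSON text by sqlite-utils.
    db[table].insert_all(details.get(key, []), alter=True)
```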

I'm increasingly realizing that the key to me understanding how pretty much any service works is to pull their JSON into a SQLite database so I can explore it as relational tables.

A useful trick for writing weeknotes

When writing weeknotes like these, it's really useful to be able to see all of the commits from the past week across many different projects.

Today I realized you can use GitHub search for this. Run a search for author:simonw created:>2021-03-20 and filter to commits, ordered by "Recently committed".

Here's that search for me.

Django pull request accepted!

I had a pull request accepted to Django this week! It was a documentation fix for the RawSQL query expression - I found a pattern for using it as part of a .filter(id__in=RawSQL(...)) query that wasn't covered by the documentation.

And we found a new home

One other project this week: Natalie and I closed on a new home! We're moving to El Granada, a tiny town just north of Half Moon Bay, on the coast 40 minutes south of San Francisco. We'll be ten minutes from the ocean, with plenty of pinnipeds and pelicans. Exciting!

TIL this week

Running gdb against a Python process in a running Docker container
Tracing every executed Python statement
Installing packages from Debian unstable in a Docker image based on stable
Closest locations to a point
Redirecting all paths on a Vercel instance
Writing an Azure Function that serves all traffic to a subdomain

Releases this week

datasette-publish-vercel: 0.9.3 - (15 releases total) - 2021-03-26
Datasette plugin for publishing data using Vercel

sqlite-transform: 0.5 - (6 releases total) - 2021-03-24
Tool for running transformations on columns in a SQLite database

django-sql-dashboard: 0.5a0 - (12 releases total) - 2021-03-24
Django app for building dashboards using raw SQL queries

iam-to-sqlite: 0.1 - 2021-03-24
Load Amazon IAM data into a SQLite database

tableau-to-sqlite: 0.2.1 - (4 releases total) - 2021-03-22
Fetch data from Tableau into a SQLite database

c64: 0.1a0 - 2021-03-21
Experimental package of ASGI utilities extracted from Datasette

Saturday, 27. March 2021

Jon Udell

The paradox of abundance

Several years ago I bought two 5-packs of reading glasses. There was a 1.75-diopter set for books, magazines, newspapers, and my laptop (when it’s in my lap), plus a 1.25-diopter set for the screens I look at when working in my Captain Kirk chair. They were cheap, and the idea was that they’d be an … Continue reading The paradox of abundance

Several years ago I bought two 5-packs of reading glasses. There was a 1.75-diopter set for books, magazines, newspapers, and my laptop (when it’s in my lap), plus a 1.25-diopter set for the screens I look at when working in my Captain Kirk chair. They were cheap, and the idea was that they’d be an abundant resource. I could leave spectacles lying around in various places, there would always be a pair handy, no worries about losing them.

So of course I did lose them like crazy. At one point I bought another 5-pack but still, somehow, I’m down to a single 1.75 and a single 1.25. And I just realized it’s been that way for quite a while. Now that the resource is scarce, I value it more highly and take care to preserve it.

I’m sorely tempted to restock. It’s so easy! A couple of clicks and two more 5-packs will be here tomorrow. And they’re cheap, so what’s not to like?

For now, I’m resisting the temptation because I don’t like the effect such radical abundance has had on me. It’s ridiculous to lose 13 pairs of glasses in a couple of years. I can’t imagine how I’d explain that to my pre-Amazon self.

For now, I’m going to try to assign greater value to the glasses I do have, and treat them accordingly. And when I finally do lose them, I hope I’ll resist the one-click solution. I thought it was brilliant at the time, and part of me still does. But it just doesn’t feel good.


Hyperonomy Digital Identity Lab

Microsoft SharePoint Products and Technologies: 20th Anniversary Celebration

Microsoft “Tahoe” Airlift and RC1 Announcements https://news.microsoft.com/2001/01/08/microsoft-announces-branding-and-rc1-availability-of-tahoe-server/ https://searchtools.com/tools/sharepoint.html Microsoft Products and Technologies Whitepapers Client: Microsoft Corporation SharePoint Product Group / Microsoft IT Showcase Microsoft Office SharePoint Portal Server 2003: Document Lib
Microsoft “Tahoe” Airlift and RC1 Announcements

https://news.microsoft.com/2001/01/08/microsoft-announces-branding-and-rc1-availability-of-tahoe-server/
https://searchtools.com/tools/sharepoint.html

Figure 1. Microsoft SharePoint Evolution

Figure 2. Microsoft SharePoint Portal Server 2001: Installation Splash Screen

Microsoft Products and Technologies Whitepapers

Client: Microsoft Corporation SharePoint Product Group / Microsoft IT Showcase

Microsoft Office SharePoint Portal Server 2003: Document Library Migration Tools (online product help) (20 pages)
Microsoft Web Enterprise Portal: Deploying Microsoft’s Enterprise Intranet Portal Using Microsoft Office SharePoint Portal Server 2003 (34 pages)
Microsoft Web Enterprise Portal: Deploying Microsoft’s Enterprise Intranet Portal Using Microsoft Office SharePoint Portal Server 2003 (presentation) (25 slides)
Deploying SharePoint Portal Server 2003 Shared Services at Microsoft (33 pages)
Deploying SharePoint Portal Server 2003 Shared Services at Microsoft (presentation) (35 slides)
Microsoft SharePoint Portal Server 2003 Advanced Migration Scenarios (16 pages)
Migrating from SharePoint Team Services and SharePoint Portal Server 2001 to Microsoft SharePoint Products and Technologies (saved HTML page) (17 pages)
Managing SharePoint Products and Technologies Performance at Microsoft (24 pages)
Appendix A – SharePoint Report Tool (SPReport) (6 pages)
Microsoft Windows SharePoint Services: Content Migration Strategies and Solutions (30 pages)
Creating Just-In-Time Business Solutions with Windows SharePoint Services 2.0 (50 pages)
Microsoft Windows SharePoint Services 3.0: Evaluation Guide (65 pages)
Microsoft Office SharePoint 2007: Evaluation Guide (114 pages)
Microsoft Office SharePoint 2007: Evaluation Guide (2nd edition) (126 pages)
Microsoft Windows SharePoint Services 3.0: Evaluation Guide (2nd edition) (74 pages)

Exchange 2000 Web Storage System Articles

Developing Microsoft .NET collaboration solutions (Microsoft Exchange Server .NET Strategy Whitepaper)
https://docs.microsoft.com/en-us/archive/msdn-magazine/2000/july/exchange-2000-web-storage-system-workflow-tools-and-cdo-turbocharge-collaboration-apps
https://docs.microsoft.com/en-us/archive/msdn-magazine/2001/may/exchange-2000-wss-web-storage-system-improves-exchange-data-accessibility
https://gilbane.com/2000/10/microsoft-exchange-2000-tahoe-beta-available/
https://www.computerworld.com/article/2800650/microsoft-to-highlight-collaboration-at-conference.html

Outlook 10 Drops Support for the Local Web Storage System Articles

https://www.itprotoday.com/storage/microsoft-omits-local-web-storage-system-and-office-designer-office-10
https://www.zdnet.com/article/microsoft-cuts-two-office-10-features/
https://redmondmag.com/articles/2000/10/09/developer-tool-leverages-exchange-2000-tahoe-office-10.aspx?m=1
https://www.eweek.com/development/microsoft-trims-office-10-features-list/

Friday, 26. March 2021

Identity Woman

IPR - what is it? why does it matter?

I am writing this essay to support those of you who are confused about why some of the technologists keep going on and on about Intellectual Property Rights (IPR). First of all, what the heck is it? Why does it matter? How does it work? Why should we get it figured out “now” rather than […] The post IPR - what is it? why does it matter? appeared first on Identity Woman.

I am writing this essay to support those of you who are confused about why some of the technologists keep going on and on about Intellectual Property Rights (IPR). First of all, what the heck is it? Why does it matter? How does it work? Why should we get it figured out “now” rather than […]

The post IPR - what is it? why does it matter? appeared first on Identity Woman.


Tim Bouma's Blog

Verifiable Credentials: Mapping to a Generic Policy Terminology

Note: This post is the sole opinion and perspective of the author. Over the past several months I have been diligently attempting to map the dynamically evolving world of trust frameworks and verifiable credentials into a straightforward and hopefully timeless terminology that can be used for policymaking. The storyboard diagram above is what I’ve come up with so far. Counterparty — f

Note: This post is the sole opinion and perspective of the author.

Over the past several months I have been diligently attempting to map the dynamically evolving world of trust frameworks and verifiable credentials into a straightforward and hopefully timeless terminology that can be used for policymaking. The storyboard diagram above is what I’ve come up with so far.

Counterparty — for every consequential relationship or transaction there are at minimum two parties involved. Regardless of whether the interaction is collaborative, competitive, zero-sum or positive-sum, they can be considered counterparties to one another.
Claim — the something that is the matter of concern between the counterparties — it can be financial, tangible, intangible; something in the present, or a promise of something in the future.
Offer — a counterparty offers something that usually relates to a Claim.
Commit — a counterparty can commit to its Offer.
Present — a counterparty can present an Offer (or a Claim).
Accept — on the other side, the other counterparty accepts an Offer.
Issue — an Offer, once formed, can be issued in whatever form — usually a document or credential that is signed by the counterparty.
Hold — an Offer can be held. How it is held depends on its embodiment (e.g., digital, paper, verbal, etc.)
Verify — an Offer, or more specifically its embodiment, can be verified for its origin and integrity.

All of the above is made possible by:

Business Trust — how the counterparties decide to trust one another. This is the non-technical aspect of agreements, rules, treaties, legislation, etc.

And underpinned by:

Technical Trust: how the counterparties prove to one another that their trust has not been compromised. This is the technical aspect, which includes cryptographic protocols, data formats, etc.

Why is this useful? When writing policy, you need a succinct model which is clear enough for subsequent interpretation. To do this, you need conceptual buckets to drop things into. Yes, this model is likely to change, but it’s my best and latest attempt to synthesize the complex world of digital credentials into an abstraction that might help us align existing solutions while adopting exciting new capabilities.

As always, I am open for comment and constructive feedback. You know where to find me.


Hyperonomy Digital Identity Lab

TDW Glossary: SOVRONA Ecosystem Neighborhood: What is the SOVRONA Mesh (SOVRONA Network)?

Click the neighborhood to open it in a new tab. Key Definitions SOVRONA Mesh (SOVRONA Network) The SOVRONA Mesh is comprised of a network of SOVRONA Nodes (each hosting its own replica of the SOVRONA Ledger) and includes and is … Continue reading →

Click the neighborhood to open it in a new tab.

Figure 1. SOVRONA Ecosystem Neighborhood

Key Definitions

SOVRONA Mesh (SOVRONA Network)

The SOVRONA Mesh is comprised of a network of SOVRONA Nodes (each hosting its own replica of the SOVRONA Ledger) and includes and is governed by the SOVRONA Governance Framework (SGF). The SOVRONA Mesh excludes the Agents that communicate with the SOVRONA Mesh via the HyperLedger Indy Service Endpoint exposed by each SOVRONA Node.

Resources

INDY-ARM (https://mwherman2000.github.io/indy-arm/)
SOVRIN-ARM (https://mwherman2000.github.io/sovrin-arm/)


Information Answers

Is Marketing about to wake up from its Adtech nightmare?

I did not manage to watch this talk from Bob Hoffman, The Ad Contrarian yesterday. I wish I had; from the write up in Marketing Week […]

MyDigitalFootprint

#lockdown one year in, and I now question. What is a Better Normal?

I have written my fair amount over lockdown, but a core tenant of my hope was to leave the old normal behind, not wanting a new normal but a better normal.  The old normal (pre-#covid19) was as exhausting as I felt like a dog whose sole objective was to chase its own tail.   I perceived that a new normal (post-lockdown) would be straight back to doing the same.  My hope was fo


I have written my fair amount over lockdown, but a core tenet of my hope was to leave the old normal behind, wanting not a new normal but a better normal. The old normal (pre-#covid19) was exhausting; I felt like a dog whose sole objective was to chase its own tail. I perceived that a new normal (post-lockdown) would be straight back to doing the same. My hope was for a “better normal” where I got to change or pick a new objective.

Suppose I unpack the old and new normal with a time lens on both ideas. What am I really (really, honestly) doing differently hour by hour, day by day, week by week, month by month and year by year, today compared to the old normal? My new, brighter, shiny, hope-filled, better normal looks remarkably like the old when viewed through the lens of time. Meetings, calls, reading, writing, communicating and thinking. Less travel and walking has been replaced with more allocation to the other activities, but I have lost the time I used to throw away: the time to reflect, time to dwell, time to ponder, time to prepare.

Time and its allocation indicate that the old and the new normals look the same. Where has “My Better Normal” gone?

 Where has “My Better Normal Gone?”


If the work to be done is the same, then time is not an appropriate measure for observing change. How about looking at my processes and methods? My methods of work have changed, but not for the better. My old normal heuristics and rules were better, as I created more time to walk and travel and, therefore, time to reflect and prepare. I try to allocate more time to different approaches and methods, but “screen-time” appears to have only one determinant: attention (distraction and diversion included).

So it appears to me that if I want a better normal, I have to change the work to be done (a nod to Clayton Christensen). There has been one change in the work to be done which has been detrimental from my perspective: I have been forced, like everyone, to exchange time with family and friends for either time alone or jobs.

So as the anniversary passes and I reflect on a second lockdown birthday, have I spent enough time changing the work to be done? Probably not, but I now plan to.


Thursday, 25. March 2021

Nader Helmy

Why we’re launching MATTR VII

It’s no secret we need a better web. The original vision of an open and decentralized network that’s universally accessible continues to be a north star for those working to design the future of digital infrastructure for everyday people. Despite the progress that has been made in democratising access to massive amounts of information, the dire state of cybersecurity and privacy on the internet to

It’s no secret we need a better web. The original vision of an open and decentralized network that’s universally accessible continues to be a north star for those working to design the future of digital infrastructure for everyday people. Despite the progress that has been made in democratising access to massive amounts of information, the dire state of cybersecurity and privacy on the internet today present significant barriers to access for too many of our most vulnerable populations. We started MATTR because we believe that standards, transparency, and openness are not only better for users; they make for stronger systems and more resilient networks. We recognize that a decentralized web of digital trust, based on transparency, consent, and verifiable data, can help us address critical challenges on a global scale. It represents a significant opportunity to give people real agency and control over their digital lives.

Our story

At its inception, we chose “MATTR” as a moniker because we strongly believed that the movement towards more decentralized systems will fundamentally change the nature of data and privacy on the internet. Matter, in its varying states, forms the building blocks of the universe, symbolically representing the capacity for change and transformation that allows us all to grow and adapt. In another sense, people matter, and the impact of decisions we make as builders of technology extends beyond ourselves. It’s a responsibility we take seriously, as Tim Berners-Lee puts it, “to preserve new frontiers for the common good.” We proudly bear the name MATTR and the potential it represents as we’ve built out our little universe of products.

In September 2020, we introduced our decentralized identity platform. Our goal was to deliver standards-based digital trust to developers in a scalable manner. We designed our platform with a modular security architecture to enable our tools to work across many different contexts. By investing deeply in open standards and open source communities as well as developing insights through collaboration and research, we realized that developers want to use something that’s convenient without compromising on flexibility, choice, or security. That’s why we launched our platform with standards-based cryptography and configurable building blocks to suit a broad array of use cases and user experiences in a way that can evolve as technology matures.

At the same time, we’ve continued to work in open source and open standards communities with greater commitment than ever to make sure we’re helping to build a digital ecosystem that can support global scale. We launched MATTR Learn and MATTR Resources as hubs for those interested in these new technologies, developing educational content to explore concepts around decentralized identity, offering guided developer tutorials and videos, and providing documentation and API references. We also unveiled a new website, introduced a novel approach to selective disclosure of verifiable credentials, built and defined a new secure messaging standard, developed a prototype for paper-based credentials to cater for low-tech environments, and made a bridge to extend OpenID Connect with verifiable credentials. We’ve consistently released tools and added features to make our products more secure, extensible, and easy to use. In parallel, we also joined the U.S. Department of Homeland Security’s SVIP program in October to help advance the goals of decentralized identity and demonstrate provable interoperability with other vendors in a transparent and globally-visible manner. Zooming out a bit, our journey at MATTR is part of a much larger picture of passionate people working in collaborative networks across the world to make this happen.

The bigger picture

It has been an incredible year for decentralized and self-sovereign identity as a whole. In light of the global-scale disruption of COVID-19, the demand for more secure digital systems became even more critical to our everyday lives. Start-ups, corporations, governments, and standards organizations alike have been heavily investing in building technology and infrastructure to support an increasingly digital world. We’re seeing this innovation happen across the globe, from the work being done by the DHS Silicon Valley Innovation Program to the Pan-Canadian Trust Framework and New Zealand Digital Identity Trust Framework. Many global leaders are stepping up to support and invest in more privacy-preserving digital security, and for good reason. Recent legislation like GDPR and CCPA have made the role of big tech companies and user data rights increasingly important, providing a clear mandate for a wave of change that promises to strengthen the internet for the public good. This provides an incredible catalyst for all the work happening in areas such as cryptography, decentralized computing and digital governance. Just in the last year, we’ve seen the following advancements:

Secure Data Storage WG created at DIF and W3C to realize an interoperable technology for encrypted and confidential data storage
Decentralized Identifiers v1.0 specification reached “Candidate Recommendation” stage at the W3C, establishing stability in anticipation of standardization later this year
Sidetree protocol v1.0 released at DIF, providing a layer-2 blockchain solution for scalable Decentralized Identifiers built on top of ledgers such as Bitcoin and Ethereum
DIDComm Messaging v2.0 specification launched at DIF, a new protocol for secure messaging based on Decentralized Identifiers and built on JOSE encryption standards
Self-Issued OpenID (SIOP) became an official working group item at the OpenID Foundation, advancing the conversation around the role of identity providers on the web
Google’s WebID project started developing features to allow the browser to mediate interactions between end-users and identity providers in a privacy-preserving way

For more information on how all of these technologies are interconnected, read our latest paper, The State of Identity on the Web.

In addition, as part of our involvement with the DHS SVIP program, in March of this year we participated in the DHS SVIP Interoperability Plugfest. This event saw 8 different companies, representing both human-centric identity credentials as well as asset-centric supply chain traceability credentials, come together to showcase standards-compliance and genuine cross-vendor and cross-platform interoperability via testing and live demonstrations. The full presentation, including demos and videos from the public showcase day, can be found here.

These are just a handful of the significant accomplishments achieved over the last year. It’s been incredibly inspiring to see so many people working towards a common set of goals for the betterment of the web. As we’ve built our products and developed alongside the broader market, we’ve learned quite a bit about how to solve some of the core business and technical challenges associated with this new digital infrastructure. We’ve also gained a lot of insight from working directly with governments and companies across the globe to demonstrate interoperability and build bridges across different technology ecosystems.

Launching MATTR VII

We’ve been hard at work making our decentralized identity platform better than ever, and we’re proud to announce that as of today, we’re ready to support solutions that solve real-world problems for your users, in production — and it’s open to everybody.

That’s why we’re rebranding our platform to MATTR VII. Inspired by the seven states of matter, our platform gives builders and developers all the tools they need at their fingertips to create a whole new universe of decentralized products and applications. We provide all the raw technical building blocks to allow you to create exactly what you have in mind. MATTR VII is composable and configurable to fit your needs, whether you’re a well-established business with legacy systems or a start-up looking to build the next best thing in digital privacy. Best of all, MATTR VII is use-case-agnostic, meaning we’ve baked minimal dependencies into our products so you can use them the way that makes the most sense for you.

Starting today, we’re opening our platform for general availability. Specifically, that means if you’re ready to build a solution to solve a real-world problem for your users, we’re ready to support you. Simply contact us to get the ball rolling and to have your production environment set up. Of course, if you’re not quite ready for that, you can still test drive the platform by signing up for a free trial of MATTR VII to get started right away. It’s an exciting time in the MATTR universe, and we’re just getting started.

We’re continuing to build-out features to operationalize and support production use cases. To this end, in the near future we will be enhancing the sign-up and onboarding experience as well as providing tools to monitor your usage of the platform. Please reach out to give us feedback on how we can improve our products to support the solutions you’re building.

We’re excited to be in this new phase of our journey with MATTR VII. It will no doubt be another big year for decentralized identity, bringing us closer to the ultimate goal of bringing cryptography and digital trust to every person on the web.

Follow us on GitHub, Medium or Twitter for further updates.

Why we’re launching MATTR VII was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Simon Willison

sqlite-plus

sqlite-plus Anton Zhiyanov bundled together a bunch of useful SQLite C extensions for things like statistical functions, unicode string normalization and handling CSV files as virtual tables. The GitHub Actions workflow here is a particularly useful example of compiling SQLite extensions for three different platforms. Via SQLite is not a toy database

sqlite-plus

Anton Zhiyanov bundled together a bunch of useful SQLite C extensions for things like statistical functions, unicode string normalization and handling CSV files as virtual tables. The GitHub Actions workflow here is a particularly useful example of compiling SQLite extensions for three different platforms.

Via SQLite is not a toy database


Why all my servers have an 8GB empty file

Why all my servers have an 8GB empty file This is such a good idea: keep a 8GB empty spacer.img file on any servers you operate, purely so that if the server runs out of space you can delete the file and get some breathing room for getting everything else working again. I've had servers run out of space in the past and it's an absolute pain to sort out - this trick would have really helped.

Why all my servers have an 8GB empty file

This is such a good idea: keep a 8GB empty spacer.img file on any servers you operate, purely so that if the server runs out of space you can delete the file and get some breathing room for getting everything else working again. I've had servers run out of space in the past and it's an absolute pain to sort out - this trick would have really helped.

Via Hacker News


Phil Windley's Technometria

Relationships in the Self-Sovereign Internet of Things

Summary: DIDComm-capable agents provide a flexible infrastructure for numerous internet of things use cases. This post looks at Alice and her digital relationship with her F-150 truck. She and the truck have relationships and interactions with the people and institutions she engages as she co-owns, lends and sells it. These and other complicated workflows are all supported by a standards-bas

Summary: DIDComm-capable agents provide a flexible infrastructure for numerous internet of things use cases. This post looks at Alice and her digital relationship with her F-150 truck. She and the truck have relationships and interactions with the people and institutions she engages as she co-owns, lends and sells it. These and other complicated workflows are all supported by a standards-based, open-source, protocol-supporting system for secure, privacy-preserving messaging.

In The Self-Sovereign Internet of Things, I introduced the role that Self-Sovereign Identity (SSI) can play in the internet of things (IoT). The self-sovereign internet of things (SSIoT) relies on the DID-based relationships that SSI provides, and their support for standardized protocols running over DIDComm, to create an internet of things that is much richer, secure, and privacy respecting than the CompuServe of Things we're being offered today. In this post, I extend the use cases I offered in the previous post and discuss the role the heterarchical relationships found in the SSIoT play.

For this post, we're going to focus on Alice's relationship with her F-150 truck and its relationships with other entities. Why a vehicle? Because in 2013 and 2014 I built a commercial connected car product called Fuse that used the relationship-centric model I'm discussing here1. In addition, vehicles exist in a rich, complicated ecosystem that offers many opportunities for interesting relationships. Figure 1 shows some of these.

Figure 1: Vehicle relationships (click to enlarge)

The most important relationship that a car has is with its owner. But there's more than one owner over the car's lifetime. At the beginning of its life, the car's owner is the manufacturer. Later the car is owned by the dealership, and then by a person or finance company. And, of course, cars are frequently resold. Over the course of its lifetime a car will have many owners. Consequently, the car's agent must be smart enough to handle these changes in ownership and the resulting changes in authorizations.

In addition to the owner, the car has relationships with other people: drivers, passengers, and pedestrians. The nature of relationships change over time. For example, the car probably needs to maintain a relationship with the manufacturer and dealer even after they are no longer owners. With these changes to the relationship come changes in rights and responsibilities.

In addition to relationships with owners, cars also have relationships with other players in the vehicle ecosystem including: mechanics, gas stations, insurance companies, finance companies, and government agencies. Vehicles exchange data and money with these players over time. And the car might have relationships with other vehicles, traffic signals, the roadway, and even potholes.

The following sections discuss three scenarios involving Alice, the truck, and other people, institutions, and things.

Multiple Owners

One of the relationship types that the CompuServe of Things fails to handle well is multiple owners. Some companies try and others just ignore it. The problem is that when the service provider intermediates the connection to the thing, they have to account for multiple owners and allow those relationships to change over time. For a high-value product, the engineering effort is justified, but for many others, it simply doesn't happen.

Figure 2: Multiple Owners (click to enlarge)

Figure 2 shows the relationships of two owners, Alice and Bob, with the truck. The diagram is simple and hides some of the complexity of the truck dealing with multiple owners. But as I discuss in Fuse with Two Owners, some of this is simply ensuring that developers don't assume a single owner when they develop services. The infrastructure for supporting it is built into DIDComm, including standardized support for sub-protocols like Introduction.

Lending the Truck

People lend things to friends and neighbors all the time. And people rent things out. Platforms like AirBnB and Outdoorsy are built to support this for high value rentals. But what if we could do it for anything at any time without an intermediating platform? Figure 3 shows the relationships between Alice and her friend Carol who wants to borrow the truck.

Figure 3: Borrowing the Truck (click to enlarge)

Like the multiple owner scenario, Alice would first have a connection with Carol and introduce her to the truck using the Introduction sub protocol. The introduction would give the truck permission to connect to Carol and also tell the truck's agent what protocols to expose to Carol's agent. Alice would also set the relationship's longevity. The specific permissions that the "borrower" relationship enables depend, of course, on the nature of the thing.

The data that the truck stores for different activities is dependent on these relationships. For example, the owner is entitled to know everything, including trips. But someone who borrows the truck should be able to see their own trips, not those of other drivers. Relationships dictate the interactions. Of course, a truck is a very complicated thing in a complicated ecosystem. Simpler things, like a shovel, might simply keep track of who has the thing and where it is. But, as we saw in The Self-Sovereign Internet of Things, there is value in having the thing itself keep track of its interactions, location, and status.

Selling the Truck

Selling the vehicle is more complicated than the previous scenarios. In 2012, we prototyped this scenario for Swift's Innotribe innovations group and presented it at Sibos. Heather Vescent of Purple Tornado created a video that visualizes how a sale of a motorcycle might happen in a heterarchical DIDComm environment2. You can see a screencast of the prototype in operation here. One important goal of the prototype was to support Doc Searls's vision of the Intention Economy. In what follows, I've left out some of the details of what we built. You can find the complete write-up in Buying a Motorcycle: A VRM Scenario using Personal Clouds.

Figure 4: Selling the Truck (click to enlarge)

In Figure 4, Alice is selling the truck to Doug. I'm ignoring how Alice and Doug got connected3 and am just focusing on the sale itself. To complete the transaction, Alice and Doug create a relationship. They both have relationships with their respective credit unions where Doug initiates and Alice confirms the transaction. At the same time, Alice has introduced the truck to Doug as the new owner.

Alice, Doug, and the truck are all connected to the DMV and use these relationships to transfer the title. Doug can use his agent to register the truck and get plates. Doug also has a relationship with his insurance company. He introduces the truck to the insurance company so it can serve as the service intermediary for the policy issuance.

Alice is no longer the owner, but the truck knows things about her that Doug shouldn't have access to and that she wants to maintain. We can create a digital twin of the truck that is no longer attached to the physical device, but has a copy of all the trip and maintenance information that Alice co-created with the truck over the years she owned it. This digital twin has all the same functionality for accessing this data that the truck did. At the same time, Alice and Doug can negotiate what data also stays on the truck. Doug likely doesn't care about her trips and fuel purchases, but might want the maintenance data.

Implementation

A few notes on implementation:

The relationships posited in these use cases are all DIDComm-capable relationships. The workflows in these scenarios use DIDComm messaging to communicate.

I pointed out several places where the Introduction DIDComm protocol might be used. But other DIDComm protocols could be defined. For example, we could imagine workflow-specific messages for the scenario where Carol borrows the truck (see the sketch after this list). The scenario where Doug buys the truck is rife with possibilities for protocols on DIDComm that would standardize many of the interactions. Standardizing these workflows through protocol (e.g., a common protocol for vehicle registration) reduces the effort for participants in the ecosystem.

Some features, like attenuated permissions on a channel, are a mix of capabilities. DIDComm supports a Discovery protocol that allows Alice, say, to determine if Doug is open to engaging in a sale transaction. Other permissioning would be done by the agent outside the messaging system.

The agents I'm envisioning here are smart, rule-executing agents like those available in picos. Picos provide a powerful model for how a decentralized, heterarchical, interoperable internet of things can be built. Picos provide a DIDComm agent programming platform that is easily extensible. Picos live on an open-source pico engine that can run on anything that supports Node JS. They have been used to build and deploy several production systems, including the Fuse connected-car system discussed above.
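To make this concrete, here is a purely illustrative sketch of what a workflow-specific message for the lending scenario might look like, loosely shaped like a DIDComm plaintext message. The protocol type URI, DIDs and body fields are invented - no such protocol has been standardized - and a real message would be packed (encrypted to the recipient) by the agents before delivery.

import json
import time
import uuid

# Illustrative only: an invented "lend the truck" offer message.
message = {
    "id": str(uuid.uuid4()),
    "type": "https://example.org/protocols/lending/1.0/offer",  # hypothetical protocol URI
    "from": "did:example:alice",
    "to": ["did:example:truck"],
    "created_time": int(time.time()),
    "body": {
        "borrower": "did:example:carol",
        "expires": "2021-04-30T00:00:00Z",
        "permissions": ["unlock", "start", "view-own-trips"],
    },
}

# In a real system this plaintext would be encrypted and routed by the agents.
print(json.dumps(message, indent=2))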

Conclusion

DIDComm-capable agents can be used to create a sophisticated relationship network that includes people, institutions, things and even soft artifacts like interaction logs. The relationships in that network are rich and varied—just like relationships in the real world. Things, whether they are capable of running their own agents or employ a soft agent as a digital twin, are much more useful when they exist persistently, control their own agent and digital wallet, and can act independently. Things now react and respond to messages from others in the relationship network as they autonomously follow their specific rules.

Everything I've discussed here and in the previous post is doable now. By removing the intermediating administrative systems that make up the CompuServe of Things and moving to a decentralized, peer-to-peer architecture we can unlock the tremendous potential of the Self-Sovereign Internet of Things.

Notes

1. Before Fuse, we'd built a more generalized IoT system based on a relationship network called SquareTag. SquareTag was a social product platform (using the vernacular of the day) that promised to help companies have a relationship with their customers through the product, rather than merely having information about them. My company, Kynetx, and others, including Drummond Reed, were working to introduce something we called "personal clouds" that were arrayed in a relationship network. We built this on an actor-like programming model called "picos". The pico engine and programming environment are still available and have been updated to provide DID-based relationships and support for DIDComm.
2. In 2012, DIDComm didn't exist of course. We were envisioning something that Innotribe called the Digital Asset Grid (DAG) and speaking about "personal clouds", but the envisioned operation of the DAG was very much like what exists now in the DIDComm-enabled peer-to-peer network enabled by DIDs.
3. In the intentcasting prototype, Alice and Doug would have found each other through a system that matches Alice's intent to buy with Doug's intent to sell. But a simpler scenario would have Alice tell the truck to list itself on Craig's List so Alice and Doug can meet up there.

Photo Credit: F-150 from ArtisticOperations (Pixabay License)

Tags: ssiot iot vrm me2b ssi identity decentralized+identifiers relationships picos


Simon Willison

Homebrew Python Is Not For You


Homebrew Python Is Not For You

If you've been running into frustrations with your Homebrew Python environments breaking over the past few months (the dreaded "Reason: image not found" error), Justin Mayer has a good explanation. Python in Homebrew is designed to work as a dependency for their other packages, and recent policy changes that they made to support smoother upgrades have had catastrophic effects on those of us who try to use it for development environments.

Wednesday, 24. March 2021

Bill Wendel's Real Estate Cafe

Sweetest Deals of 2020: What’s really happening in luxury real estate during pandemic?

What started as an impassioned thread in a leading agent-to-agent Facebook group yesterday about Generation Priced Out morphed into a debate about whether the housing… The post Sweetest Deals of 2020: What's really happening in luxury real estate during pandemic? first appeared on Real Estate Cafe.

What started as an impassioned thread in a leading agent-to-agent Facebook group yesterday about Generation Priced Out morphed into a debate about whether the housing…

The post Sweetest Deals of 2020: What's really happening in luxury real estate during pandemic? first appeared on Real Estate Cafe.


Hyperonomy Digital Identity Lab

Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call: Hub and EDV Discussion featuring Daniel Buchner’s Description of a Hub


Transcription of Selected Parts of the DIF SDS/CS March 11, 2021 Zoom Call: Hub and EDV Discussion featuring Daniel Buchner’s Description of a Hub

Context

This is a transcription of selected parts of the EDV-Hub conversation during the DIF SDS/CS Thursday weekly Zoom call on March 11, 2021. This is the call where Daniel Buchner described (verbally) several aspects about what is and what is not a Hub.

This partial transcription focuses primarily on Daniel’s comments as they relate to the question “what is a Hub?”.

NOTE: The time code timestamps are accurate but not precise. They may be out by +/- a couple of seconds.

Recording

Link to the Zoom recording (audio plus chat): https://us02web.zoom.us/rec/share/-6PUTYTQfZt-2E23VYFFUcAQsdjiocqGy8hlOaCk1jNOC41EuEHmB8UP7hpmOmfs.qbMCfoK4E_wwDXmV

Transcription

28:00 Dmitri: EDVs are for the most part defined.

28:35 Michael: What I was looking for is a litmus test. Oh yah, it goes in this bucket. Oh yah, it goes in that bucket. We don’t have that simple working definition – pair of working definitions – that easily contrasts the two.

29:40 Adrian: I’ve never felt I’ve understood Hubs in those terms.

30:14 Daniel: I think what a Hub is definitely not completely mutable by the user. It’s a standard interface for at a basic …

30:50 Daniel: What a Hub is really is a friendly application-level set of interfaces and functionality over an EDV that provides stuff like queuing up push-style messages to the owner of the Hub when they come online and go grab all of the outstanding objects. It allows people to store data by semantic type so that I can say here’s my Tweet objects [31:12 Daniel continuing], here’s my list objects, here’s playlist objects, and be able to give permissions in capability form out to people to see and decrypt parts of those objects.

So, this is what it is in a nutshell. An EDV is a great thing but you have to answer the question as Michael did in his paper with Twitter. Can you build Twitter off of just the EDV API elegantly, maybe you could torture yourself to do it but could you build it elegantly, would it be something [31:42 Daniel continuing] that speaks to app developers – probably not. You need a layer that is more app-focused…in my opinion.

32:10 Daniel: A Hub was always designed to be a rather dumb datastore. It’s not trying to do complex data transformations. [32:16 Daniel continuing] It’s a semantic searchable store of data that you can get permissions to certain subsets of the data as an app asking Alice for those permissions. [32:27 Daniel continuing] An Agent is very much more powerful. An Agent is actually pulling data down, decrypting it, doing a bunch of things. [32:35 Daniel continuing] I would see a Hub like an application datastore that is yours. That you allow apps to store data for you or other people. You can have one living on a local device; you could have one in the cloud. You could have an instance in several places and they all can sync together and even though they are implemented differently by a cloud provider (like Microsoft), they would have all the same APIs and guarantees because it is a standard. That’s how I envision it.

32:43 Orie (in chat): https://medium.com/decentralized-identity/rhythm-and-melody-how-Hubs-and-agents-rock-together-ac2dd6bf8cf4

33:00 Michael: So, does the Hub’s datastore wrap or build upon on an EDV? …or is it completely like left vs. right?

33:01 Daniel (in chat): A Hub is a gateway/router between apps and EDVs.

33:10 Daniel: If you read that post that Orie posted in the chat. It is really great that he referenced that. [33:16 Daniel continuing] A Hub is essentially a layer that …

33:20 Daniel: We hoping that at the end of this, a Hub can sit above an EDV is where the EDV is where everything is encrypted and a sort of level-level thing. The Hub is like the application-style interface. Maybe like the Firebase API that speaks internally to store encrypted objects in a low-level way with the EDV.

33:31 Orie (in chat): EDV <> Hub <> Agent

33:37 Daniel: Orie is absolutely right but I would probably reverse the order. An Agent is super powerful; a Hub is less powerful; an EDV is very low-level storage and it is sort of like a Hub is the app layer that sits on top of an EDV …hopefully.

47:32 Daniel (in chat): It is a semantically indexed datastore, of which it can only see truly public data

52:17 Michael (in chat): Hub = intelligent public service endpoint … for EDV data

52:31 Daniel (in chat): YES!

52:39 Daniel (in chat): slightly intelligent

53:02 Daniel (in chat): only in the sense that it can layer some semantic APIs over the internal public objects.

End of Transcription
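To make the layering described on the call (EDV <> Hub <> Agent) a little more concrete, here is a purely illustrative Python sketch. The class and method names are invented for illustration and are not taken from any DIF specification or implementation.

class EncryptedDataVault:
    """Low-level storage: opaque, encrypted blobs addressed by id."""
    def __init__(self):
        self._blobs = {}

    def put(self, blob_id, ciphertext):
        self._blobs[blob_id] = ciphertext

    def get(self, blob_id):
        return self._blobs[blob_id]


class Hub:
    """App-level layer: indexes objects by semantic type and checks
    capability grants before handing back the underlying EDV objects."""
    def __init__(self, vault):
        self._vault = vault
        self._index = {}   # semantic type -> list of blob ids
        self._grants = {}  # (requester, semantic type) -> allowed?

    def store(self, semantic_type, blob_id, ciphertext):
        self._vault.put(blob_id, ciphertext)
        self._index.setdefault(semantic_type, []).append(blob_id)

    def grant(self, requester, semantic_type):
        self._grants[(requester, semantic_type)] = True

    def query(self, requester, semantic_type):
        if not self._grants.get((requester, semantic_type)):
            raise PermissionError("no capability for this type")
        return [self._vault.get(b) for b in self._index.get(semantic_type, [])]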


Margo Johnson

Interoperability is Not a Choice


This post describes Transmute’s approach to interoperable software and includes video and technical results from a recent interoperability demonstration with US DHS SVIP cohort companies.

Photo by ian dooley on Unsplash

The future of software is all about choice.

Customers want to choose technology that best solves their business problems, knowing that they will not be locked in with a vendor if that solution is no longer the best fit.

Businesses are also demanding choice about when and how they consume important data — a reaction to the data silos and expensive systems integrations of the past.

Interoperability moves from theory to reality when companies have meaningful ability to choose. It is predicated on open standards foundations that enable easy movement of data and vendors.

Interoperability with DHS SVIP Companies

Our team was proud to participate in the US Department of Homeland Security Silicon Valley Innovation Program Interoperability Plug-fest this month. DHS SVIP has been leading the charge on interoperability for years now, putting their funding and networks on the table to lead the charge.

This was Transmute’s second time participating as an awarded company of the SVIP program, and we were joined by 7 other companies from around the globe, addressing topics from supply chain traceability to digital assets for humans.

While each company is focused on slightly different industries — and therefore nuanced solutions for those customers — we are all committed (and contractually obligated by the US Government) to implement open standards infrastructure in a way that ensures verifiable information can be issued, consumed, and verified across systems using different technical “stacks”.

Technical foundations for interoperability include the W3C Verifiable Credential Data Model, JSON-LD (JSON Linked Data), the Verifiable Credentials HTTP API, and the Credential Handler API. Companies also worked from shared vocabularies based on use case, such as the Traceability Vocabulary that aggregates global supply chain ontologies for use in linked data credentials.
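For readers unfamiliar with these building blocks, a verifiable credential is a JSON-LD document with a well-known shape. The following is a minimal, hypothetical example sketched as a Python dict - the issuer, subject and values are placeholders, not one of the credentials actually exchanged during the plugfest:

# A minimal, hypothetical W3C Verifiable Credential as a Python dict.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:steel-mill",
    "issuanceDate": "2021-03-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:steel-shipment-123",
        "description": "Hot-rolled steel coil, heat number ABC-123",
    },
    # A real credential also carries a cryptographic proof (for example a
    # linked data signature or a JWT); omitted here for brevity.
}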

The following two videos show examples of interoperability in action using both Transmute and other cohort company systems. Note that the use cases have been simplified to allow for ease of demonstration to diverse audiences.

Transmute and other companies also publicly shared the results of our interoperability testing — Transmute’s results are here.

Interoperability in steel supply chain

Transmute is working directly with US Customs and Border Protection to trace the origins of steel materials using verifiable credentials. This video shows an example of multiple steel supply chain actors exchanging verifiable trade information culminating in a seamless review process from CBP.

Interoperability across industries

We also worked with other cohort companies to demonstrate how important credentials like a vaccination certificate can be used to help supply chain professionals get back to work safely. This demo includes the use of selective disclosure technology as well as off-line verification of a paper credential.

Charting the Course

Interoperability across systems moves the internet towards a more open-network approach for trustworthy exchange of information. Choice is increasingly becoming the network feature that governments and enterprises will not do without. It is pre-competitive table stakes for doing business. The path is clear for those of us developing technology in this space: interoperate, or get out. Fortunately, the competitive “pie” is big enough for all of us.

By creating interoperable systems that can seamlessly exchange trusted information we are creating a global network of information that grows in value as more players enter it.

Transmute is proud to build with talented teams from around the globe, including our cohort friends: Mattr, Mavennet, Mesur, Digital Bazaar, Secure Key, Danube Tech, and Spherity.

Thank you also to the DHS SVIP team for funding this interoperability work, and to our partners at US CBP for your support moving from technology to tactical solutions.

To learn more about Transmute’s platform and solutions contact us today.

Interoperability is Not a Choice was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Hyperonomy Digital Identity Lab

TDW Glossary: (Part of) The Big Picture

Click the diagram below to enlarge it…

Click the diagram below to enlarge it…

Figure 1. Digital Identity, Verifiable Data Registry, and Sovrin Utility Neighborhoods

Simon Willison

Understanding JSON Schema

Understanding JSON Schema Useful, comprehensive short book guide to JSON Schema, which finally helped me feel like I fully understand the specification. Via @kelseyhightower

Understanding JSON Schema

Useful, comprehensive short book guide to JSON Schema, which finally helped me feel like I fully understand the specification.

Via @kelseyhightower
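For a quick taste of what the specification covers, here is a tiny example using the Python jsonschema library; the schema and documents are invented:

from jsonschema import validate, ValidationError

# An invented schema: an object with a required string "name"
# and an optional non-negative integer "age".
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name"],
}

validate(instance={"name": "Cleo", "age": 4}, schema=schema)  # passes

try:
    validate(instance={"age": -1}, schema=schema)
except ValidationError as e:
    print("invalid:", e.message)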

Tuesday, 23. March 2021

Bill Wendel's Real Estate Cafe

iBuyers compounding confusion, lack of affordable inventory in rigged housing market

“The ShapeShifters of real estate,” that’s what this consumer advocate has called iBuyers for years because they abandon the role of real estate agent during… The post iBuyers compounding confusion, lack of affordable inventory in rigged housing market first appeared on Real Estate Cafe.

“The ShapeShifters of real estate,” that’s what this consumer advocate has called iBuyers for years because they abandon the role of real estate agent during…

The post iBuyers compounding confusion, lack of affordable inventory in rigged housing market first appeared on Real Estate Cafe.


Damien Bod

Setting dynamic Metadata for Blazor Web assembly

This post shows how HTML header meta data can be dynamically updated or changed for Blazor WebAssembly application routes hosted in ASP.NET Core. This can be used, for example, to change how URL link previews are displayed when sharing links. Code: https://github.com/damienbod/BlazorMetaData Updating the HTTP Header data to match the URL route used in the […]

This post shows how HTML header meta data can be dynamically updated or changed for Blazor WebAssembly application routes hosted in ASP.NET Core. This can be used, for example, to change how URL link previews are displayed when sharing links.

Code: https://github.com/damienbod/BlazorMetaData

Updating the HTTP header data to match the URL route used in the Blazor WASM application can be supported by using a Razor Page host file instead of a static html file. The Razor Page _Host file can use a code-behind class and a model. The model can then be used to display the different values as required. This is a hosted WASM application using ASP.NET Core as the server.

@page "/" @model BlazorMeta.Server.Pages._HostModel @namespace BlazorMeta.Pages @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @{ Layout = null; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <meta property="og:type" content="website" /> <meta property="og:title" content="Blazor BFF AAD Cookie 2021 @Model.SiteName" /> <meta property="og:url" content="https://damienbod.com" /> <meta property="og:image" content="https://avatars.githubusercontent.com/u/3442158?s=400&v=4"> <meta property="og:image:height" content="384" /> <meta property="og:image:width" content="384" /> <meta property="og:site_name" content="@Model.SiteName" /> <meta property="og:description" content="@Model.PageDescription" /> <meta name="twitter:site" content="damien_bod" /> <meta name="twitter:card" content="summary" /> <meta name="twitter:description" content="@Model.PageDescription" /> <meta name="twitter:title" content="Blazor BFF AAD Cookie 2021 @Model.SiteName" /> <title>Blazor AAD Cookie</title> <base href="~/" /> <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" /> <link href="css/app.css" rel="stylesheet" /> <link href="BlazorMeta.Client.styles.css" rel="stylesheet" /> <link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" /> </head> <body> <div id="app"> <!-- Spinner --> <div class="spinner d-flex align-items-center justify-content-center" style="position:absolute; width: 100%; height: 100%; background: #d3d3d39c; left: 0; top: 0; border-radius: 10px;"> <div class="spinner-border text-success" role="status"> <span class="sr-only">Loading...</span> </div> </div> </div> @*<component type="typeof(App)" render-mode="WebAssembly" />*@ <div id="blazor-error-ui"> <environment include="Staging,Production"> An error has occurred. This application may no longer respond until reloaded. </environment> <environment include="Development"> An unhandled exception has occurred. See browser dev tools for details. </environment> <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> <script src="_framework/blazor.webassembly.js"></script> </body> </html>

The _Host code-behind class adds the public properties of the model which are used in the _Host.cshtml template file. The OnGet method sets the values of the properties using the Path property of the HTTP request.

using Microsoft.AspNetCore.Mvc.RazorPages;

namespace BlazorMeta.Server.Pages
{
    public class _HostModel : PageModel
    {
        public string SiteName { get; set; } = "damienbod";

        public string PageDescription { get; set; } = "damienbod init description";

        public void OnGet()
        {
            (SiteName, PageDescription) = GetMetaData();
        }

        private (string, string) GetMetaData()
        {
            var metadata = Request.Path.Value switch
            {
                "/counter" => ("damienbod/counter", "This is the meta data for the counter"),
                "/fetchdata" => ("damienbod/fetchdata", "This is the meta data for the fetchdata"),
                _ => ("damienbod", "general description")
            };

            return metadata;
        }
    }
}

MapFallbackToPage must be set to use the _Host Razor Page as the fallback file.

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapControllers();
    endpoints.MapFallbackToPage("/_Host");
});

When the application is deployed to a public server, a URL with the Blazor route can be copied and pasted into the software services or tools, which then display the preview data.

Each service uses different meta data headers and you would need to add the headers with the dynamic content as required. Underneath are some examples of what can be displayed.

LinkedIn URL preview

Slack URL preview

Twitter URL preview

Microsoft teams URL preview

Links:

https://www.w3schools.com/tags/tag_meta.asp

https://cards-dev.twitter.com/validator

https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup

https://swimburger.net/blog/dotnet/pre-render-blazor-webassembly-at-build-time-to-optimize-for-search-engines


Just a Theory

Assume Positive Intensifies

How “Assume positive intent” downplays impact, gaslights employees, and absolves leaders of responsibility.

Let's talk about that well-worn bit of wisdom: “assume positive intent.” On the surface it’s excellent advice: practice empathy by mindfully assuming that people may create issues despite their best intentions. You’ve heard the parables, from Stephen Covey’s paradigm shift on the subway to David Foster Wallace’s latent condemnation of gas-guzzling traffic and soul-sucking supermarkets. Pepsi CEO Indra Nooyi has popularized the notion to ubiquity in corporate America.

In practice, the assumption of positive intent enables some pretty serious anti-patterns.

First, focusing on intent downplays impact. Good intentions don’t change the outcomes of one’s actions: we still must deal with whatever broke. At best, good intentions enable openness to feedback and growth, but do not erase those mistakes.

Which leads us to a more fundamental dilemma. In a piece for Medium last year, Ruth Terry, quoting the Kirwan Institute’s Lena Tenney, summarizes it aptly:

By downplaying actual impact, assuming positive intent can deprioritize the experience of already marginalized people.

“All of this focus on intention essentially remarginalizes a person of color who’s speaking up about racism by telling them that their experience doesn’t matter because the person didn’t mean it that way,” says Tenney, who helped create interactive implicit bias learning tools for the Kirwan Institute.

This remarginalization of the vulnerable seriously undermines the convictions behind “assume positive intent,” not to mention the culture at large. But the impact transcends racial contexts: it appears wherever people present uncomfortable issues to people in a dominant position.

Take the workplace. A brave employee publicly calls out a problematic behavior or practice, often highlighting implicit bias or, at the very least, patterns that contradict the professed values of the organization. Management nods and says, “I’m glad you brought that up, but it’s important for us all to assume positive intent in our interactions with our co-workers.” Then they explain the context for the actions, or, more likely, list potential mitigating details — without the diligence of investigation or even consequences. Assume positive intent, guess at or manufacture explanations, but little more.

This response minimizes the report’s impact to management while simultaneously de-emphasizing the experience of the worker who voiced it. Such brave folks, speaking just a little truth to power, may start to doubt themselves or what they’ve seen. The manager has successfully gaslighted the worker.

Leaders: please don’t do this. The phrase is not “Assume positive intent for me, but not for thee.” Extend the assumption only to the people reporting uncomfortable issues. There’s a damn good chance they came to you only by the assumption of positive intent: if your coworkers thought you had ill-intent, they would not speak at all.

If you feel inclined to defend behavior or patterns based on presumption of good intent, avoid that reflex, too. Good intent may be key to transgressors accepting difficult feedback, but hold them accountable and don’t let assumptions stand on their own. Impact matters, and so must consequences.

Most importantly, never use the assumption of good intent to downplay or dismiss the crucial but uncomfortable or inconvenient feedback brave souls bring to you.

Assume positive intent in yourself, never assert it in others, and know that, regardless of intent, problems still must be addressed without making excuses or devaluing or dismissing the people who have suffered them.

More about… Culture Gaslighting Management Leadership Ruth Terry Lena Tenney

Simon Willison

A Complete Guide To Accessible Front-End Components


A Complete Guide To Accessible Front-End Components

I'm so excited about this article: it brings together an absolute wealth of resources on accessible front-end components, including many existing component implementations that are accessible out of the box. Date pickers, autocomplete widgets, modals, menus - all sorts of things that I've been dragging my heels on implementing because I didn't fully understand their accessibility implications.

Monday, 22. March 2021

Bill Wendel's Real Estate Cafe

WSJ’s article on oversupply of real estate agents exposes RECartel

Applaud the Wall Street Journal’s headline about “New Realtors Pile Into Hot Housing Market” and am not surprised to find there are now “more real-estate… The post WSJ's article on oversupply of real estate agents exposes RECartel first appeared on Real Estate Cafe.

Applaud the Wall Street Journal’s headline about “New Realtors Pile Into Hot Housing Market” and am not surprised to find there are now “more real-estate…

The post WSJ's article on oversupply of real estate agents exposes RECartel first appeared on Real Estate Cafe.


Simon Willison

The Accountability Project Datasettes

The Accountability Project Datasettes The Accountability Project "curates, standardizes and indexes public data to give journalists, researchers and others a simple way to search across otherwise siloed records" - they have a wide range of useful data, and they've started experimenting with Datasette to provide SQL access to a subset of the information that they have collected.

The Accountability Project Datasettes

The Accountability Project "curates, standardizes and indexes public data to give journalists, researchers and others a simple way to search across otherwise siloed records" - they have a wide range of useful data, and they've started experimenting with Datasette to provide SQL access to a subset of the information that they have collected.

Sunday, 21. March 2021

Simon Willison

Weeknotes: django-sql-dashboard widgets


A few small releases this week, for django-sql-dashboard, datasette-auth-passwords and datasette-publish-vercel.

django-sql-dashboard widgets and permissions

django-sql-dashboard, my subset-of-Datasette-for-Django-and-PostgreSQL continues to come together.

New this week: widgets and permissions.

To recap: this Django app borrows some ideas from Datasette: it encourages you to create a read-only PostgreSQL user and grant authenticated users the ability to run one or more raw SQL queries directly against your database.

You can execute more than one SQL query and combine them into a saved dashboard, which will then show multiple tables containing the results.

This week I added support for dashboard widgets. You can construct SQL queries to return specific column patterns which will then be rendered on the page in different ways.

There are four widgets at the moment: "big number", bar chart, HTML and Markdown.

Big number is the simplest: define a SQL query that returns two columns called label and big_number and the dashboard will display that result as a big number:

select 'Entries' as label, count(*) as big_number from blog_entry;

Bar chart is more sophisticated: return columns named bar_label and bar_quantity to display a bar chart of the results:

select to_char(date_trunc('month', created), 'YYYY-MM') as bar_label, count(*) as bar_quantity from blog_entry group by bar_label order by count(*) desc

HTML and Markdown are simpler: they display the rendered HTML or Markdown, after filtering it through the Bleach library to strip any harmful elements or scripts.

select '## Ten most recent blogmarks (of ' || count(*) || ' total)' as markdown from blog_blogmark;

I'm running the dashboard application on this blog, and I've set up an example dashboard here that illustrates the different types of widget.

Defining custom widgets is easy: take the column names you would like to respond to, sort them alphabetically, join them with hyphens and create a custom widget in a template file with that name.

So if you wanted to build a widget that looks for label and geojson columns and renders that data on a Leaflet map, you would create a geojson-label.html template and drop it into your Django templates/django-sql-dashboard/widgets folder. See the custom widgets documentation for details.

Which reminds me: I decided a README wasn't quite enough space for documentation here, so I started a Read The Docs documentation site for the project.

Datasette and sqlite-utils both use Sphinx and reStructuredText for their documentation.

For django-sql-dashboard I've decided to try out Sphinx and Markdown instead, using MyST - a Markdown flavour and parser for Sphinx.

I picked this because I want to add inline help to django-sql-dashboard, and since it ships with Markdown as a dependency already (to power the Markdown widget) my hope is that using Markdown for the documentation will allow me to ship some of the user-facing docs as part of the application itself. But it's also a fun excuse to try out MyST, which so far is working exactly as advertised.

I've seen people in the past avoid Sphinx entirely because they preferred Markdown to reStructuredText, so MyST feels like an important addition to the Python documentation ecosystem.
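For anyone curious what adopting MyST involves, it is mostly a one-line change to a Sphinx conf.py. This is an illustrative minimal snippet, not the project's actual configuration:

# conf.py - minimal sketch for a Sphinx project using MyST Markdown
# (illustrative only, not django-sql-dashboard's real configuration)
project = "my-project"
extensions = ["myst_parser"]  # lets Sphinx build .md sources alongside .rst
html_theme = "furo"           # any theme works; this one is just an example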

HTTP Basic authentication

datasette-auth-passwords implements password-based authentication to Datasette. The plugin defaults to providing a username and password login form which sets a signed cookie identifying the current user.

Version 0.4 introduces optional support for HTTP Basic authentication instead - where the user's browser handles the authentication prompt.

Basic auth has some disadvantages - most notably that it doesn't support logout without the user entirely closing down their browser. But it's useful for a number of reasons:

It's easy to protect every resource on a website with it - including static assets. Adding "http_basic_auth": true to your plugin configuration adds this protection, covering all of Datasette's resources.

It's much easier to authenticate with from automated scripts. curl and requests and httpx all have simple built-in support for passing basic authentication usernames and passwords, which makes it a useful target for scripting - without having to install an additional authentication plugin such as datasette-auth-tokens (see the sketch after this list).
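For example, querying a Basic auth protected Datasette instance from a script might look like this - the URL and credentials are placeholders:

import httpx

# Query the JSON API of a Datasette instance protected by HTTP Basic auth.
# The hostname, database/table names and credentials are hypothetical.
response = httpx.get(
    "https://example-datasette.fly.dev/fixtures/facetable.json",
    auth=("root", "password-from-your-secrets-store"),
)
response.raise_for_status()
print(response.json())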

I'm continuing to flesh out authentication options for Datasette, and adding this to datasette-auth-passwords is one of those small improvements that should pay off long into the future.

A fix for datasette-publish-vercel

Datasette instances published to Vercel using the datasette-publish-vercel plugin have previously been affected by an obscure Vercel bug: characters such as + in the query string were being lost due to Vercel unescaping encoded characters before the request got to the Python application server.

Vercel fixed this earlier this month, and the latest release of datasette-publish-vercel includes their fix by switching to the new @vercel/python builder. Thanks @styfle from Vercel for shepherding this fix through!

New photos on Niche Museums

My Niche Museums project has been in hibernation since the start of the pandemic. Now that vaccines are rolling out it feels like there might be an end to this thing, so I've started thinking about my museum hobby again.

I added some new photos to the site today - on the entries for Novelty Automation, DEVIL-ish Little Things, Evergreen Aviation & Space Museum and California State Capitol Dioramas.

Hopefully someday soon I'll get to visit and add an entirely new museum!

Releases this week

django-sql-dashboard: 0.4a1 - (10 releases total) - 2021-03-21
Django app for building dashboards using raw SQL queries

datasette-publish-vercel: 0.9.2 - (14 releases total) - 2021-03-20
Datasette plugin for publishing data using Vercel

datasette-auth-passwords: 0.4 - (9 releases total) - 2021-03-19
Datasette plugin for authentication using passwords

Quoting Scott Arbeit

GitHub, by default, writes five replicas of each repository across our three data centers to protect against failures at the server, rack, network, and data center levels. When we need to update Git references, we briefly take a lock across all of the replicas in all of our data centers, and release the lock when our three-phase-commit (3PC) protocol reports success. — Scott Arbeit

GitHub, by default, writes five replicas of each repository across our three data centers to protect against failures at the server, rack, network, and data center levels. When we need to update Git references, we briefly take a lock across all of the replicas in all of our data centers, and release the lock when our three-phase-commit (3PC) protocol reports success.

Scott Arbeit

Saturday, 20. March 2021

Jon Udell

Original memories


Were it not for the Wayback Machine, a lot of my post-1995 writing would now be gone. Since the advent of online-only publications, getting published has been a lousy way to stay published. When pubs change hands, or die, the works of their writers tend to evaporate.

I’m not a great self-archivist, despite having better-than-average skills for the job. Many but not all of my professional archives are preserved — for now! — on my website. Occasionally, when I reach for a long-forgotten and newly-relevant item, only to find it 404, I’ll dig around and try to resurrect it. The forensic effort can be a big challenge; an even bigger one is avoiding self-blame.

The same thing happens with personal archives. When our family lived in New Delhi in the early 1960s, my dad captured thousands of images. Those color slides, curated in carousels and projected onto our living room wall in the years following, solidified the memories of what my five-year-old self had directly experienced. When we moved my parents to the facility where they spent their last years, one big box of those slides went missing. I try, not always successfully, to avoid blaming myself for that loss.

When our kids were little we didn’t own a videocassette recorder, which was how you captured home movies in that era. Instead we’d rent a VCR from Blockbuster every 6 months or so and spend the weekend filming. It turned out to be a great strategy. We’d set it on a table or on the floor, turn it on, and just let it run. The kids would forget it was there, and we recorded hours of precious daily life in episodic installments.

Five years ago our son-in-law volunteered the services of a friend of his to digitize those tapes, and brought us the MP4s on a thumb drive. I put copies in various “safe” places. Then we moved a couple of times, and when I reached for the digitized videos, they were gone. As were the original cassettes. This time around, there was no avoiding the self-blame. I beat myself up about it, and was so mortified that I hesitated to ask our daughter and son-in-law if they have safe copies. (Spoiler alert: they do.) Instead I’d periodically dig around in various hard drives, clouds, and boxes, looking for files or thumb drives that had to be there somewhere.

During this period of self-flagellation, I thought constantly about something I heard Roger Angell say about Carlton Fisk. Roger Angell was one of the greatest baseball writers, and Carlton Fisk one of the greatest players. One day I happened to walk into a bookstore in Harvard Square when Angell was giving a talk. In the Q and A, somebody asked: “What’s the most surprising thing you’ve ever heard a player say?”

The player was Carlton Fisk, and the surprise was his answer to the question: “How many times have you seen the video clip of your most famous moment?”

That moment is one of the most-watched sports clips ever: Fisk’s walk-off home run in game 6 of the 1975 World Series. He belts the ball deep to left field, it veers toward foul territory, he dances and waves it fair.

So, how often did Fisk watch that clip? Never.

Why not? He didn’t want to overwrite the original memory.

Of course we are always revising our memories. Photographic evidence arguably prevents us from doing so. Is that good or bad? I honestly don’t know. Maybe both.

For a while, when I thought those home videos were gone for good, I tried to convince myself that it was OK. The original memories live in my mind, I hold them in my heart, nothing can take them away, no recording can improve them.

Although that sort of worked, I was massively relieved when I finally fessed up to my negligence and learned that there are safe copies. For now, I haven’t requested them and don’t need to see them. It’s enough to know that they exist.


Hyperonomy Digital Identity Lab

ORTHOGONAL DEFECT CLASSIFICATION (ODC4MSFT)

Billg Fall 1997 Retreat: Improving the Software Development Processes at Microsoft Click here to download: ORTHOGONAL DEFECT CLASSIFICATION (ODC4MSFT)

Billg Fall 1997 Retreat: Improving the Software Development Processes at Microsoft

Click here to download:

ORTHOGONAL DEFECT CLASSIFICATION (ODC4MSFT)

Friday, 19. March 2021

Mike Jones: self-issued

OAuth 2.0 JWT Secured Authorization Request (JAR) updates addressing remaining review comments

After the OAuth 2.0 JWT Secured Authorization Request (JAR) specification was sent to the RFC Editor, the IESG requested an additional round of IETF feedback. We’ve published an updated draft addressing the remaining review comments, specifically, SecDir comments from Watson Ladd. The only normative change made since the -28 draft was to change the MIME Type […]

After the OAuth 2.0 JWT Secured Authorization Request (JAR) specification was sent to the RFC Editor, the IESG requested an additional round of IETF feedback. We’ve published an updated draft addressing the remaining review comments, specifically, SecDir comments from Watson Ladd. The only normative change made since the -28 draft was to change the MIME Type from “oauth.authz.req+jwt” to “oauth-authz-req+jwt”, per advice from the designated experts.

As a reminder, this specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and does so without introducing breaking changes. This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-31

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-31.html

Thursday, 18. March 2021

Simon Willison

How we found and fixed a rare race condition in our session handling


How we found and fixed a rare race condition in our session handling

GitHub had a terrifying bug this month where a user reported suddenly being signed in as another user. This is a particularly great example of a security incident report, explaining how GitHub identified the underlying bug, what caused it and the steps they are taking to ensure bugs like that never happen in the future. The root cause was a convoluted sequence of events which could cause a Ruby Hash to be accidentally shared between two requests, the result of a new background thread that was introduced as a performance optimization.
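GitHub's bug was in Ruby, but the underlying failure mode - a mutable object reused across concurrently handled requests - is easy to reproduce in any language. Here is a contrived Python sketch of the same class of bug, not GitHub's actual code:

import threading
import time

# A single mutable dict shared by every "request" - the bug.
shared_context = {}

def handle_request(user):
    # Each request writes its user into the shared dict...
    shared_context["current_user"] = user
    # ...does some work during which another thread can interleave...
    time.sleep(0.001)
    # ...then reads it back, possibly seeing another request's user.
    return shared_context["current_user"]

results = {}

def worker(user):
    results[user] = handle_request(user)

threads = [threading.Thread(target=worker, args=(u,)) for u in ("alice", "bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With unlucky scheduling this can print {'alice': 'bob', 'bob': 'bob'} -
# one request observes the other request's user.
print(results)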


MyDigitalFootprint

Is there a requirement for a “Data Attestation” in a Board paper?

This article is about how to ensure Directors gain assurance about the “data” that supports the recommendations in a Board paper.


I have read, written and presented my fair share of Board and Investment Committee papers over the past 25 years. As Directors, we are collectively accountable and responsible for the decisions we take. I can now observe a skills gap regarding “data”, with many board members assuming and trusting the data that forms the basis of what they are asked to approve. There are good processes, methods and procedures for ensuring that any Board papers presented are factual. However, decision-making that uses big data and its associated analysis tools, including the ML and AI that drive automation, is new and requires different expertise at a greater level of detail. Challenging data is a different problem from the already difficult task of questioning any C-suite executive in detail on their specific expertise - the general counsel, CFO and CTO. The CDO/CIO axis straddles the value line, being both a cost and a source of revenue. With “data” as the business driver, it remains superficially easier to question costs without understanding the consequences for our future decision-making ability, and even harder to unpack unethical revenue.

A classic “board paper” will likely have the following headings: Introduction, Background, Rationale, Structure/Operations, Illustrative Financials & Scenarios, Competition, Risks and Legal. Case by case, there are always minor adjustments. Finally, some form of recommendation will invite the board to note key facts and approve the action. I believe it is time for the Chair or CEO, with the support of their senior data lead (#CDO), to ask that each board paper has a new section heading called “Data Attestation.” A Data Attestation section is a declaration that there is traceable evidence and proof of the data, and that the presenter acts as a witness in certifying it. Some teams will favour this as an addition to the main flow, some as a new part of legal, others as an appendix, and some will claim it is already inherent in the process. How and where matters little compared to its intent.

Such a section could provide a solution until such time as we gain sufficient skills at the Board to test data properly. Yes, there is a high duty of care that is already intrinsic in anyone who presents a board paper (already inherent). However, data expertise and skills at the most senior levels are well below what we need, because all the politics, bias and complexity live in the weeds, where they are easy both to miss and to hide. Board members have to continue to question performance metrics (KPI and BSC) to determine the motivation for any decision, but having to trust “data” sets a different standard to those we have with audit, finance, legal and compliance. If nothing else, a “data attestation statement” will set a hurdle for those presenting, forcing them to prioritise the bias, ethics and consequences of the data used in their proposal.

Having to trust data sets a different standard to those we have with audit, finance, legal and compliance.

Arguments for and against

Key assumptions

Data is critically important to our future and is foundational for decision making going forward.

Data is more complex today and continues to increase in complexity.

The C-suite and leadership team are experts in their disciplines and has deep expertise in their critical areas, but there is a data skills gap.

There is a recognition at the board that data bias and a lack of auditability, provenance, and data lineage can lead to flawed/bad decision making.

Based on these working assumptions, I do not believe that adding a “Data Attestation” section is a long-term fix. To comply with Section 172 of the Companies Act and meet our fiduciary duties, it is an absolute requirement that we upskill. But data is not like marketing, technology, operations, finance or HR - data is new, and the vast majority of boards and senior leadership teams have little experience in big data, data analytics or coding. It is a recognition that education and skills development is a better solution, but in the gap between today and those skills arriving, we should do something. Critically, I would support introducing a data attestation section with a set date when it falls away.

It is essential to consider this, as insurance companies who offer D&O policies are looking at new clauses related to the capability of Directors who make decisions based on data, and their ability to know the data was “fit for purpose” for the decision. Insurance companies need to protect their claims business and might feel that the upskilling will take too long.

Why might this work? Do you get on a plane and ask to pilot it?  Do you go to the hospital with the correct google answer or ask a qualified Doctor?  We need to form our own view that someone has checked whether the pilot and doctor are qualified.  Today, we outsource Audit to a committee because of this same issue; it is complex. But Data is not finance, and data is not an Audit committee issue. Data is a different skill set. 


Each Board has to make its own choice. The easiest is to justify to oneself that our existing processes are good enough and we are following “best practices” - compliance thinking. Given the 76 recommendations in the Sir Donald Brydon Review of Audit, assuming that our existing processes are good enough is difficult to justify. If we want to make better decisions with data, we need to make sure we can.

Recommendation

A strong recommendation would be to put in place an “Attestation Clause”, a drop-dead date, a 2-year mandatory data training program aimed at the senior leadership team and Directors/Board members, and a succession plan that prioritises data skills for new senior and board (inc. NXD) roles.

Proposal

A “data attestation” section intends that the board receives a *signed* declaration from the proposer(s) and an independent data expert that the proposer has:

proven attestation of the data used in the board paper, 

proven rights to use the data

shown what difference/delta third-party data makes to the recommendation/outcome

ensured, to best efforts, that there is no bias or selection in the data or analysis

clearly specified any decision making that is or becomes automated 

if relevant, created the hypothesis before the analysis 

run scenarios using different data and tools

not misled the board using data

highlighted the conflicts of interest between their BSC/KPI and the approval sought

The independent auditor should not be the company's financial auditor or data lake provider; this should be an independent forensic data expert. Audit suggests sampling; this is not about sampling. It is not about creating more hurdles or handing power to an external body; this is about 3rd party verification and validation. As a company, you build a list of experts and cycle through them regularly. The auditor does not need to see the board paper, the outcome from the analysis or the recommendations - they are there to check the attestation and efficacy from end to end.  Critical will be proof of their expertise and an insurance certificate.

Whilst this is not the final wording you will use, it is the intent that is important; this does not negate or novate data risks from the risk section.

Example of a Data Attestation section

We certify by our signatures that we, the proposer and auditor, can prove to the OurCompany (PLC) Board that we have provable attestation of, and rights to, all the data used in this paper’s presentation. We have presented in this paper the sensitivity of the selected data, model and tools, and have provided evidence that a different selection of data and analysis tools equally favours the recommendation. We have tested and can verify that our data, analysis, insights and knowledge are traceable and justifiable. We declare that there are no Conflicts of Interest, and that no automation of decision making will result from this approval.




Wednesday, 17. March 2021

Simon Willison

Quoting Thea Flowers


When you have to mock a collaborator, avoid using the Mock object directly. Either use mock.create_autospec() or mock.patch(autospec=True) if at all possible. Autospeccing from the real collaborator means that if the collaborator's interface changes, your tests will fail. Manually speccing or not speccing at all means that changes in the collaborator's interface will not break your tests that use the collaborator: you could have 100% test coverage and your library would fall over when used!

Thea Flowers
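To make the advice concrete, here is a minimal sketch (PaymentGateway and its methods are invented purely for illustration) of how an autospecced mock catches interface drift that a bare Mock silently accepts:

from unittest import mock


class PaymentGateway:
    # Invented collaborator for illustration only.
    def charge(self, amount, currency):
        ...


def test_with_autospec():
    # create_autospec copies PaymentGateway's real method signatures onto the mock.
    gateway = mock.create_autospec(PaymentGateway, instance=True)
    gateway.charge(100, "USD")   # OK: matches the real signature
    # gateway.charge(100)        # would raise TypeError: missing 'currency'
    # gateway.refund(100)        # would raise AttributeError: no such method
    gateway.charge.assert_called_once_with(100, "USD")


def test_with_bare_mock():
    # A plain Mock accepts any attribute and any call signature, so these pass
    # silently even if PaymentGateway's interface changes underneath you.
    gateway = mock.Mock()
    gateway.charge(100)
    gateway.refund(100)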


logpaste


Useful example of how to use the Litestream SQLite replication tool in a Dockerized application: S3 credentials are passed to the container on startup, it then attempts to restore the SQLite database from S3 and starts a Litestream process in the same container to periodically synchronize changes back up to the S3 bucket.

Via @deliberatecoder
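The startup sequence described above is roughly the following (a sketch only: the database path, bucket name and server command are placeholders, and logpaste's actual container wiring may differ; litestream restore and litestream replicate are the two Litestream CLI commands involved):

import os
import subprocess

# Paths and bucket names here are illustrative, not taken from logpaste itself.
DB_PATH = "/data/logpaste.db"
REPLICA_URL = "s3://my-bucket/logpaste"

# 1. Restore the most recent snapshot from S3 if one exists (no-op on first boot).
subprocess.run(
    ["litestream", "restore", "-if-replica-exists", "-o", DB_PATH, REPLICA_URL],
    check=True,
)

# 2. Keep replicating changes back to the S3 bucket while the app runs.
replicator = subprocess.Popen(["litestream", "replicate", DB_PATH, REPLICA_URL])

# 3. Start the application itself (placeholder command).
subprocess.run(["./server"], env={**os.environ, "DB_PATH": DB_PATH})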


Damien Bod

The authentication pyramid


This article looks at the authentication pyramid for signing into different applications. I only compare flows which have user interaction, and only compare the 2FA/MFA differences. A lot of incorrect and aggressive marketing from large companies is blurring the differences so that they can sell their products.

When you as a user need to use an application, you need to log in. The process of logging in or signing in requires authentication of the user and the application; the sign-in or login is an authentication process.

To explain the different user and application (identity) authentication possibilities, I created an authentication pyramid diagram from worst to best. FIDO2 is the best way of authenticating, as it is the only one which protects against phishing.

Passwords which rotate

Passwords which rotate without a second factor are the worst way of authenticating your users. Forced password rotation discourages the use of password managers and encourages users to choose something simple which they can remember. In companies which use this policy, a lot of users have a simple password with a two-digit number at the end. You can guess the number from the length of time the user has been at the company and the length of time a password stays active.

Passwords

Passwords without a second factor which don't rotate are better than passwords which rotate, as people tend to choose better passwords. It is easier to educate the organisation's users to use a password manager, and then any complicated password can be used without the constant rotation annoyance. If it's a pain to use in your daily routine, people will try to avoid using it. I encourage users to use Bitwarden, but most password managers work well. Ease of use in the browser is important.

SMS MFA

SMS as a second factor is way better than passwords alone. But SMS as a second factor has too many security problems and is too easy to bypass. NIST no longer recommends using SMS as a second factor.

https://pages.nist.gov/800-63-3/sp800-63b.html

Authenticators

Authenticators are a good solution for second-factor authentication and do improve the quality and security of the authentication process compared to passwords and SMS second factors. Authenticators have many problems, but their major fault is that they do NOT protect against phishing. When using authenticators, your users are still vulnerable to phishing attacks.

If a user accesses a phishing website, the push notification will still get sent to the mobile device, or the OTP can still be validated. A lot of companies are now moving to 2FA using authenticators. One problem with this is that the push notifications seem to get sent almost at random. Authenticators are NOT enough if you have high security requirements.

Most people will recognise the popup below. This popup opens on one of my domains almost daily. I see users conditioned to just tick the checkbox, enter the code and continue working without even checking which application requested it. It has become routine to fill this in and click it away so that you can continue working. Sometimes the popup opens simply because the last code has timed out. If someone had acquired your password and was logging in, a few users would enter the code for the attacker due to this conditioning. And if you were accessing your application through a phishing website, you would not notice any difference here and would continue to validate with the code.

Another problem with authenticators is that they require a mobile phone to install the application; to complete the login, you need the phone. Most of us only have one mobile phone, so if you lose it, you are locked out of your accounts. Account recovery then comes into play, which is usually a password or SMS, so your security is reduced to the password or SMS level. If recovery requires your IT admin to reset the account, you must wait for IT in your organisation to do so, but this way the security stays at the authenticator level.

It is important that the account recovery does not use a reduced authentication process.

FIDO2 (Fast IDentity Online)

FIDO2 is the best way to authenticate identities where user interaction is required. FIDO2 protects against phishing; this is the standout feature which none of the other authentication processes offer. If you lose your FIDO2 key, you can use a second FIDO2 key. You as an organisation no longer need to worry about phishing. This could change in the future, but at present FIDO2 protects against phishing.
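A minimal sketch of why this works (not a full WebAuthn verifier; the helper below only checks the origin field of the signed client data): during a FIDO2/WebAuthn ceremony the browser records the origin it actually talked to inside clientDataJSON, which is covered by the authenticator's signature, so an assertion collected on a look-alike phishing domain can never be replayed against the real site.

import base64
import json


def check_webauthn_origin(client_data_json_b64, expected_origin):
    # clientDataJSON is produced by the browser during the WebAuthn ceremony.
    # A real relying party would also verify the challenge, the RP ID hash and
    # the signature itself; this helper only illustrates the origin check.
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == expected_origin
    )


# An assertion produced on a look-alike domain carries that domain as its origin
# and is rejected, no matter what the user typed or clicked.
spoofed = base64.urlsafe_b64encode(
    json.dumps({"type": "webauthn.get", "origin": "https://examp1e.com"}).encode()
).decode()
assert check_webauthn_origin(spoofed, "https://example.com") is False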

If you are an organisation with 100 employees, you could allow only FIDO2 and block all other authentication methods. Each employee would require two FIDO2 keys, which would cost roughly $100. That is about $10k for 100 users to fully protect an organisation and rid yourself of phishing. You could also save the cost of the phishing exercises which so many companies force us to do now.

Here's how you could configure this in Azure AD under Security > Authentication methods:

Notes:

Passwordless is great when using FIDO2. FIDO2 has best practices and recommendations for which FIDO2 flow to use, when and where. There are different types of FIDO2 flows for different use cases.

Passwordless is not only FIDO2, so be careful implementing a Passwordless flow. Make sure it’s a FIDO2 solution.

If you are using FIDO2, you will require NFC on the mobile phones to use applications from the organisation, unless you allow FIDO2 hardware built into the device.

Identity authentication is only one part of the authentication. If you use a weak OAuth2/OIDC (OpenID Connect) flow, or something created by yourself, you can still have weak authentication even when using FIDO2. It is important to follow standards. For example, using FIDO2 authentication with OIDC FAPI would give you a really good authentication solution.

Try to avoid non-transparent solutions. Big companies will always push their own solutions, and if these are not built on standards, there is no way of knowing how good they are. Security for your users is not the focus of the companies selling security solutions; selling their SOLUTION is the focus. It is up to you to know what you are buying. Security requires a good solution for both application security and network security, and a good solution will give you the chance to use best practice in both.

I would consider OIDC FAPI state of the art for high security in applications. This is what I would now consider when evaluating solutions for banks, insurance or government e-IDs. Use it together with FIDO2 and you have an easy-to-use, best-practice security solution.

Links

Home

https://github.com/OWASP/ASVS

NIST

https://pages.nist.gov/800-63-3/sp800-63b.html

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-passwordless

https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key

https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AdminAuthMethods

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/mfa

FIDO2: WebAuthn & CTAP

https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html

https://github.com/herrjemand/awesome-webauthn

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-passwordless#fido2-security-keys

https://bitwarden.com/

Tuesday, 16. March 2021

Doc Searls Weblog

How anywhere is everywhere


On Quora, somebody asked, Which is your choice, radio, television, or the Internet?. I replied with the following.

If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. But many people in rural Alaska served by KSKO and its tiny repeaters don’t have Internet access, so the station is either their only choice, or one of a few. So we use the gear we have to get the content we can.

TV viewing is also drifting from cable to à la carte subscription services (Netflix, et al.) delivered over the Internet, in much the same way that it drifted earlier from over-the-air to cable. And yet over-the-air is still with us. It’s also significant that most of us get our Internet over connections originally meant only for cable TV, or over cellular connections originally meant only for telephony.

Marshall and Eric McLuhan, in Laws of Media, say every new medium or technology does four things: enhance, retrieve, obsolesce and reverse. (These are also called the Tetrad of Media Effects.) And there are many answers in each category. For example, the Internet—

enhances content delivery; retrieves radio, TV and telephone technologies; obsolesces over-the-air listening and viewing; reverses into tribalism;

—among many other effects within each of those.

The McLuhans also note that few things get completely obsolesced. For example, there are still steam engines in the world. Some people still make stone tools.

It should also help to note that the Internet is not a technology. At its base it’s a protocol—TCP/IP—that can be used by a boundless variety of technologies. A protocol is a set of manners among things that compute and communicate. What made the Internet ubiquitous and all-consuming was the adoption of TCP/IP by things that compute and communicate everywhere in the world.

This development—the worldwide adoption of TCP/IP—is beyond profound. It’s a change as radical as we might have if all the world suddenly spoke one common language. Even more radically, it creates a second digital world that coexists with our physical one.

In this digital world, we are at a functional distance apart of zero. We also have no gravity. We are simply present with each other. This means the only preposition that accurately applies to our experience of the Internet is with. Because we are not really on or through or over anything. Those prepositions refer to the physical world. The digital world is some(non)thing else.

This is why referring to the Internet as a medium isn’t quite right. It is a one-of-one, an example only of itself. Like the Universe. That you can broadcast through the Internet is just one of the countless activities it supports. (Even though the it is not an it in the material sense.)

I think we are only at the beginning of coming to grips with what it all means, besides a lot.

Monday, 15. March 2021

@_Nat Zone

Appearing at Fin/Sum 2021 Day 2: how should financial services and technology evolve in the post-COVID era?


From 9:10 AM on March 17, 2021, I will appear as a moderator at Fin/Sum 2021.

Fin/Sum 2021 is co-hosted by Nikkei Inc. and the Financial Services Agency (FSA), and the Day 2 main hall program is put together mainly by the FSA. The day opens with remarks from State Minister 赤澤 and closes with remarks from Deputy Prime Minister 麻生. My session comes immediately after the greeting by 赤澤 亮正, State Minister of Cabinet Office (Financial Services), and its title is "How should financial services and technology evolve in the post-COVID era?". In a sense it is an important session that sets the direction for the whole day.

The panelists are:

Samson Mow, Blockstream CSO and Pixelmatic CEO
Brad Carr, Managing Director of Digital Finance, Institute of International Finance
横田 浩二, President of Minna Bank and Director and Executive Officer of Fukuoka Financial Group
松尾 元信, Secretary-General, Securities and Exchange Surveillance Commission, Financial Services Agency

An impressive lineup indeed.

As for the structure of the panel, I plan to start by speaking briefly about what "trust" means, then hand over to President 横田, Mr. Carr and Mr. Mow in turn, after which Secretary-General 松尾 will give the regulator's perspective, followed by about 15 minutes of free discussion and then the closing.

This year the event appears to be a hybrid, combining online and in-person attendance. If you have time, I hope you will take a look. Tickets can be purchased here. On-site attendance is expensive at 100,000 yen, but there are also remote options: a free tier (no archive access) and a 5,000 yen tier (with archive access).

The full program for the day is included below.

Program (Day 2) [2021-03-17]

9:00-9:05

Opening remarks

赤澤 亮正, State Minister of Cabinet Office (Financial Services)

9:10-10:00

Session 1: How should financial services and technology evolve in the post-COVID era? Samson Mow, Blockstream CSO and Pixelmatic CEO; Brad Carr, Managing Director of Digital Finance, Institute of International Finance; 横田 浩二, President of Minna Bank and Director and Executive Officer of Fukuoka Financial Group; 松尾 元信, Secretary-General, Securities and Exchange Surveillance Commission, Financial Services Agency

Moderator
崎村 夏彦, Chairman, OpenID Foundation

10:20-11:10

Session 2: Building blocks for establishing digital "trust". Moti Yung, Security & Privacy Research Scientist, Google; Kristina Yasuda, Identity Standards Architect, Microsoft Corporation; Torsten Lodderstedt, CTO, yes.com; 手塚 悟, Professor, Faculty of Environment and Information Studies, Keio University

Moderator

松尾 真一郎, Research Professor, Department of Computer Science, Georgetown University, and Head of the Blockchain Research Group, CIS Laboratories, NTT Research

11:30-12:20

Session 3: The changing shape of trust in digital assets. Kayvon Pirestani, Head of APAC Institutional Coverage & COO, Coinbase Singapore; Josh Deems, Head of Business Development, Fidelity Digital Assets; Jean-Marie Mognetti, CEO, CoinShares International and Komainu Holdings. Moderator: Michael Casey, CCO, CoinDesk

12:50-13:40

Session 4: The FSA's international joint research project on blockchain: the potential and challenges of digital identity. 佐古 和恵, Professor, Department of Computer Science and Engineering, School of Fundamental Science and Engineering, Waseda University, and Vice Chair, MyData Japan; 間下 公照, Deputy General Manager, Innovation Division, JCB; Andre Boysen, Chief Identity Officer, SecureKey Technologies Inc.; 渡辺 翔太, Senior Consultant, Corporate Innovation Consulting Department, Nomura Research Institute

Moderator

牛田 遼介, Senior Fellow, Georgetown University, and Deputy Director, FinTech Office, Financial Services Agency

14:00-14:50

Session 5: Rethinking the role of finance in the API economy. 藤井 達人, Executive Officer and Head of Financial Innovation, Enterprise Business Division, Microsoft Japan, and FINOVATORS Co-Founder; 丸山 弘毅, President and CEO, Infcurion; 富士榮 尚寛, Co-chair, OpenID Foundation eKYC and Identity Assurance WG, and Director, OpenID Foundation Japan; 松尾 拓哉, Director and Head of Marketing, JAL Payment Port

Moderator
大久保 光伸, Advisor, Financial Services Agency, and Assistant to the Government CIO, Cabinet Secretariat

15:10-15:55

Special roundtable 1: What are user-driven financial services? 沖田 貴史, President and CEO, Nudge, and Chair, Fintech Association of Japan; 河合 祐子, Senior Researcher, CEO Office, Japan Digital Design; 加藤 修一, Executive Officer and President of the 8th Company, ITOCHU Corporation

Moderator
岡田 大, Director, Strategy Development Division, Strategy Development and Management Bureau, Financial Services Agency

16:15-17:05

Session 6: BGIN, reflecting on its first year and the outlook ahead. 鈴木 茂哉, Project Professor, Graduate School of Media and Governance, Keio University; Roman Danziger Pavlov, CEO, Safestead; Julien Bringer, CEO, Kallistech; Manoj Kumar Sinha, Deputy General Manager, Reserve Bank of India

Moderator
Mai Santamaria, Head of Financial Advisory, Department of Finance, Ireland

17:25-18:10

Special roundtable 2: Driving fintech innovation for a new era of financial services. 貴志 優紀, Director, Fintech and Brand & Retail, Plug and Play Japan, and Director, Fintech Association of Japan; Richard Knox, Head of Financial Services Group (International), HM Treasury; Pat Patel, Principal Executive Officer, Monetary Authority of Singapore

Moderator
野崎 彰, Organisational Strategy Officer and Head of the FinTech Office, Financial Services Agency

18:15-18:20

Closing remarks

麻生 太郎, Deputy Prime Minister, Minister of Finance, and Minister of State for Financial Services

The post "Appearing at Fin/Sum 2021 Day 2: how should financial services and technology evolve in the post-COVID era?" first appeared on @_Nat Zone.

Simon Willison

sqlite-uuid

sqlite-uuid Another Python package that wraps a SQLite module written in C: this one provides access to UUID functions as SQLite functions. Via Ricardo Ander-Egg

sqlite-uuid

Another Python package that wraps a SQLite module written in C: this one provides access to UUID functions as SQLite functions.

Via Ricardo Ander-Egg


sqlite-spellfix

sqlite-spellfix I really like this pattern: "pip install sqlite-spellfix" gets you a Python module which includes a compiled (on your system when pip install ran) copy of the SQLite spellfix1 module, plus a utility variable containing its path so you can easily load it into a SQLite connection. Via sqlite_uuid

sqlite-spellfix

I really like this pattern: "pip install sqlite-spellfix" gets you a Python module which includes a compiled (on your system when pip install ran) copy of the SQLite spellfix1 module, plus a utility variable containing its path so you can easily load it into a SQLite connection.

Via sqlite_uuid


Nader Helmy

The State of Identity on the Web


The evolution of identity on the web is happening at a rapid pace, with many different projects and efforts converging around similar ideas with their own interpretations and constraints. It can be difficult to parse through all of these developments while the dust hasn’t completely settled, but looking at these issues holistically, we can see a much bigger pattern emerging. In fact, many of the modern innovations related to identity on the web are actually quite connected and build upon each other in a myriad of complementary ways.

The rise of OpenID Connect

The core of modern identity is undoubtedly OpenID Connect (OIDC), the de-facto standard for user authentication and identity protocol on the internet. It’s a protocol that enables developers building apps and services to verify the identity of their users and obtain basic profile information about them in order to create an authenticated user experience. Because OIDC is an identity layer built on top of the OAuth 2.0 framework, it can also be used as an authorization solution. Its development was significant for many reasons, in part because it came with the realization that identity on the web is fundamental to many different kinds of interactions, and these interactions need simple and powerful security features that are ubiquitous and accessible. Secure digital identity is a problem that doesn’t make sense to solve over and over again in different ways with each new application, but instead needs a standard and efficient mechanism that’s easy to use and works for the majority of people.
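In concrete terms (a minimal sketch; the endpoints and client identifiers below are placeholders, and signature validation is deliberately skipped), an OIDC login is a redirect to the provider's authorization endpoint with scope=openid, after which the relying party receives an ID token, a signed JWT whose payload carries the user's claims:

import base64
import json
from urllib.parse import urlencode

# 1. The relying party redirects the user to the OpenID Provider's authorization
#    endpoint. All endpoint and client values here are illustrative placeholders.
auth_request = "https://idp.example.com/authorize?" + urlencode({
    "response_type": "code",
    "scope": "openid profile email",
    "client_id": "my-client-id",
    "redirect_uri": "https://rp.example.com/callback",
    "state": "af0ifjsldkj",
    "nonce": "n-0S6_WzA2Mj",
})

# 2. After exchanging the returned code at the token endpoint, the relying party
#    receives an ID token: a JWT whose payload carries claims such as iss, sub,
#    aud, exp and nonce. Decoding the payload is trivial, but a real client must
#    also verify the signature, issuer, audience and nonce.
def decode_id_token_payload(jwt):
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))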

OpenID Connect introduced a convenient and accessible protocol for identity that required less setup and complexity for developers building different kinds of applications and programs. In many ways, protocols like OIDC and OAuth 2.0 piggy-backed on the revolution that was underfoot in the mid 2000’s as developers fled en-mass from web based systems heavily reliant on technologies like XML (and consequently identity systems built upon these technologies like SAML), for simpler systems based on JSON. OpenID built on the success of OAuth and offered a solution that improved upon existing identity and web security technologies which were vulnerable to attacks like screen scraping. This shift towards a solution built upon modern web technologies with an emphasis on being easy-to-use created ripe conditions for adoption of these web standards.

OIDC’s success has categorically sped up both the web and native application development cycle when it comes to requiring the integration of identity, and as a result, users have now grown accustomed to having sign-in options aplenty with all their favorite products and services. It’s not intuitively clear to your average user why they need so many different logins and it’s up to the user to manage which identities they use with which services, but the system works and provides a relatively reliable way to integrate identity on the web.

Success and its unintended consequences

While OIDC succeeded in simplicity and adoption, what has emerged over time are a number of limitations and challenges that have come as a result of taking these systems to a global scale.

When it comes to the market for consumer identity, there are generally three main actors present:

Identity Providers

Relying Parties

End-Users

The forces in the market that cause their intersection to exist are complex, but can be loosely broken down into the interaction between each pair of actors.

In order for an End-User to be able to “login” to a website today, the “sweet spot” must exist where each of these sets of requirements are met.

The negotiation between these three parties usually plays out on the relying party’s login page. It’s this precious real-estate that drives these very distinct market forces.

Anti-competitive market forces

In typical deployments of OIDC, in order for a user to be able to “login” to a relying party or service they’re trying to access online, the relying party must be in direct contact with the Identity Provider (IdP). This is what’s come to be known as the IdP tracking problem. It’s the IdP that’s responsible for performing end-user authentication and issuing end-user identities to relying parties, not the end-users themselves. Over time, these natural forces in OIDC have created an environment that tends to favour the emergence and continued growth of a small number of very large IdPs. These IdPs wield a great deal of power, as they have become a critical dependency and intermediary for many kinds of digital interactions that require identity.

This environment prevents competition and diversity amongst IdPs in exchange for a convenience-driven technology framework where user data is controlled and managed in a few central locations. The market conditions have made it incredibly difficult for new IdPs to break into the market. For example, when Apple unveiled their “Sign in with Apple” service, they used their position as a proprietary service provider to mandate their inclusion as a third party sign in option for any app or service that was supporting federated login on Apple devices. This effectively guaranteed adoption of their OpenID-based solution, allowing them to easily capture a portion of the precious real-estate that is the login screen of thousands of modern web apps today. This method of capturing the market is indicative of a larger challenge wherein the environment of OIDC has made it difficult for newer and smaller players in the IdP ecosystem to participate with existing vendors on an equal playing field.

Identity as a secondary concern has primary consequences

Another key issue in the current landscape is that for nearly all modern IdPs, being an identity provider is often a secondary function to their primary line of business. Though they have come to wear many different hats, many of the key IdPs’ primary business function is offering some service to end-users (e.g. Facebook, Twitter, Google, etc.). Their role as IdPs is something that emerged over time, and with it has surfaced a whole new set of responsibilities whose impact we are only just beginning to grapple with.

Due to this unequal relationship, developers and businesses who want to integrate identity in their applications are forced to choose those IdPs which contain user data for their target demographics, instead of offering options for IdP selection based on real metrics around responsible and privacy-preserving identity practices for end-users.

This cycle perpetuates the dominance of a few major IdPs and likewise forces users to keep choosing from the same set of options or risk losing access to all of their online accounts. In addition, many of these IdPs have leveraged their role as central intermediaries to increase surveillance and user behavior tracking, not just across their proprietary services, but across a user’s entire web experience. The net result of this architecture on modern systems is that IdPs have become a locus for centralized data storage and processing.

The privacy implications associated with the reliance on a central intermediary who can delete, control, or expose user data at any time have proven to be no small matter. New regulations such as GDPR and CCPA have brought user privacy to the forefront and have spurred lots of public discourse and pressure for companies to manage their data processing policies against more robust standards. The regulatory and business environment that is forming around GDPR and CCPA is pushing the market to consider better solutions that may involve decentralizing the mode of operation or separating the responsibilities of an IdP.

Identity Provider lock-in

Lastly, in today’s landscape there is an inseparable coupling between an End-User and the IdP they use. This effectively means that, in order to transfer from say “Sign In With Google” to “Sign In With Twitter,” a user often has to start over and build their identity from scratch. This is due to the fact that users are effectively borrowing or renting their identities from their IdPs, and hence have little to no control in exercising that identity how they see fit. This model creates a pattern that unnecessarily ties a user to the application and data ecosystem of their IdP and means they must keep an active account with the provider to keep using their identity. If a user can’t access their account with an IdP, say by losing access to their Twitter profile, they can no longer log in to any of the services where they’re using Twitter as their IdP.

One of the problems with the term Identity Provider itself is that it sets up the assumption that the end-user is being provided with an identity, rather than the identity being theirs or under their control. If end-users have no real choice in selecting their IdP, then they are ultimately subject to the whims of a few very large and powerful companies. This model is not only antithetical to anti-trust policies and legislation, it also prevents data portability between platforms. It’s made it abundantly clear that the paradigm shift on end-user privacy practices needs to start by giving users a baseline level of choice when it comes to their identity.

A nod to an alternative model

Fundamentally, when it comes to identity on the web, users should have choice; choice about which services they employ to facilitate the usage of their digital identities along with being empowered to change these service providers if they so choose.

The irony of OpenID Connect is that the original authors did actually consider these problems, and evidence of this can be found in the original OIDC specification: in chapter 7, entitled “Self Issued OpenID Provider” (SIOP).

Earning its name primarily from the powerful idea that users could somehow be their own identity provider, SIOP was an ambitious attempt at solving a number of different problems at once. It raises some major questions about the future of the protocol, but it stops short of offering an end-to-end solution to these complex problems.

As it stands in the core specification, the SIOP chapter of OIDC was really trying to solve 3 significant, but distinct problems, which are:

Enabling portable/transferable identities between providers

Dealing with different deployment models for OpenID providers

Solving the Nascar Problem

SIOP has recently been of strong interest to those in the decentralized or self-sovereign identity community because it’s been identified as a potential pathway to transitioning existing deployed digital infrastructure towards a more decentralized and user-centric model. As discussion is ongoing at the OpenID Foundation to evolve and reboot the work around SIOP, there are a number of interesting questions raised by this chapter that are worth exploring to their full extent. For starters, SIOP questions some of the fundamental assumptions around the behaviour and deployment of an IdP.

OpenID and OAuth typically use a redirect mechanism to relay a request from a relying party to an IdP. OAuth supports redirecting back to a native app for the end-user, but it assumes that the provider itself always takes the form of an HTTP server, and furthermore it assumes the request is primarily handled server-side. SIOP challenged this assumption by questioning whether the identity provider has to be entirely server-side, or if the provider could instead take the form of a Single-Page Application (SPA), Progressive Web Application (PWA), or even a native application. In creating a precedent for improving upon the IdP model, SIOP was asking fundamental questions such as: who gets to pick the provider? What role does the end-user play in this selection process? Does the provider always need to be an authorization server or is there a more decentralized model available that is resilient from certain modes of compromise?

Although some of these questions remain unanswered or are early in development, the precedent set by SIOP has spurred a number of related developments in and around web identity. Work is ongoing at the OpenID Foundation to flesh out the implications of SIOP in the emerging landscape.

Tech giants capitalize on the conversation

Although OIDC is primarily a web-based identity protocol, it was purposefully designed to be independent of any particular browser feature or API. This separation of concerns has proved incredibly useful in enabling adoption of OIDC outside of web-only environments, but it has greatly limited the ability for browser vendors to facilitate and mediate web-based login events. A number of large technology and browser vendors have picked up on this discrepancy, and are starting to take ownership of the role they play in web-based user interactions.

Notably, a number of new initiatives have been introduced in the last few years to address this gap in user privacy on the web. An example of this can be found in the W3C Technical Architecture Group (TAG), a group tasked with documenting and building consensus around the architecture of the World Wide Web. Ahead of the 2019 W3C TPAC in Japan, Apple proposed an initiative called IsLoggedIn, effectively a way for websites to tell the browser whether the user was logged in or not in a trustworthy way. What they realized is that the behavior of modern web architecture results in users being “logged in by default” to websites they visit, even if they only visit a website once. Essentially as soon as the browser loads a webpage, that page can store data about the user indefinitely on the device, with no clear mechanism for indicating when a user has logged out or wishes to stop sharing their data. They introduced an API that would allow browsers to set the status of user log-ins to limit long term storage of user data. It was a vision that required broad consensus among today’s major web browsers to be successful. Ultimately, the browsers have taken their own approach in trying to mitigate the issue.

In 2019, Google created their Privacy Sandbox initiative to advance user privacy on the web using open and transparent standards. As one of the largest web browsers on the planet, Google Chrome seized the opportunity provided by an increased public focus on user privacy to work on limiting cross-site user tracking and pervasive incentives that encourage surveillance. Fuelled by the Privacy Sandbox initiative, they created a project called WebID to explore how the browser can mediate between different parties in a digital identity transaction. WebID is an early attempt to get in the middle of the interaction that happens between a relying party and an IdP, allowing the browser to facilitate the transaction in a way that provides stronger privacy guarantees for the end-user.

As an overarching effort, it’s in many ways a response to the environment created by CCPA and GDPR where technology vendors like Google are attempting to enforce privacy expectations for end-users while surfing the web. Its goal is to keep protocols like OIDC largely intact while using the browser as a mediator to provide a stronger set of guarantees when it comes to user identities. This may ultimately give end-users more privacy on the web, but it doesn’t exactly solve the problem of users being locked into their IdPs. With the persistent problem of data portability and limited user choices, simply allowing the browser to mediate the interaction is an important piece of the puzzle but does not go far enough on its own.

Going beyond the current state of OpenID Connect

Though it is a critical component of modern web identity, OIDC is not by any means the only solution or protocol to attempt to solve these kinds of problems.

A set of emerging standards from the W3C Credentials Community Group aim to look at identity on the web in a very different way, and, in fact, are designed to consider use cases outside of just consumer identity. One such standard is Decentralized Identifiers (DIDs) which defines a new type of identifier and accompanying data model featuring several novel properties not present in most mainstream identifier schemes in use today. Using DIDs in tandem with technologies like Verifiable Credentials (VCs) creates an infrastructure for a more distributed and decentralized layer for identity on the web, enabling a greater level of user control. VCs were created as the newest in a long line of cryptographically secured data representation formats. Their goal was to provide a standard that improves on its predecessors by accommodating formal data semantics through technologies like JSON-LD and addressing the role of data subjects in managing and controlling data about themselves.

These standards have emerged in large part to address the limitations of federated identity systems such as the one provided by OIDC. In the case of DIDs, the focus has been on creating a more resilient kind of user-controllable identifier. These kinds of identifiers don’t have to be borrowed or rented from an IdP as is the case today, but can instead be directly controlled by the entities they represent via cryptography in a consistent and standard way. When combining these two technologies, VCs and DIDs, we enable verifiable information that has a cryptographic binding to the end-user and can be transferred cross-context while retaining its security and semantics.
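To make that concrete, here is a minimal Verifiable Credential shaped after the W3C data model, with the subject identified by a DID rather than an account at an IdP; the issuer, DID values and proof here are placeholders rather than real cryptographic material:

# A minimal Verifiable Credential following the W3C VC data model; all
# identifiers and the proof are illustrative placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2021-03-15T00:00:00Z",
    "credentialSubject": {
        # The subject is a DID the end-user controls, so the claim stays
        # verifiable even if they rotate keys or switch wallets.
        "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        "degree": {"type": "BachelorDegree", "name": "Bachelor of Science"},
    },
    "proof": {
        "type": "BbsBlsSignature2020",
        "created": "2021-03-15T00:00:00Z",
        "verificationMethod": "did:example:university#key-1",
        "proofValue": "placeholder-signature-value",
    },
}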

As is the case with many emerging technologies, in order to be successful in an existing and complicated market, these new standards should have a cohesive relationship to the present. To that end, there has been a significant push to bridge these emerging technologies with the existing world of OIDC in a way that doesn’t break existing implementations and encourages interoperability.

One prominent example of this is a new extension to OIDC known as OpenID Connect Credential Provider. Current OIDC flows result in the user receiving an identity token which is coupled to the IdP that created it, and can be used to prove the user’s identity within a specific domain. OIDC Credential Provider extends OIDC so that IdPs can issue reusable VCs about the end-user instead of simple identity tokens with limited functionality. It allows end-users to request credentials from an OpenID Provider and manage their own credentials in a digital wallet under their control. By allowing data authorities to be the provider of reusable digital credentials instead of simple identity assertions, this extension effectively turns traditional Identity Providers into Credential Providers.

The credentials provided under this system are cryptographically bound to a public key controlled by the end-user. In addition to public key binding, the credential can instead be bound to a DID, adding a layer of indirection between a user’s identity and the keys they use to control it. In binding to a DID, the subject of the credential is able to maintain ownership of the credential on a longer life cycle due to their ability to manage and rotate keys while maintaining a consistent identifier. This eases the burden on data authorities to re-issue credentials when the subject’s keys change and allows relying parties to verify that the credential is always being validated against the current public key of the end-user. The innovations upon OIDC mark a shift from a model where relying parties request claims from an IdP, to one where they can request claims from specific issuers or according to certain trust frameworks and evaluation metrics appropriate to their use case. This kind of policy-level data management creates a much more predictable and secure way for businesses and people to get the data they need.

OIDC Credential Provider, a new spec at the OpenID Foundation, is challenging the notion that the identity that a user receives has to be an identity entirely bound to its domain. It offers traditional IdPs a way to issue credentials that are portable and can cross domains because the identity/identifier is no longer coupled to the provider as is the case with an identity token. This work serves to further bridge the gap between existing digital identity infrastructure and emerging technologies which are more decentralized and user-centric. It sets the foundation for a deeper shift in how data is managed online, whether it comes in the form of identity claims, authorizations, or other kinds of verifiable data.

Broadening the landscape of digital identity

OIDC, which is primarily used for identity, is built upon OAuth 2.0, whose primary use is authorization and access. If OIDC is about who the End-User is, then OAuth 2.0 is about what you’re allowed to do on behalf of and at the consent of the End-User. OAuth 2.0 was built prior to OIDC, in many ways because authorization allowed people to accomplish quite a bit without the capabilities of a formalized and standardized identity protocol. Eventually, it became obvious that identity is an integral and relatively well-defined cornerstone of web access that needed a simple solution. OIDC emerged as it increasingly became a requirement to know who the end-user (or resource owner) is and for the client to be able to request access to basic claims about them. Together, OIDC and OAuth2.0 create a protocol that combines authentication and authorization. While this allows them to work natively with one another, it’s not always helpful from an infrastructure standpoint to collapse these different functions together.

Efforts like WebID are currently trending towards the reseparation of these concepts that have become married in the current world of OpenID, by developing browser APIs that are specifically geared towards identity. However, without a solution to authorization, it could be argued that many of the goals of the project will remain unsatisfied whenever the relying party requires both authentication and authorization in a given interaction.

As it turns out, these problems are all closely related to each other and require a broad and coordinated approach. As we step into an increasingly digital era where the expectation continues to evolve around what’s possible to do online, the role of identity becomes increasingly complex. Take, for example, sectors such as the financial industry dealing with increased requirements around electronic Know-Your-Customer (KYC) policies. In parallel with the innovation around web identity and the adoption of emerging technologies such as VCs, there has been a growing realization that the evolution of digital identity enables many opportunities that extend far beyond the domain of identity. This is where the power of verifiable data on the web really begins, and with it an expanded scope and structure for how to build digital infrastructure that can support a whole new class of applications.

A new proposed browser API called Credential Handler API (CHAPI) offers a promising solution to browser-mediated interactions that complements the identity-centric technologies of OIDC and WebID. It currently takes the form of a polyfill to allow these capabilities to be used in the browser today. Similar to how SIOP proposes for the user to be able to pick their provider for identity-related credentials, CHAPI allows you to pick your provider, but not just for identity — for any kind of credential. In that sense, OIDC and CHAPI are solving slightly different problems:

OIDC is primarily about requesting authentication of an End-User and receiving some limited identity claims about them, and in certain circumstances also accessing protected resources on their behalf.

CHAPI is about requesting credentials that may or may not describe the End-User; the credentials might not even be related to their identity directly, and can instead be used for other related functions like granting authorization, access, etc.

While OIDC offers a simple protocol based upon URL redirects, CHAPI pushes for a world of deeper integration with the browser that affords several useability benefits. Unlike traditional implementations of OIDC, CHAPI does not start with the assumption that an identity is fixed to the provider. Instead, the end-user gets to register their preferred providers in the browser and then select from this list when an interaction with their provider is required. Since CHAPI allows for exchanging credentials that may or may not be related to the end-user, it allows for a much broader set of interactions than what’s provided by today’s identity protocols. In theory, these can work together rather than as alternative options. You could, for instance, treat CHAPI browser APIs as a client to contact the end-user’s OpenID Provider and then use CHAPI to exchange and present additional credentials that may be under the end-user’s control.

CHAPI is very oriented towards the “credential” abstraction, which is essentially a fixed set of claims protected in a cryptographic envelope and often intended to be long lived. A useful insight from the development of OIDC is that it may be helpful to separate, at least logically, the presentation of identity-related information from the presentation of other kinds of information. To extend this idea, authenticating or presenting a credential is different from authenticating that you’re the subject of a credential. You may choose to do these things in succession, but they are not inherently related.

The reason this is important has to do with privacy, data hygiene, and best security practices. In order to allow users to both exercise their identity on the web and manage all of their credentials in one place, we should be creating systems that default to requesting specific information about an end-user as needed, not necessarily requesting credentials when what’s needed is an authentic identity and vice versa.

Adopting this kind of policy would allow configurations where the identifier for the credential subject would not be assumed to be the identifier used to identify the subject with the relying party. Using this capability in combination with approaches to selective disclosure like VCs with JSON-LD BBS+ signatures will ensure not only a coherent system that can separate identity and access, but also one that respects user privacy and provides a bridge between existing identity management infrastructure and emerging technologies.

An emergent user experience

Using these technologies in tandem also helps to bridge the divide between native and web applications when it comes to managing identity across different modalities. Although the two often get conflated, a digital wallet for holding user credentials is not necessarily an application. It’s a service to help users manage their credentials, both identity-related and otherwise, and should be accessible wherever an end-user needs to access it. In truth, native apps and web apps are each good at different things and come with their own unique set of trade-offs and implementation challenges. Looking at this emerging paradigm where identity is managed in a coherent way across different types of digital infrastructure, “web wallets” and “native wallets” are not necessarily mutually exclusive — emerging technologies can leverage redirects to allow the use of both.

The revolution around digital identity offers a new paradigm that places users in a position of greater control around their digital interactions, giving them the tools to exercise agency over their identity and their data online. Modern legislation focused on privacy, portability, security and accessible user experience is also creating an impetus for the consolidation of legacy practices. The opportunity is to leverage this directional shift to create a network effect across the digital ecosystem, making it easier for relying parties to build secure web experiences and unlocking entirely new value opportunities for claims providers and data authorities.

Users shouldn’t have to manage the complexity left behind by today’s outdated identity systems, and they shouldn’t be collateral damage when it comes to designing convenient apps and services. Without careful coordination, much of the newer innovation could lead to even more fragmentation in the digital landscape. However, as we can see here, many of these technology efforts and standards are solving similar or complementary problems.

Ultimately, a successful reinvention of identity on the web should make privacy and security easy; easy for end-users to understand, easy for relying parties to support, and easy for providers to implement. That means building bridges across technologies to support not only today’s internet users, but enabling access to an entirely new set of stakeholders across the globe who will finally have a seat at the table, such as those without access to the internet or readily available web infrastructure. As these technologies develop, we should continue to push for consolidation and simplicity to strike the elusive balance between security and convenience across the ecosystem for everyday users.

Where to from here?

Solving the challenges necessary to realize the future state of identity on the web will take a collective effort of vendor collaboration, standards contributions, practical implementations and education. In order to create adoption of this technology at scale, we should consider the following as concrete next steps we can all take to bring this vision to life:

Continue to drive development of bridging technologies that integrate well with existing identity solutions and provide a path to decentralized and portable identity

E.g. formalization of OIDC Credential Provider to extend existing IdPs

Empower users to exercise autonomy and sovereignty in selecting their service provider, as well as the ability to change providers and manage services over time

E.g. selection mechanisms introduced by SIOP and WebID

Adopt a holistic approach to building solutions that recognizes the role of browser-mediated interactions in preserving user privacy

E.g. newer browser developments such as CHAPI and WebID

Build solutions that make as few assumptions as necessary in order to support different types of deployment environments that show up in real-world use cases

E.g. evolution of SIOP as well as supporting web and native wallets

Ensure that the development of decentralized digital identity supports the variety and diversity of data that may be managed by users in the future, whether that data be identity-related or otherwise

Taking these steps will help to ensure that the identity technologies we build to support the digital infrastructure of tomorrow will avoid perpetuating the inequalities and accessibility barriers we face today. By doing our part to collaborate and contribute to solutions that work for everybody, building bridges rather than building siloes, we can create a paradigm shift that has longevity and resilience far into the future. We hope you join us.

The State of Identity on the Web was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 14. March 2021

Jon Udell

The Modesto Pile


Reading a collection of John McPhee stories, I found several that were new to me. The Duty of Care, published in the New Yorker in 1993, is about tires, and how we do or don’t properly recycle them. One form of reuse we’ve mostly abandoned is retreads. McPhee writes:

A retread is in no way inferior to a new tire, but new tires are affordable, and the retreaded passenger tire has descended to the status of a clip-on tie.

My dad wore clip-on ties. He also used retreaded tires, and I can remember visiting a shop on several occasions to have the procedure done.

Recently I asked a friend: “Whatever happened to retreaded tires?” We weren’t sure, but figured they’d gone away for good reasons: safety, reliability. But maybe not. TireRecappers and TreadWright don’t buy those arguments. Maybe retreads were always a viable option for our passenger fleet, as they still are for our truck fleet. And maybe, with better tech, they’re better than they used to be.

In Duty of Care, McPhee tells the story of the Modesto pile. It was, at the time, the world’s largest pile of scrap tires containing, by his estimate, 34 million tires.

You don’t have to stare long at that pile before the thought occurs to you that those tires were once driven upon by the Friends of the Earth. They are Environmental Defense Fund tires, Rainforest Action Network tires, Wilderness Society tires. They are California Natural Resources Federation tires, Save San Francisco Bay Association tires, Citizens for a Better Environment tires. They are Greenpeace tires, Sierra Club tires, Earth Island Institute tires. They are Earth First! tires!

(I love a good John McPhee list.)

The world’s largest pile of tires left a surprisingly small online footprint, but you can find the LA Times’ Massive Pile of Tires Fuels Controversial Energy Plan which describes the power plant — 41 million dollars, 14 megawatts, “the first of its kind in the United States and the largest in the world” — that McPhee visited when researching his story. I found it on Google Maps by following McPhee’s directions.

If you were to abandon your car three miles from the San Joaquin County line and make your way on foot southwest one mile…

You can see the power plant. There’s no evidence of tires, or trucks moving them, so maybe the plant, having consumed the pile, is retired. Fortunately it never caught fire; that would’ve made a hell of a mess.

According to Wikipedia, we’ve reduced our inventory of stockpiled tires by an order of magnitude from a peak of a billion around the time McPhee wrote that article. We burn most of them for energy, and turn some into ground rubber for such uses as paving and flooring. So that’s progress. But I can’t help but wonder about the tire equivalent of Amory Lovins’ negawatt: “A watt of energy that you have not used through energy conservation or the use of energy-efficient products.”

Could retreaded passenger tires be an important source of negawatts? Do we reject the idea just because they’re as unfashionable as clip-on ties? I’m no expert on the subject, obviously, but I suspect these things might be true.


Simon Willison

Weeknotes: tableau-to-sqlite, django-sql-dashboard


This week I started a limited production run of my new backend for Vaccinate CA calling, built a tableau-to-sqlite import tool and started working on a subset of Datasette for PostgreSQL and Django called django-sql-dashboard.

Vaccinate CA backend progress

My key project at the moment is building out a new Django-powered backend for the Vaccinate CA call reporting application - where real human beings constantly call pharmacies and medical sites around California to build a comprehensive guide to where the Covid vaccine is available.

As of this week, the new backend is running for a subset of the overall call volume. It's exciting! It's also a reminder that the single hardest piece of logic in any crowdsourcing-style application is the logic that gives a human being their next task. I'm continuing to evolve that logic, which is somewhat harder when the system I'm modifying is actively being used.

tableau-to-sqlite

The Vaccinate CA project is constantly on the lookout for new sources of data that might indicate locations that have the vaccine. Some of this data is locked up in Tableau dashboards, which are notoriously tricky to scrape.

When faced with problems like this, I frequently turn to GitHub code search: I'll find a unique looking token in the data I'm trying to wrangle and run searches to see if anyone on GitHub has written code to handle it.

In doing so, I came across Tableau Scraper - an open source Python library by Bertrand Martel which does a fantastic job of turning a Tableau dashboard into a Pandas DataFrame.

Writing a Pandas DataFrame to a SQLite database is a one-liner: df.to_sql("table-name", sqlite3.connect(db_path)). So I spun up a quick command-line wrapper around the TableauScraper class called tableau-to-sqlite which lets you do the following:

% tableau-to-sqlite tableau.db https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations

Considering how much valuable data is trapped in government Tableau dashboards I'm really excited to point this tool at more sources. The README includes tips on combining this with sqlite-utils to get a CSV or JSON export which can then be tracked using Git scraping.
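The core of that wrapper is small enough to sketch here; this is an approximation rather than the tool's actual code, and it assumes the TableauScraper API as documented in its README (loads(), getWorkbook(), and a pandas DataFrame per worksheet):

import sqlite3
from tableauscraper import TableauScraper

def tableau_to_sqlite(db_path, url):
    # Scrape every worksheet in the dashboard and write each one to its own table.
    ts = TableauScraper()
    ts.loads(url)
    workbook = ts.getWorkbook()
    conn = sqlite3.connect(db_path)
    for worksheet in workbook.worksheets:
        # Each worksheet exposes a pandas DataFrame via .data; to_sql writes it out.
        worksheet.data.to_sql(worksheet.name, conn, if_exists="replace")
    conn.close()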

django-sql-dashboard

I'm continuing to ponder the idea of getting Datasette to talk to PostgreSQL in addition to SQLite, but in the meantime I have a growing Django application that runs against PostgreSQL and a desire to build some quick dashboards against it.

One of Datasette's key features is the ability to bookmark a read-only SQL query and share that link with other people. It's SQL injection attacks repurposed as a feature, and it's proved to be incredibly useful over the past few years.

Here's an example from earlier this week where I wanted to see how many GitHub issues I had opened and then closed within 60 seconds. The answer is 17!

django-sql-dashboard is my highly experimental exploration of what that idea looks like against a PostgreSQL database, wrapped inside a Django application.

The key idea is to support executing read-only PostgreSQL statements with a strict timelimit (set using PostgreSQL's statement_timeout setting, described here). Users can execute SQL directly, bookmark and share queries and save them to a database table in order to construct persistent dashboards.
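The core trick can be sketched in a few lines (this is an illustration of the approach, not the project's actual code):

from django.db import connection, transaction

def run_dashboard_query(sql, timeout_ms=100):
    # Sketch only: apply a short PostgreSQL statement_timeout inside a transaction
    # so the database itself cancels any query that runs longer than timeout_ms.
    with transaction.atomic():
        with connection.cursor() as cursor:
            cursor.execute("SET LOCAL statement_timeout = %d" % int(timeout_ms))
            cursor.execute(sql)
            columns = [col[0] for col in cursor.description]
            return [dict(zip(columns, row)) for row in cursor.fetchall()]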

It's very early days for the project yet, and I'm still not 100% convinced it's a good idea, but early signs are very promising.

A fun feature is that it lets you have more than one SQL query on the same page. Here's what it looks like running against my blog's database, showing a count query and the months in which I wrote the most blog entries:

Releases this week

hacker-news-to-sqlite: 0.4 - (5 releases total) - 2021-03-13: Create a SQLite database containing data pulled from Hacker News

django-sql-dashboard: 0.1a3 - (3 releases total) - 2021-03-13: Django app for building dashboards using raw SQL queries

datasette-ripgrep: 0.7 - (11 releases total) - 2021-03-11: Web interface for searching your code using ripgrep, built as a Datasette plugin

tableau-to-sqlite: 0.2 - (3 releases total) - 2021-03-11: Fetch data from Tableau into a SQLite database

TIL this week

Pretty-printing all read-only JSON in the Django admin

Flattening nested JSON objects with jq

Converting no-decimal-point latitudes and longitudes using jq

How to almost get facet counts in the Django admin

Querying for GitHub issues open for less than 60 seconds

Querying for items stored in UTC that were created on a Thursday in PST

Friday, 12. March 2021

@_Nat Zone

A recommended hidden gem: Damase's Quartet for Flute, Oboe, Clarinet and Piano

Do you know the 20th-century French composer Damase's "… The post A recommended hidden gem: Damase's Quartet for Flute, Oboe, Clarinet and Piano first appeared on @_Nat Zone.

Do you know the Quartet for Flute, Oboe, Clarinet and Piano by the 20th-century French composer Damase? It is an elegant, quintessentially French piece. The link below is to the second movement; and not just in this work, Damase's second movements are often the most delightful.

Damase was a pupil of Cortot, and was also the first person to record the complete Nocturnes and Barcarolles of Fauré.

Because he composed in a neoclassical style he was not highly rated in the 20th century, but his reputation has been rising recently and recordings are becoming easier to find. Thoroughly French. If you like Poulenc, I think you will like Damase too.

https://music.youtube.com/watch?v=mKkA9mFBlRw&feature=share

The post A recommended hidden gem: Damase's Quartet for Flute, Oboe, Clarinet and Piano first appeared on @_Nat Zone.

Doc Searls Weblog

Trend of the Day: NFT

NFTs—Non-Fungible Tokens—are hot shit. Wikipedia explains (at that link), A non-fungible token (NFT) is a special type of cryptographic token that represents something unique. Unlike cryptocurrencies such as bitcoin and many network or utility tokens,[a] NFTs are not mutually interchangeable and are thus not fungible in nature[1][2] Non-fungible tokens are used

NFTs—Non-Fungible Tokens—are hot shit. Wikipedia explains (at that link),

A non-fungible token (NFT) is a special type of cryptographic token that represents something unique. Unlike cryptocurrencies such as bitcoin and many network or utility tokens,[a] NFTs are not mutually interchangeable and are thus not fungible in nature.[1][2]

Non-fungible tokens are used to create verifiable artificial scarcity in the digital domain, as well as digital ownership, and the possibility of asset interoperability across multiple platforms.[3] Although an artist can sell one or more NFTs representing a work, the artist can still retain the copyright to the work represented by the NFT.[4] NFTs are used in several specific applications that require unique digital items like crypto art, digital collectibles, and online gaming.

Art was an early use case for NFTs, and blockchain technology in general, because of the purported ability of NFTs to provide proof of authenticity and ownership of digital art, a medium that was designed for ease of mass reproduction, and unauthorized distribution through the Internet.[5]

NFTs can also be used to represent in-game assets which are controlled by the user instead of the game developer.[6] NFTs allow assets to be traded on third-party marketplaces without permission from the game developer.

An NPR story the other day begins,

The artist Grimes recently sold a bunch of NFTs for nearly $6 million. An NFT of LeBron James making a historic dunk for the Lakers garnered more than $200,000. The band Kings of Leon is releasing its new album in the form of an NFT.

At the auction house Christie’s, bids on an NFT by the artist Beeple are already reaching into the millions.

And on Friday, Twitter CEO Jack Dorsey listed his first-ever tweet as an NFT.

Safe to say, what started as an Internet hobby among a certain subset of tech and finance nerds has catapulted to the mainstream.

I remember well exactly when I decided not to buy bitcoin. It was on July 26, 2009, after I finished driving back home to Arlington, Mass, after dropping off my kid at summer camp in Vermont. I had heard a story about it on the radio that convinced me that now was the time to put $100 into something new that would surely become Something Big.

But trying to figure out how to do it took too much trouble, and my office in the attic was too hot, so I didn’t. Also, at the time, the price was $0. Easy to rationalize not buying a non-something that’s worth nothing.

So let’s say I made the move when it hit $1, which I think was in 2011. That would have been $100 for 100 bitcoin, which at this minute are worth $56101.85 apiece. A hundred of those are now $5,610,185. And what if I had paid the 1¢ or less a bitcoin would have been in July, 2009? You move the decimal point while I shake my head.

So now we have NFTs. What do you think I should do? Or anybody? Serious question.


The Dingle Group

eIDAS and Self-Sovereign Identity

On March 9, The Vienna Digital Identity Meetup* hosted presentations from Xavier Vila, Product Manager for Validated ID, and Dr. Ignacio Alamillo, Director of Astrea, on eIDAS and Self Sovereign Identity. The presentations covered the technical, legal and business dimensions of bridging between eIDAS and SSI concepts and increasing the value and usability of digital identity in the European model.

On March 9, The Vienna Digital Identity Meetup* hosted presentations from Xavier Vila, Product Manager for Validated ID and Dr. Ignacio Alamillo, Director of Astrea on eIDAS and Self Sovereign Identity. The presentations covered the technical, legal and business dimensions of bridging between eIDAS and SSI concepts and increasing the value and usability of digital identity in the European model.

A key component of the Digital Europe architecture is the existence of a trusted and secure digital identity infrastructure. This journey was started in 2014 with the implementation of the eIDAS Regulation. As has been discussed in previous Vienna Digital Identity Meetups, a high assurance digital identity is the key piece connecting the physical and digital worlds, and this piece was not included in the initial creation of our digital world.

Why then is eIDAS v1 not seen as a success? There are many reasons, from parts of the regulation that constrained its use to the public sphere only, to the lack of full coverage across the EU. Likely the key missing piece was that the cultural climate was not yet ripe and the state of digital identity was simply not ready: too many technical problems were still unsolved. Without these elements, the realized state of eIDAS should not be unexpected. All that said, eIDAS v1 laid very important groundwork and created an environment to gather the learnings that should allow eIDAS v2 to reach the hoped-for levels of success and adoption.

Validated ID has been developing an eIDAS-ESSIF bridge that brings eIDAS trust seals to Verifiable Credentials. As a Qualified Trust Service Provider under eIDAS, Validated ID has been offering trust services to customers across Europe since 2012. Xavier presented this current work and provided a detailed explanation of how the eIDAS-SSI bridge applies qualified electronic seals to Verifiable Credentials.

Dr. Alamillo further clarified the importance of the eIDAS-SSI bridge in his presentation on SSI in the eIDAS Regulation. When a Qualified Electronic Seal is applied to a Verifiable Credential, the combined document becomes a legal document with cross-border legal value within the EU. Currently, in an identity context, eIDAS is the European-wide identity metasystem, providing the framework for the 27 nation states of the EU to operate as a very large federated identity network. However, the point was raised that in the current ongoing revision of the eIDAS regulation, it is possible that the handling of electronic identification may be changed from a 'connecting of services' to a new trust service.


To listen to a recording of the event please check out the link: https://vimeo.com/522501200

Time markers:

0:00:00 - Introduction


0:04:00 - Xavier Vila, Validated ID


0:34:00 - Questions


0:42:00 - Dr. Ignacio Alamillo, Astrea


1:18:00 - Questions


1:28:00 - Wrap-up & Upcoming Events



Resources

SSI - eIDAS Bridge GitHub Repo - https://github.com/validatedid/ssi-eidas-bridge/

Xavier Vila’s Presentation Deck: Vale-SSI-EIDAS Bridge.pdf
Nacho Alamillo's Presentation Deck: Alamillo-SSI-EIDAS.pdf

And as a reminder, we continue to have online only events.

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology audiences on the new opportunities that arise with a high assurance digital identity, created by the reduction of risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Jon Udell

A wider view

For 20 bucks or less, nowadays, you can buy an extra-wide convex mirror that clips onto your car’s existing rear-view mirror. We just tried one for the first time, and I’m pretty sure it’s a keeper. These gadgets claim to eliminate blind spots, and this one absolutely does. Driving down 101, I counted three seconds … Continue reading A wider view

For 20 bucks or less, nowadays, you can buy an extra-wide convex mirror that clips onto your car’s existing rear-view mirror. We just tried one for the first time, and I’m pretty sure it’s a keeper. These gadgets claim to eliminate blind spots, and this one absolutely does. Driving down 101, I counted three seconds as a car passed through my driver’s-side blind spot. That’s a long time when you’re going 70 miles per hour; during that whole time I could see that passing car in the extended mirror.

Precious few gadgets spark joy for me. This one had me at hello. Not having to turn your head, avoiding the risk of not turning your head — these are huge benefits, quite possibly life-savers. For 20 bucks!

It got even better. As darkness fell, we wondered how it would handle approaching headlights. It’s not adjustable like the stock mirror, but that turns out not to be a problem. The mirror dims those headlights so they’re easy to look at. The same lights in the side mirrors are blinding by comparison.

I’ve been driving more than 40 years. This expanded view could have been made available at any point along the way. There’s nothing electronic or digital. It’s just a better idea that combines existing ingredients in a new way. That pretty much sums up my own approach to product development.

Finally, there’s the metaphor. Seeing around corners is a superpower I’ve always wanted. I used to love taking photos with the fisheye lens on my dad’s 35mm Exacta, now I love making panoramic views with my phone. I hate being blindsided, on the road and in life, by things I can’t see coming. I hate narrow-mindedness, and always reach for a wider view.

I’ll never overcome all my blind spots but it’s nice to chip away at them. After today, there will be several fewer to contend with.

File:A wider view at sunset – geograph.org.uk – 593022.jpg – Wikimedia Commons

Thursday, 11. March 2021

Doc Searls Weblog

Why is the “un-carrier” falling into the hellhole of tracking-based advertising?

For a few years now, T-Mobile has been branding itself the “un-carrier,” saying it’s “synonymous with 100% customer commitment.” Credit where due: we switched from AT&T a few years ago because T-Mobile, alone among U.S. carriers at the time, gave customers a nice cheap unlimited data plan for traveling outside the country. But now comes this […]

For a few years now, T-Mobile has been branding itself the “un-carrier,” saying it’s “synonymous with 100% customer commitment.” Credit where due: we switched from AT&T a few years ago because T-Mobile, alone among U.S. carriers at the time, gave customers a nice cheap unlimited data plan for traveling outside the country.

But now comes this story in the Wall Street Journal:

T-Mobile to Step Up Ad Targeting of Cellphone Customers
Wireless carrier tells subscribers it could share their masked browsing, app data and online activity with advertisers unless they opt out

Talk about jumping on a bandwagon sinking in quicksand. Lawmakers in Europe (GDPR), California (CCPA) and elsewhere have been doing their best to make this kind of thing illegal, or at least difficult. Worse, it should now be clear that it not only sucks at its purpose, but customers hate it. A lot.

I just counted, and all 94 responses in the “conversation” under that piece are disapproving of this move by T-Mobile. I just copied them over and compressed out some extraneous stuff. Here ya go:

“Terrible decision by T-Mobile. Nobody ever says “I want more targeted advertising,” unless they are in the ad business.  Time to shop for a new carrier – it’s not like their service was stellar.”

“A disappointing development for a carrier which made its name by shaking up the big carriers with their overpriced plans.”

“Just an unbelievable break in trust!”

“Here’s an idea for you, Verizon. Automatically opt people into accepting a break on their phone bill in exchange for the money you make selling their data.”

“You want to make money on selling customer’s private information? Fine – but in turn, don’t charge your customers for generating that profitable information.”

“Data revenue sharing is coming. If you use my data, you will have to share the revenue with me.”

“Another reason to never switch to T-Mobile.”

“Kudos to WSJ for providing links on how to opt-out!”

“Just another disappointment from T-Mobile.  I guess I shouldn’t be surprised.”

“We were supposed to be controlled by the government.”

“How crazy is it that we are having data shared for service we  PAY for? You might expect it on services that we don’t, as a kind of ‘exchange.'”

“WSJ just earned their subscription fee. Wouldn’t have known about this, or taken action without this story. Toggled it off on my phone, and then sent everyone I know on T Mobile the details on how to protect themselves.”

“Just finished an Online Chat with their customer service dept….’Rest assured, your data is safe with T-Mobile’…no, no it isn’t.  They may drop me as a customer since I sent links to the CCPA, the recent VA privacy law and a link to this article.  And just  to make sure the agent could read it – I sent the highlights too.  the response – ‘Your data is safe….’  Clueless, absolutely clueless.”

“As soon as I heard this, I went in and turned off tracking.  Also, when I get advertising that is clearly targeted (sometimes pretty easy to tell) I make a mental note to never buy or use the product or service advertised if I can avoid it.  Do others think the same?”

“Come on Congress, pass a law requiring any business or non-profit that wants to share your data with others to require it’s customers to ‘opt-in’. We should(n’t) have to ‘opt-out’ to prevent them from doing so, it should be the other way around. Only exception is them sharing data with the government and that there should be laws that limit what can be shared with the government and under what circumstances.”

“There must be massive amounts of money to be made in tracking what people do for targeted ads.  I had someone working for a national company tell me I would be shocked at what is known about me and what I do online.  My 85 year old dad refuses a smartphone and pays cash for everything he does short of things like utilities.  He still sends in a check each month to them, refuses any online transactions.  He is their least favorite kind of person but, he at least has some degree of privacy left.”

“Would you find interest-based ads on your phone helpful or intrusive?
Neither–they’re destructive. They limit the breadth of ideas concerning things I might be interested in seeing or buying. I generally proactively look when I want or need something, and so advertising has little impact on me. However, an occasional random ad shows up that broadens my interest–that goes away with the noise of targeted ads overlain and drowning it out. If T-Mobile were truly interested, it would make its program an opt-in program and tout it so those who might be interested could make the choice.”

“Humans evolved from stone age to modern civilization. These tech companies will strip all our clothes.”

“They just can’t help themselves. They know it’s wrong, they know people will hate and distrust them for it, but the lure of doing evil is too strong for such weak-minded business executives to resist the siren call of screwing over their customers for a buck. Which circle of hell will they be joining Zuckerberg in?”

“Big brother lurks behind every corner.”

“What privacy policy update was this?  Don’t they always preface their privacy updates with the statement: YOUR PRIVACY IS IMPORTANT TO US(?) When did T-Mobile tell its customers our privacy is no longer important to them?  And that in fact we are now going to sell all we know about you to the highest bidder. Seems they need at least to get informed consent to reverse this policy and to demonstrate that they gave notice that was actually received and reviewed and  understood by customers….otherwise, isn’t this wiretapping by a third party…a crime?  Also isn’t using electronic means to monitor someone in an environment where they have the reasonable expectation of privacy a tort. Why don’t they just have a dual rate structure?   The more expensive traditional privacy plan and a cheaper exploitation plan? Then at least they can demonstrate they have given you consideration for the surrender of your right to privacy.”

“A very useful article! I was able to log in and remove my default to receive such advertisements “relevant” to me.  That said all the regulatory bodies in the US are often headed by industry personnel who are their to protect companies, not consumers. US is the best place for any company to operate freely with regulatory burden. T-mobile follows the European standards in EU, but in the US there are no such restraints.”

“It’s far beyond time for the Congress to pass a sweeping privacy bill that outlaws collection and sale of personal information on citizens without their consent.”

“Appreciate the heads-up  and the guidance on how to opt out. Took 30 seconds!”

“Friends, you may not be aware that almost all of the apps on your iPhone track your location, which the apps sell to other companies, and someday the government. If you want to stop the apps from tracking your locations, this is what to do. In Settings, choose Privacy.   Then choose Location Services.  There you will see a list of your apps that track your location.  All of the time. I have switched nearly all of my apps to ‘Never’ track.  A few apps, mostly relating to travel, I have set to “While using.”  For instance, I have set Google Maps to ‘While using.’ That is how to take control of your information.”

“Thank you for this important info! I use T-Mobile and like them, but hadn’t heard of this latest privacy outrage. I’ve opted out.”

“T-Mobile is following Facebook’s playbook. Apple profits by selling devices and Operating Sysyems. Facebook & T-Mobile profit by selling, ………………… YOU!”

“With this move, at first by one then all carriers, I will really start to limit my small screen time.”

“As a 18 year customer of T-Mobile, I would have preferred an email from T-Mobile  about this, rather than having read this by chance today.”

“It should be Opt-In, not Opt-out. Forcing an opt out is a bit slimy in my books. Also, you know they’ll just end up dropping that option eventually and you’ll be stuck as opted in. Even if you opted in, your phone plan should be free or heavily subsidized since they are making dough off your usage.”

“No one automatically agrees to tracking of one’s life, via the GPS on their cell phone. Time to switch carriers.”

“It’s outrageous that customers who pay exorbitant fees for the devices are also exploited with advertising campaigns. I use ad blockers and a VPN and set cookies to clear when the browser is closed. When Apple releases the software to block the ad identification number of my device from being shared with the scum, I’ll be the first to use that, too.”

“It was a pain to opt out of this on T-Mobile. NOT COOL.”

“I just made the decision to “opt out” of choosing TMobile as my new phone service provider.  So very much appreciated.”

“Well, T-Mobile, you just lost a potential subscriber.  And why not reverse this and make it opt-in instead of opt-out?  I know, because too many people are lazy and will never opt-out, selling their souls to advertisers. And for those of you who decide to opt-out, congratulations.  You’re part of the vast minority who actually pay attention to these issues.”

“I have been seriously considering making the switch from Verizon to T-Mobile. The cavalier attitude that T-Mobile has for customers data privacy has caused me to put this on hold. You have to be tone deaf as a company to think that this is a good idea in the market place today.”

“Been with T-Mo for over 20 years because they’re so much better for international travel than the others. I don’t plan on changing to another carrier but I’ll opt out of this, thanks.”

“So now we know why T-Mobile is so much cheaper.”

“I have never heard anyone say that they want more ads. How about I pay too much for your services already and I don’t want ANY ads. We need a European style GDP(R) with real teeth in the USA and we need it now!”

“So these dummies are going to waste their money on ads when their service Suckky Ducky!   Sorry, but it’s a wasteland of T-Mobile, “No Service” Bars on your phone with these guys.  It’s the worst service, period. Spend your money on your service, the customers will follow.  Why is that so hard for these dummies to understand?”

“If they do this I will go elsewhere.”

“When will these companies learn that their ads are an annoyance.  I do not want or appreciate their ads.  I hate the words ‘We use our data to customize the ads you receive.'”

“Imagine if those companies had put that much effort and money into actually improving their service. Nah, that’s ridiculous.”

“Thank you info on how to opt out. I just did so. It’s up to me to decide what advertising is relevant for me, not some giant corporation that thinks they own me.”

“who is the customer out there like, Yeah I want them to advertise to me! I love it!’? Hard to believe anyone would ask for this.”

“I believe using a VPN would pretty much halt all of this nonsense, especially if the carrier doesn’t want to cooperate.”

“I’m a TMobile customer, and to be honest, I really don’t care about advertising–as long as they don’t give marketers my phone number.  Now that would be a deal breaker.”

“What about iPhone users on T-Mobile?  Apple’s move to remove third party cookies is creating this incentive for carriers to fill the void. It’s time for a national privacy bill.”

“We need digital privacy laws !!!   Sad that Europe and other countries are far ahead of us here.”

“Pure arrogance on the part of the carrier. What are they thinking at a time when people are increasingly concerned about privacy? I’m glad that I’m not currently a T-Mobile customer and this seals the deal for me for the future.”

“AT&T won’t actually let you opt out fully. Requests to block third party analytics trigger pop up messages that state ‘Our system doesn’t seem to be cooperating. Sorry for any inconvenience. Please try again later’.”

“One of the more salient articles I’ve read anywhere recently. Google I understand, we get free email and other stuff, and it’s a business. But I already pay a couple hundred a month to my phone provider. And now they think it’s a good idea to barrage me and my family? What about underage kids getting ads – that must be legal only because the right politicians got paid off.”

“Oh yeah, I bet customers have been begging for more “targeted advertising”.  It would be nice if a change in privacy policy also allowed you to void your 12 month agreement with these guys.”

“Thank you for showing us how to opt out. If these companies want to sell my data, then they should pay me part of the proceeds. Otherwise, I opt out.”

Think T-Mobile is listening?

If not, they’re just a typical carrier with 0% customer commitment.


The eventual normal

One year ago exactly (at this minute), my wife and I were somewhere over Nebraska, headed from Newark to Santa Barbara by way of Denver, on the last flight we’ve ever taken. Prior to that we had put about four million miles on United alone, flying almost constantly somewhere, mostly on business. The map above […]

One year ago exactly (at this minute), my wife and I were somewhere over Nebraska, headed from Newark to Santa Barbara by way of Denver, on the last flight we’ve ever taken. Prior to that we had put about four million miles on United alone, flying almost constantly somewhere, mostly on business. The map above traces what my pocket GPS recorded on various trips (and far from all of them) by land, sea and air since 2007. This life began for me in 1990 and for my wife long before that. Post-Covid, none of this will ever be the same. For anybody.

We also haven’t seen most of our kids or grandkids in more than a year. Same goes for countless friends, business associates and fellow (no longer) travelers on other routes of life.

The old normal is over. We don’t know what the new normal will be, exactly; but it’s clear that business travel as we knew it is gone for years to come, if not forever.

I also sense a generational hand-off. Young people always take over from their elders at some point, but this handoff is from the physical to the digital. Young people are digital natives. Older folk are at best familiar with the digital world: adept in many cases, but not born into it. Being born into the digital world is very different. And still very new.

Though my wife and I have been stuck in Southern California for a year now, we have been living mostly in the digital world, working hard on that handoff, trying to deposit all we can of our long experience and hard-won wisdom on the conveyor belt of work we share across generations.

There will be a new normal, eventually. It will be a normal like the one we had in the 20th Century, which started with WWI and ended with Covid. This was a normal where the cultural center was held by newspapers and broadcasting, and every adult knew how to drive.

Now we’re in the 21st Century, and it’s something of a whiteboard. We still have the old media and speak the same languages, but Covid pushed a reset button, and a lot of the old norms are open to question, if not out the window completely.

Why should the digital young accept the analog-born status quos of business, politics, religion, education, transportation or anything? The easy answer is because the flywheels of those things are still spinning. The hard answers start with questions about how we can do all that stuff better. For sure all the answers will be, to a huge degree, digital.

Perspective: the world has been digital for only a few years now, and will likely remain so for many decades or centuries. Far more has not been done than has, and lots of stuff will have to be improvised until we (increasingly the young folk) figure out the best approaches. It won't be easy. None of the technical areas my wife and I are involved with personally (and I've been writing about)—privacy, identity, fintech, facial recognition, advertising, journalism—have easy answers to their problems, much less final ones.

But we like working on them, and sensing some progress, which doesn’t suck.



Bill Wendel's Real Estate Cafe

Pandemic exposed Hidden Infections – Real Estate Reckoning Overdue

“When I started in this business, there was a broad consensus around making the American dream accessible to middle- and lower-income people. After this year… The post Pandemic exposed Hidden Infections - Real Estate Reckoning Overdue first appeared on Real Estate Cafe.

“When I started in this business, there was a broad consensus around making the American dream accessible to middle- and lower-income people. After this year…

The post Pandemic exposed Hidden Infections - Real Estate Reckoning Overdue first appeared on Real Estate Cafe.


Doc Searls Weblog

Enough with the giant URLs

A few minutes ago I wanted to find something I’d written about privacy. So I started with a simple search on Google: The result was this: Which is a very very very very very very very very very very very very very way long way of saying this:  https://google.com/search?&q=doc+searls+… That’s 609 characters vs. 47, or […]

A few minutes ago I wanted to find something I’d written about privacy. So I started with a simple search on Google:

The result was this:

Which is a very very very very very very very very very very very very very long way of saying this:

 https://google.com/search?&q=doc+searls+…

That’s 609 characters vs. 47, or about 13 times longer. (Hence the word “very” repeated 13 times, above.)

Why are search URLs so long these days? They didn't use to be.

I assume that the 562 extra characters in that long URL tell Google more about me and what I'm doing than they used to want to know. In old long-URL search results, there was human-readable stuff about the computer and the browser being used. This mess surely contains the same, plus lots of personal data about me and what I'm doing online in addition to searching for this one thing. But I don't know. And that's surely part of the idea here.

This much, however, is easy for a human to read:

Giant URLs like this are cyphers, on purpose. You're not supposed to know what they actually say. Only Google should know. There is a lot about your searches that is Google's business and not yours. Google has lost interest (if it ever had any) in making search result URLs easy to copy and use somewhere else, such as in a post like this.

Bing is better in this regard. Here’s the same search result there:

That’s 101 characters, or less than 1/6th of Google’s.

The de-crufted URL is also shorter:

 https://bing.com/search?q=doc+searls+pri…

Just 44 characters.

So here is a suggestion for both companies: make search results available with one click in their basic forms. That will make sharing those URLs a lot easier to do, and create good will as well. And, Google, if a cruft-less URL is harder for you to track, so what? Maybe you shouldn’t be doing some of this tracking in the first place.
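In the meantime, trimming the cruft yourself is easy enough to script. Here's a quick sketch, assuming all you really want to keep is the q parameter (the long URL below is a stand-in, not a real Google result):

from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def decruft_search_url(url, keep=("q",)):
    # Keep only the query parameters worth sharing and drop the rest
    parts = urlsplit(url)
    params = {k: v for k, v in parse_qs(parts.query).items() if k in keep}
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params, doseq=True), ""))

print(decruft_search_url(
    "https://www.google.com/search?q=doc+searls+privacy&source=hp&ei=XYZ&oq=doc+searls+privacy"))
# https://www.google.com/search?q=doc+searls+privacy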

Sometimes it’s better to make things easy for people than harder. This is one of those times. Or billions of them.


Tuesday, 09. March 2021

Jon Udell

The New Freshman Comp

The column republished below, originally at http://www.oreillynet.com/pub/a/network/2005/04/22/primetime.html, was the vector that connected me to my dear friend Gardner Campbell. I’m resurrecting it here partly just to bring it back online, but mainly to celebrate the ways in which Gardner — a film scholar among many other things — is, right now, bringing his film expertise … Continue reading The

The column republished below, originally at http://www.oreillynet.com/pub/a/network/2005/04/22/primetime.html, was the vector that connected me to my dear friend Gardner Campbell. I’m resurrecting it here partly just to bring it back online, but mainly to celebrate the ways in which Gardner — a film scholar among many other things — is, right now, bringing his film expertise to the practice of online teaching.

In this post he reflects:

Most of the learning spaces I’ve been in provide very poorly, if at all, for the supposed magic of being co-located. A state-mandated prison-spec windowless classroom has less character than a well-lighted Zoom conference. A lectern with a touch-pad control for a projector-and-screen combo is much less flexible and, I’d argue, conveys much less human connection and warmth than I can when I share a screen on Zoom during a synchronous class, or see my students there, not in front of a white sheet of reflective material, but in the medium with me, lighting up the chat, sharing links, sharing the simple camaraderie of a hearty “good morning” as class begins.

And in this one he shares the course trailer (!) for Fiction into film: a study of adaptations of “Little Women”.

My 2005 column was a riff on a New York Times article, Is Cinema Studies the new MBA? It was perhaps a stretch, in 2005, to argue for cinema studies as an integral part of the new freshman comp. The argument makes a lot more sense now.

The New Freshman Comp

For many years I have alternately worn two professional hats: writer and programmer. Lately I find myself wearing a third hat: filmmaker. When I began making the films that I now call screencasts, my readers and I both sensed that this medium was different enough to justify the new name that we collaboratively gave it. Here’s how I define the difference. Film is a genre of storytelling that addresses the whole spectrum of human experience. Screencasting is a subgenre of film that can tell stories about the limited — but rapidly growing — slice of our lives that is mediated by software.

Telling stories about software in this audiovisual way is something I believe technical people will increasingly want to do. To explain why, let’s first discuss a more ancient storytelling mode: writing.

The typical reader of this column is probably, like me, a writer of both prose and code. Odds are you identify yourself as a coder more than as a writer. But you may also recently have begun blogging, in which case you’ve seen your writing muscles grow stronger with exercise.

Effective writing and effective coding are more closely related than you might think. Once upon a time I spent a year as a graduate student and teaching assistant at an Ivy League university. My program of study was science writing, but that was a tiny subspecialty within a larger MFA (master of fine arts) program dedicated to creative writing. That’s what they asked me to teach, and the notion terrified me. I had no idea what I’d say to a roomful of aspiring poets and novelists. As it turned out, though, many of these kids were in fact aspiring doctors, scientists, and engineers who needed humanities credits. So I decided to teach basic expository writing. The university’s view was that these kids had done enough of that in high school. Mine was that they hadn’t, not by a long shot.

I began by challenging their reverence for published work. Passages from books and newspapers became object lessons in editing, a task few of my students had ever been asked to perform in a serious way. They were surprised by the notion that you could improve material that had been professionally written and edited, then sold in bookstores or on newsstands. Who were they to mess with the work of the pros?

I, in turn, was surprised to find this reverent attitude even among the budding software engineers. They took it for granted that programs were imperfect texts, always subject to improvement. But they didn’t see prose in the same way. They didn’t equate refactoring a program with editing a piece of writing, as I did then and still do.

When I taught this class more than twenty years ago the term “refactoring” wasn’t commonly applied to software. Yet that’s precisely how I think about the iterative refinement of prose and of code. In both realms, we adjust vocabulary to achieve consistency of tone, and we transform structure to achieve economy of expression.

I encouraged my students to regard writing and editing as activities governed by engineering principles not unlike the ones that govern coding and refactoring. Yes, writing is a creative act. So is coding. But in both cases the creative impulse is expressed in orderly, calculated, even mechanical ways. This seemed to be a useful analogy. For technically-inclined students earning required humanities credits, it made the subject seem more relevant and at the same time more approachable.

In the pre-Internet era, none of us foresaw the explosive growth of the Internet as a textual medium. If you’d asked me then why a programmer ought to be able to write effectively, I’d have pointed mainly to specs and manuals. I didn’t see that software development was already becoming a global collaboration, that email and newsgroups were its lifeblood, and that the ability to articulate and persuade in the medium of text could be as crucial as the ability to design and build in the medium of code.

Nowadays, of course, software developers have embraced new tools of articulation and persuasion: blogs, wikis. I’m often amazed not only by the amount of writing that goes on in these forms, but also by its quality. Writing muscles do strengthen with exercise, and the game of collaborative software development gives them a great workout.

Not everyone drinks equally from this fountain of prose, though. Developers tend to write a great deal for other developers, but much less for those who use their software. Laziness is a factor; hubris even more so. We like to imagine that our software speaks for itself. And in some ways that’s true. Documentation is often only a crutch. If you have to explain how to use your software, you’ve failed.

It may, however, be obvious how to use a piece of software, and yet not at all obvious why to use it. I’ll give you two examples: Wikipedia and del.icio.us. Anyone who approaches either of these applications will immediately grasp their basic modes of use. That’s the easy part. The hard part is understanding what they’re about, and why they matter.

A social application works within an environment that it simultaneously helps to create. If you understand that environment, the application makes sense. Otherwise it can seem weird and pointless.

Paul Kedrosky, an investor, academic, and columnist, alluded to this problem on his blog last month:

Funny conversation I had with someone yesterday: We agreed that the thing that generally made us both persevere and keep trying any new service online, even if we didn’t get it the first umpteen times, was having Jon Udell post that said service was useful. After all, if Jon liked it then it had to be that we just hadn’t tried hard enough. [Infectious Greed]

I immodestly quote Paul’s remarks in order to revise and extend them. I agree that the rate-limiting factor for software adoption is increasingly not purchase, or installation, or training, but simply “getting it.” And while I may have a good track record for “getting it,” plenty of other people do too — the creators of new applications, obviously, as well as the early adopters. What’s unusual about me is the degree to which I am trained, inclined, and paid to communicate in ways that help others to “get it.”

We haven’t always seen the role of the writer and the role of the developer as deeply connected but, as the context for understanding software shifts from computers and networks to people and groups, I think we’ll find that they are. When an important application’s purpose is unclear on the first umpteen approaches, and when “getting it” requires hard work, you can’t fix the problem with a user-interface overhaul or a better manual. There needs to be an ongoing conversation about what the code does and, just as importantly, why. Professional communicators (like me) can help move things along, but everyone needs to participate, and everyone needs to be able to communicate effectively.

If you’re a developer struggling to evangelize an idea, I’d start by reiterating that your coding instincts can also help you become a better writer. Until recently, that’s where I’d have ended this essay too. But recent events have shown me that writing alone, powerful though it can be, won’t necessarily suffice.

I’ve written often — and, I like to think, cogently — about wikis and tagging. But my screencasts about Wikipedia and del.icio.us have had a profoundly greater impact than anything I’ve written on these topics. People “get it” when they watch these movies in ways that they otherwise don’t.

It’s undoubtedly true that an audiovisual narrative enters many 21st-century minds more easily, and makes a more lasting impression on those minds, than does a written narrative. But it’s also true that the interactive experience of software is fundamentally cinematic in nature. Because an application plays out as a sequence of frames on a timeline, a narrated screencast may be the best possible way to represent it and analyze it.

If you buy either or both of these explanations, what then? Would I really suggest that techies will become fluid storytellers not only in the medium of the written essay, but also in the medium of the narrated screencast? Actually, yes, I would, and I’m starting to find people who want to take on the challenge.

A few months ago I heard from Michael Tiller, who describes himself as a “mechanical engineer trapped in a computer scientist’s body.” Michael has had a long and passionate interest in Modelica, an open, object-oriented language for modeling mechanical, electrical, electronic, hydraulic, thermal, and control systems. He wanted to work with me to develop a screencast on this topic. But it’s far from my domains of expertise and, in the end, all he really needed was my encouragement. This week, Michael launched a website called Dynopsis.com that’s chartered to explore the intersection of engineering and information technologies. Featured prominently on the site is this 20-minute screencast in which he illustrates the use of Modelica in the context of the Dymola IDE.

This screencast was made with Windows Media Encoder 9, and without the help of any editing. After a couple of takes, Michael came up with a great overview of the Modelica language, the Dymola tool, and the modeling and simulation techniques that they embody. Since he is also author of a book on this subject, I asked Michael to reflect on these different narrative modes, and here’s how he responded on his blog:

If I were interested in teaching someone just the textual aspects of the Modelica language, this is exactly the approach I would take.

But when trying to teach or explain a medium that is visual, other tools can be much more effective. Screencasts are one technology that could really make an impact on the way some subjects are taught and I can see how these ideas could be extended much further. [Dynopsis: Learning by example: screencasts]

We’re just scratching the surface of this medium. Its educational power is immediately obvious, and over time its persuasive power will come into focus too. The New York Times recently asked: “Is cinema studies the new MBA?” I’ll go further and suggest that these methods ought to be part of the new freshman comp. Writing and editing will remain the foundation skills they always were, but we’ll increasingly combine them with speech and video. The tools and techniques are new to many of us. But the underlying principles — consistency of tone, clarity of structure, economy of expression, iterative refinement — will be familiar to programmers and writers alike.

Monday, 08. March 2021

Damien Bod

Securing Blazor Web assembly using cookies

The article shows how a Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Azure AD is used as the identity provider and the Microsoft.Identity.Web Nuget package is used to secure the trusted server rendered application. The API calls are protected using the secure cookie and anti-forgery tokens to […]

The article shows how a Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Azure AD is used as the identity provider and the Microsoft.Identity.Web Nuget package is used to secure the trusted server rendered application. The API calls are protected using the secure cookie and anti-forgery tokens to protect against CSRF. This architecture is also known as the Backends for Frontends (BFF) Pattern.

Code: Blazor Cookie security

Why Cookies

Using cookies gives us the possibility to increase the security of the whole application, UI + API. The Blazor web assembly client is treated as a UI in the server rendered application. Because cookies are used, no access tokens, refresh tokens or ID tokens are saved or managed in the browser. All security is implemented in the trusted backend: the application is authenticated by the identity provider there, and all access tokens are kept out of the browser and web storage. With the correct security definitions on the cookies, the security risks can be reduced and the client application can be authenticated. It would also be possible to use sender-constrained tokens or Mutual TLS for increased security, if this was required. Because we use cookies, anti-forgery tokens are required to secure the API requests.

The UI and the backend are one application, coupled together. This is different from the standard Blazor template, which uses access tokens and secures the WASM client and the API as two separate applications. Here only a single server rendered application is secured. The WASM client can only use APIs hosted on the same domain.

History

2021-03-09 Updated Anti-forgery policy, feedback Philippe De Ryck

Credits

Some of the code in this repo was built using original source code from Bernd Hirschmann.

Thank you for the git repository.

Creating the Blazor application

The Blazor application was created using a web assembly template hosted in an ASP.NET Core application. You need to check the 'ASP.NET Core hosted' checkbox in Visual Studio for this. No authentication was added. This creates three projects. We will add the security first, then the services to use the identity of the authenticated user in the WASM client, and then add the bits required for CSRF protection.

Securing the application using Azure AD

The application is secured using the Azure AD identity provider. This is implemented using the Microsoft.Identity.Web web application client, not the API client. This is just a wrapper for the OpenID Connect code flow authentication and, if the user authenticates successfully, the auth data is stored in a cookie. Two Azure App Registrations are used to implement this, one for the API and one for the web authentication. A client secret is required to access the API. A certificate could also be used instead. See the Microsoft.Identity.Web docs for more info.

The app settings contain the configuration for both the API and the web client. The ScopeForAccessToken value contains all the scopes required by the application, so that after the user authenticates, the user can give consent up front for all required APIs. The rest is standard.

"AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodhotmail.onmicrosoft.com",
    "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ClientId": "46d2f651-813a-4b5c-8a43-63abcb4f692c",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath": "/signout-callback-oidc"
    // "ClientSecret": "add secret to the user secrets"
},
"UserApiOne": {
    "ScopeForAccessToken": "api://b2a09168-54e2-4bc4-af92-a710a64ef1fa/access_as_user User.ReadBasic.All user.read",
    "ApiBaseAddress": "https://localhost:44395"
},

The following nuget packages were added to the server blazor host application.

Microsoft.AspNetCore.Components.WebAssembly.Server
Microsoft.AspNetCore.Authentication.JwtBearer
Microsoft.AspNetCore.Authentication.OpenIdConnect
Microsoft.Identity.Web
Microsoft.Identity.Web.UI
Microsoft.Identity.Web.MicrosoftGraphBeta
IdentityModel
IdentityModel.AspNetCore

The startup ConfigureServices method is used to add the Azure AD authentication clients. The AddMicrosoftIdentityWebAppAuthentication method is used to add the web client which uses the services added in the AddMicrosoftIdentityUI method. Graph API is added as a downstream API demo.

public void ConfigureServices(IServiceCollection services)
{
    // + ...
    services.AddHttpClient();
    services.AddOptions();

    string[] initialScopes = Configuration.GetValue<string>(
        "UserApiOne:ScopeForAccessToken")?.Split(' ');

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
        .AddMicrosoftGraph("https://graph.microsoft.com/beta", "User.ReadBasic.All user.read")
        .AddInMemoryTokenCaches();

    services.AddRazorPages().AddMvcOptions(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).AddMicrosoftIdentityUI();
}

The Configure method is used to add the middleware in the correct order. The Blazor application is setup like the template except for the fallback which maps to the razor page _Host instead of the index. This was added to support anti forgery tokens which I’ll explain later in this blog.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseWebAssemblyDebugging();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

Now the APIs can be protected using the Authorize attribute with the cookie scheme. The AuthorizeForScopes attribute, which comes from the Microsoft.Identity.Web Nuget package, can be used to validate the scope and handle MSAL consent exceptions.

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[AuthorizeForScopes(Scopes = new string[] { "api://b2a09168-54e2-4bc4-af92-a710a64ef1fa/access_as_user" })]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { "some data", "more data", "loads of data" };
    }
}

Using the claims, identity in the web assembly client application

The next part of the code was implemented using source code created by Bernd Hirschmann. Now that the server authentication is implemented and an identity exists for the user and the application, the claims from this identity and the state of the actual user need to be accessed and used in the client web assembly part of the application. APIs need to be created for this purpose. The account controller is used to initialize the sign-in flow, and an HTTP POST can be used to sign out.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BlazorAzureADWithApis.Server.Controllers
{
    // orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
    [Route("api/[controller]")]
    public class AccountController : ControllerBase
    {
        [HttpGet("Login")]
        public ActionResult Login(string returnUrl)
        {
            return Challenge(new AuthenticationProperties
            {
                RedirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/"
            });
        }

        // [ValidateAntiForgeryToken] // not needed explicitly due to the auto global definition.
        [Authorize]
        [HttpPost("Logout")]
        public IActionResult Logout()
        {
            return SignOut(
                new AuthenticationProperties { RedirectUri = "/" },
                CookieAuthenticationDefaults.AuthenticationScheme,
                OpenIdConnectDefaults.AuthenticationScheme);
        }
    }
}

The UserController is used by the WASM client to get information about the current identity and the claims of this identity.

using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using BlazorAzureADWithApis.Shared.Authorization;
using IdentityModel;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BlazorAzureADWithApis.Server.Controllers
{
    // orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
    [Route("api/[controller]")]
    [ApiController]
    public class UserController : ControllerBase
    {
        [HttpGet]
        [AllowAnonymous]
        public IActionResult GetCurrentUser()
        {
            return Ok(User.Identity.IsAuthenticated ? CreateUserInfo(User) : UserInfo.Anonymous);
        }

        private UserInfo CreateUserInfo(ClaimsPrincipal claimsPrincipal)
        {
            if (!claimsPrincipal.Identity.IsAuthenticated)
            {
                return UserInfo.Anonymous;
            }

            var userInfo = new UserInfo { IsAuthenticated = true };

            if (claimsPrincipal.Identity is ClaimsIdentity claimsIdentity)
            {
                userInfo.NameClaimType = claimsIdentity.NameClaimType;
                userInfo.RoleClaimType = claimsIdentity.RoleClaimType;
            }
            else
            {
                userInfo.NameClaimType = JwtClaimTypes.Name;
                userInfo.RoleClaimType = JwtClaimTypes.Role;
            }

            if (claimsPrincipal.Claims.Any())
            {
                var claims = new List<ClaimValue>();
                var nameClaims = claimsPrincipal.FindAll(userInfo.NameClaimType);
                foreach (var claim in nameClaims)
                {
                    claims.Add(new ClaimValue(userInfo.NameClaimType, claim.Value));
                }

                // Uncomment this code if you want to send additional claims to the client.
                //foreach (var claim in claimsPrincipal.Claims.Except(nameClaims))
                //{
                //    claims.Add(new ClaimValue(claim.Type, claim.Value));
                //}

                userInfo.Claims = claims;
            }

            return userInfo;
        }
    }
}

In the client project, the services are added in the program file. The HttpClients are added as well as the AuthenticationStateProvider which can be used in the client UI.

using BlazorAzureADWithApis.Client.Services;
using Microsoft.AspNetCore.Components.Authorization;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace BlazorAzureADWithApis.Client
{
    public class Program
    {
        public static async Task Main(string[] args)
        {
            var builder = WebAssemblyHostBuilder.CreateDefault(args);

            builder.Services.AddOptions();
            builder.Services.AddAuthorizationCore();
            builder.Services.TryAddSingleton<AuthenticationStateProvider, HostAuthenticationStateProvider>();
            builder.Services.TryAddSingleton(sp => (HostAuthenticationStateProvider)sp.GetRequiredService<AuthenticationStateProvider>());
            builder.Services.AddTransient<AuthorizedHandler>();

            builder.Services.AddHttpClient("default", client =>
            {
                client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress);
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            });

            builder.Services.AddHttpClient("authorizedClient", client =>
            {
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            }).AddHttpMessageHandler<AuthorizedHandler>();

            builder.Services.AddTransient(sp => sp.GetRequiredService<IHttpClientFactory>().CreateClient("default"));

            await builder.Build().RunAsync();
        }
    }
}

The HostAuthenticationStateProvider implements the AuthenticationStateProvider and is used to call the user controller APIs and return the state to the UI.

using BlazorAzureADWithApis.Shared.Authorization;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Authorization;
using Microsoft.Extensions.Logging;
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Security.Claims;
using System.Threading.Tasks;

namespace BlazorAzureADWithApis.Client.Services
{
    // orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
    public class HostAuthenticationStateProvider : AuthenticationStateProvider
    {
        private static readonly TimeSpan _userCacheRefreshInterval = TimeSpan.FromSeconds(60);

        private const string LogInPath = "api/Account/Login";
        private const string LogOutPath = "api/Account/Logout";

        private readonly NavigationManager _navigation;
        private readonly HttpClient _client;
        private readonly ILogger<HostAuthenticationStateProvider> _logger;

        private DateTimeOffset _userLastCheck = DateTimeOffset.FromUnixTimeSeconds(0);
        private ClaimsPrincipal _cachedUser = new ClaimsPrincipal(new ClaimsIdentity());

        public HostAuthenticationStateProvider(NavigationManager navigation, HttpClient client, ILogger<HostAuthenticationStateProvider> logger)
        {
            _navigation = navigation;
            _client = client;
            _logger = logger;
        }

        public override async Task<AuthenticationState> GetAuthenticationStateAsync()
        {
            return new AuthenticationState(await GetUser(useCache: true));
        }

        public void SignIn(string customReturnUrl = null)
        {
            var returnUrl = customReturnUrl != null ? _navigation.ToAbsoluteUri(customReturnUrl).ToString() : null;
            var encodedReturnUrl = Uri.EscapeDataString(returnUrl ?? _navigation.Uri);
            var logInUrl = _navigation.ToAbsoluteUri($"{LogInPath}?returnUrl={encodedReturnUrl}");
            _navigation.NavigateTo(logInUrl.ToString(), true);
        }

        private async ValueTask<ClaimsPrincipal> GetUser(bool useCache = false)
        {
            var now = DateTimeOffset.Now;
            if (useCache && now < _userLastCheck + _userCacheRefreshInterval)
            {
                _logger.LogDebug("Taking user from cache");
                return _cachedUser;
            }

            _logger.LogDebug("Fetching user");
            _cachedUser = await FetchUser();
            _userLastCheck = now;

            return _cachedUser;
        }

        private async Task<ClaimsPrincipal> FetchUser()
        {
            UserInfo user = null;

            try
            {
                _logger.LogInformation(_client.BaseAddress.ToString());
                user = await _client.GetFromJsonAsync<UserInfo>("api/User");
            }
            catch (Exception exc)
            {
                _logger.LogWarning(exc, "Fetching user failed.");
            }

            if (user == null || !user.IsAuthenticated)
            {
                return new ClaimsPrincipal(new ClaimsIdentity());
            }

            var identity = new ClaimsIdentity(
                nameof(HostAuthenticationStateProvider),
                user.NameClaimType,
                user.RoleClaimType);

            if (user.Claims != null)
            {
                foreach (var claim in user.Claims)
                {
                    identity.AddClaim(new Claim(claim.Type, claim.Value));
                }
            }

            return new ClaimsPrincipal(identity);
        }
    }
}

The AuthorizedHandler implements the DelegatingHandler which can be used to add headers or handle HTTP request logic when the user is authenticated.

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

namespace BlazorAzureADWithApis.Client.Services
{
    // orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
    public class AuthorizedHandler : DelegatingHandler
    {
        private readonly HostAuthenticationStateProvider _authenticationStateProvider;

        public AuthorizedHandler(HostAuthenticationStateProvider authenticationStateProvider)
        {
            _authenticationStateProvider = authenticationStateProvider;
        }

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var authState = await _authenticationStateProvider.GetAuthenticationStateAsync();
            HttpResponseMessage responseMessage;
            if (!authState.User.Identity.IsAuthenticated)
            {
                // if user is not authenticated, immediately set response status to 401 Unauthorized
                responseMessage = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            }
            else
            {
                responseMessage = await base.SendAsync(request, cancellationToken);
            }

            if (responseMessage.StatusCode == HttpStatusCode.Unauthorized)
            {
                // if server returned 401 Unauthorized, redirect to login page
                _authenticationStateProvider.SignIn();
            }

            return responseMessage;
        }
    }
}

Now the AuthorizeView and the Authorized components can be used to hide or display the UI elements depending on the authentication state of the identity.

@inherits LayoutComponentBase

<div class="page">
    <div class="sidebar">
        <NavMenu />
    </div>

    <div class="main">
        <div class="top-row px-4 auth">
            <AuthorizeView>
                <Authorized>
                    <strong>Hello, @context.User.Identity.Name!</strong>
                    <form method="post" action="api/Account/Logout">
                        <AntiForgeryTokenInput/>
                        <button class="btn btn-link" type="submit">Sign out</button>
                    </form>
                </Authorized>
                <NotAuthorized>
                    <a href="Account/Login">Log in</a>
                </NotAuthorized>
            </AuthorizeView>
        </div>

        <div class="content px-4">
            @Body
        </div>
    </div>
</div>

For more information on this, see the Microsoft docs or this blog.

Cross-site request forgery CSRF protection

Cross-site request forgery (also known as XSRF or CSRF) is a possible security problem when using cookies. We can protect against this using anti-forgery tokens and will add this to the Blazor application. To support this, we can use a Razor Page _Host.cshtml file instead of a static html file. This host page is added to the server project and uses the default div with the id app, just like the index.html file from the dotnet template. The index.html can be deleted from the client project. The render-mode defaults to WebAssembly. If you copied a _Host file from a server Blazor template, you would have to change this or remove it.

The anti-forgery token is added at the bottom of the file in the body. An antiForgeryToken.js script is also added to the Razor Page _Host file. Also make sure the head content matches that of the index.html you deleted.

@page "/" @namespace BlazorAzureADWithApis.Pages @using BlazorAzureADWithApis.Client @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @{ Layout = null; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <title>Blazor AAD Cookie</title> <base href="~/" /> <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" /> <link href="css/app.css" rel="stylesheet" /> <link href="BlazorAzureADWithApis.Client.styles.css" rel="stylesheet" /> <link href="manifest.json" rel="manifest" /> <link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" /> </head> <body> <div id="app"> <!-- Spinner --> <div class="spinner d-flex align-items-center justify-content-center" style="position:absolute; width: 100%; height: 100%; background: #d3d3d39c; left: 0; top: 0; border-radius: 10px;"> <div class="spinner-border text-success" role="status"> <span class="sr-only">Loading...</span> </div> </div> </div> <div id="blazor-error-ui"> <environment include="Staging,Production"> An error has occurred. This application may no longer respond until reloaded. </environment> <environment include="Development"> An unhandled exception has occurred. See browser dev tools for details. </environment> <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> <script src="_framework/blazor.webassembly.js"></script> <script src="antiForgeryToken.js"></script> @Html.AntiForgeryToken() </body> </html>

The MapFallbackToPage needs to be updated to use the _Host file instead of the static html.

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapControllers();
    endpoints.MapFallbackToPage("/_Host");
});

AddAntiforgery adds the service for CSRF protection, configured to use a header named X-XSRF-TOKEN. The AutoValidateAntiforgeryTokenAttribute is added so that all POST, PUT and DELETE HTTP requests require an anti-forgery token.

public void ConfigureServices(IServiceCollection services)
{
    // + ...

    services.AddAntiforgery(options =>
    {
        options.HeaderName = "X-XSRF-TOKEN";
        options.Cookie.Name = "__Host-X-XSRF-TOKEN";
        options.Cookie.SameSite = Microsoft.AspNetCore.Http.SameSiteMode.Strict;
        options.Cookie.SecurePolicy = Microsoft.AspNetCore.Http.CookieSecurePolicy.Always;
    });

    services.AddControllersWithViews(options =>
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));
}

The antiForgeryToken.js JavaScript file reads the hidden input created by the _Host Razor Page and returns its value from a function.

function getAntiForgeryToken() {
    var elements = document.getElementsByName('__RequestVerificationToken');
    if (elements.length > 0) {
        return elements[0].value
    }

    console.warn('no anti forgery token found!');
    return null;
}

The JavaScript function can now be used in any Blazor component through the IJSRuntime. The anti-forgery token can be added to the X-XSRF-TOKEN HTTP request header, which is configured in the server Startup class.

@page "/directapi" @inject HttpClient Http @inject IJSRuntime JSRuntime <h1>Data from Direct API</h1> @if (apiData == null) { <p><em>Loading...</em></p> } else { <table class="table"> <thead> <tr> <th>Data</th> </tr> </thead> <tbody> @foreach (var data in apiData) { <tr> <td>@data</td> </tr> } </tbody> </table> } @code { private string[] apiData; protected override async Task OnInitializedAsync() { var token = await JSRuntime.InvokeAsync<string>("getAntiForgeryToken"); Http.DefaultRequestHeaders.Add("X-XSRF-TOKEN", token); apiData = await Http.GetFromJsonAsync<string[]>("api/DirectApi"); } }

If you are using forms directly in the Blazor templates, then a custom component which creates a hidden input can be used to add the anti-forgery token to the HTTP POST, PUT and DELETE requests. Below is a new component called AntiForgeryTokenInput.

@inject IJSRuntime JSRuntime

<input type="hidden" id="__RequestVerificationToken" name="__RequestVerificationToken" value="@GetToken()">

@code {
    private string token = "";

    protected override async Task OnInitializedAsync()
    {
        token = await JSRuntime.InvokeAsync<string>("getAntiForgeryToken");
    }

    public string GetToken()
    {
        return token;
    }
}

The AntiForgeryTokenInput can be used directly in the HTML code.

<form method="post" action="api/Account/Logout"> <AntiForgeryTokenInput/> <button class="btn btn-link" type="submit">Sign out</button> </form>

In the server application, the ValidateAntiForgeryToken attribute can be used to enforce anti-forgery token protection explicitly.

[ValidateAntiForgeryToken]
[Authorize]
[HttpPost("Logout")]
public IActionResult Logout()
{
    return SignOut(
        new AuthenticationProperties { RedirectUri = "/" },
        CookieAuthenticationDefaults.AuthenticationScheme,
        OpenIdConnectDefaults.AuthenticationScheme);
}

Using cookies with Blazor WASM and ASP.NET Core hosted applications makes it possible to support the high-security flow requirements demanded by certain application deployments. Extra layers of security can be added just by having a trusted application implement the security parts. The Blazor client application can only use the API deployed on the host in the same domain. Any OpenID Connect provider can be supported in this way, just like in a Razor Page application. This makes it easier to support logout requirements, for example by using an OIDC back-channel logout. MTLS and sender-constrained tokens can also be supported with this setup. SignalR no longer needs to add the access tokens to the URL of the web sockets, as cookies can be used on the same domain.

Would love feedback on further ways of improving this.

Links:

https://github.com/berhir/BlazorWebAssemblyCookieAuth

Secure a Blazor WebAssembly application with cookie authentication

https://docs.microsoft.com/en-us/aspnet/core/blazor/components/prerendering-and-integration?view=aspnetcore-5.0&pivots=webassembly#configuration

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

Securing Blazor Server App using IdentityServer4

https://github.com/saber-wang/BlazorAppFormTset

https://jonhilton.net/blazor-wasm-prerendering-missing-http-client/

https://andrewlock.net/enabling-prerendering-for-blazor-webassembly-apps/

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/additional-scenarios

Sunday, 07. March 2021

Simon Willison

Weeknotes: Datasette and Git scraping at NICAR, VaccinateCA

This week I virtually attended the NICAR data journalism conference and made a ton of progress on the Django backend for VaccinateCA (see last week). NICAR 2021 NICAR stands for the National Institute for Computer Assisted Reporting - an acronym that reflects the age of the organization, which started teaching journalists data-driven reporting back in 1989, long before the term "data journalis

This week I virtually attended the NICAR data journalism conference and made a ton of progress on the Django backend for VaccinateCA (see last week).

NICAR 2021

NICAR stands for the National Institute for Computer Assisted Reporting - an acronym that reflects the age of the organization, which started teaching journalists data-driven reporting back in 1989, long before the term "data journalism" became commonplace.

This was my third NICAR and it's now firmly established itself at the top of the list of my favourite conferences. Every year it attracts over 1,000 of the highest quality data nerds - from data journalism veterans who've been breaking stories for decades to journalists who are just getting started with data and want to start learning Python or polish up their skills with Excel.

I presented an hour long workshop on Datasette, which I'm planning to turn into the first official Datasette tutorial. I also got to pre-record a five minute lightning talk about Git scraping.

I published the video and notes for that yesterday. It really seemed to strike a nerve at the conference: I showed how you can set up a scheduled scraper using GitHub Actions with just a few lines of YAML configuration, and do so entirely through the GitHub web interface without even opening a text editor.

Pretty much every data journalist wants to run scrapers, and understands the friction involved in maintaining your own dedicated server and crontabs and storage and backups for running them. Being able to do this for free on GitHub's infrastructure drops that friction down to almost nothing.

The lightning talk led to a last-minute GitHub Actions and Git scraping office hours session being added to the schedule, and I was delighted to have Ryan Murphy from the LA Times join that session to demonstrate the incredible things the LA Times have been doing with scrapers and GitHub Actions. You can see some of their scrapers in the datadesk/california-coronavirus-scrapers repo.

VaccinateCA

The race continues to build out a Django backend for the VaccinateCA project, to collect data on vaccine availability from people making calls on that organization's behalf.

The new backend is getting perilously close to launch. I'm leaning heavily on the Django admin for this, refreshing my knowledge of how to customize it with things like admin actions and custom filters.

It's been quite a while since I've done anything sophisticated with the Django admin and it has evolved a LOT. In the past I've advised people to drop the admin for custom view functions the moment they want to do anything out-of-the-ordinary - I don't think that advice holds any more. It's got really good over the years!

A very smart thing the team at VaccinateCA did a month ago is to start logging the full incoming POST bodies for every API request handled by their existing Netlify functions (which then write to Airtable).

This has given me an invaluable tool for testing out the new replacement API: I wrote a script which replays those API logs against my new implementation - allowing me to test that every one of several thousand previously recorded API requests will run without errors against my new code.
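A rough sketch of what a replay script like that can look like is below; the log file name, payload shape and endpoint URL are placeholders rather than the actual VaccinateCA ones.

# Sketch of an API-log replay script. It reads previously logged POST bodies
# (assumed here to be one JSON object per line) and submits each one to the
# new backend, reporting any non-2xx responses. Requires the requests library.
import json

import requests

LOG_FILE = "logged_requests.jsonl"  # placeholder export of the old API logs
NEW_API_URL = "http://localhost:8000/api/submitReport"  # placeholder endpoint

failures = 0
with open(LOG_FILE) as f:
    for line_number, line in enumerate(f, start=1):
        body = json.loads(line)
        response = requests.post(NEW_API_URL, json=body)
        if response.status_code >= 400:
            failures += 1
            print(f"line {line_number}: HTTP {response.status_code}")

print(f"Replayed requests with {failures} failures")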

Since this is so valuable, I've written code that will log API requests to the new stack directly to the database. Normally I'd shy away from a database table for logging data like this, but the expected traffic is the low thousands of API requests a day - and a few thousand extra database rows per day is a tiny price to pay for having such a high level of visibility into how the API is being used.

(I'm also logging the API requests to PostgreSQL using Django's JSONField, which means I can analyze them in depth later on using PostgreSQL's JSON functionality!)
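A minimal version of that logging table might look something like the model below; the model and field names are illustrative guesses rather than the real schema, and it assumes Django 3.1 or later, where JSONField lives in django.db.models.

# Illustrative Django model for logging raw API requests (hypothetical names).
from django.db import models


class ApiRequestLog(models.Model):
    created_at = models.DateTimeField(auto_now_add=True)
    path = models.CharField(max_length=255)
    method = models.CharField(max_length=10)
    # Stored as jsonb on PostgreSQL, so it can be queried later with
    # PostgreSQL's JSON operators.
    body = models.JSONField()
    response_status = models.IntegerField(null=True, blank=True)

    class Meta:
        ordering = ["-created_at"]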

YouTube subtitles

I decided to add proper subtitles to my lightning talk video, and was delighted to learn that the YouTube subtitle editor pre-populates with an automatically generated transcript, which you can then edit in place to fix up spelling, grammar and remove the various "um" and "so" filler words.

This makes creating high quality captions extremely productive. I've also added them to the 17 minute Introduction to Datasette and sqlite-utils video that's embedded on the datasette.io homepage - editing the transcript for that only took about half an hour.

TIL this week Writing tests for the Django admin with pytest-django Show the timezone for datetimes in the Django admin How to run MediaWiki with SQLite on a macOS laptop

Jon Udell

The 3D splendor of the Sonoma County landscape

We’ve been here 6 years, and the magical Sonoma County landscape just keeps growing on me. I’ve written about the spectacular coastline, a national treasure just twenty miles to our west that’s pristine thanks to the efforts of a handful of environmental activists, notably Bill Kortum. Even closer, just ten miles to our east, lies … Continue reading The 3D splendor of the Sonoma County landscape

We’ve been here 6 years, and the magical Sonoma County landscape just keeps growing on me. I’ve written about the spectacular coastline, a national treasure just twenty miles to our west that’s pristine thanks to the efforts of a handful of environmental activists, notably Bill Kortum. Even closer, just ten miles to our east, lies the also spectacular Mayacamas range where today I again hiked Bald Mountain in Sugarloaf Ridge State Park.

If you ever visit this region and fancy a hike with great views, this is the one. (Ping me and I’ll go with you at the drop of a hat.) It’s not too challenging. The nearby Goodspeed Trail, leading up to Gunsight Notch on Hood Mountain, is more demanding, and in the end, less rewarding. Don’t get me wrong, the view from Gunsight — looking west over the Sonoma Valley and the Santa Rosa plain — is delightful. But the view from the top of Bald Mountain is something else again. On a clear day (which is most of them) you can spin a slow 360 and take in the Napa Valley towns of St. Helena and Calistoga to the west, Cobb and St. Helena mountains to the north, the Sonoma Valley and Santa Rosa plain to the east, then turn south to see Petaluma, Mt. Tamalpais, the top of the Golden Gate bridge, San Francisco, the San Pablo Bay, the tops of the tallest buildings in Oakland, and Mt. Diablo 51 miles away. Finally, to complete the loop, turn east again to look at the Sierra Nevada range 130 miles away.

The rugged topography you can see in that video occurs fractally everywhere around here. It’s taken a while to sink in, but I think I can finally explain what’s so fascinating about this landscape. It’s the sightlines. From almost anywhere, you’re looking at a 3D puzzle. I’m a relative newcomer, but I hike with friends who’ve lived their whole lives here, and from any given place, they are as likely as I am to struggle to identify some remote landmark. Everything looks different from everywhere. You’re always seeing multiple overlapping planes receding into the distance, like dioramas. And they change dramatically as you move around even slightly. Even just ten paces in any direction, or a slight change in elevation, can alter the sightlines completely and reveal or hide a distant landmark.

We’ve lived in flat places, we’ve lived in hilly places, but never until now in such a profoundly three-dimensional landscape. It is a blessing I will never take for granted.


Bill Wendel's Real Estate Cafe

CASA Share: Dreaming of the Saints Next Door!

Have you seen the Pope’s new book, Let Us Dream: The Path to a Better Future? It encourages people to dream bold new futures coming… The post CASA Share: Dreaming of the Saints Next Door! first appeared on Real Estate Cafe.

Have you seen the Pope’s new book, Let Us Dream: The Path to a Better Future? It encourages people to dream bold new futures coming…

The post CASA Share: Dreaming of the Saints Next Door! first appeared on Real Estate Cafe.

Saturday, 06. March 2021

Wip Abramson

Thoughts and Ideas on the Memory of Things

An idea has been maturing in my thoughts for a while now. Or rather I have been thinking about a series of ideas, programming projects I…

An idea has been maturing in my thoughts for a while now. Or rather I have been thinking about a series of ideas, programming projects I would love to actualise, which I recently realised share a common thread. Memory and how we access it. Probably not surprising for software developers, pretty much any application we can imagine requires some form of data store. Although rarely, in my experience, do we talk about this data in terms of memory.

My more recent work researching identity, privacy and trust in digital interactions has evolved and broadened my perspective on the importance of memory. Our identity, whatever we conceive that to be, must be understood to exist in close relation to memory. Or the mathematician in me wants to say as a function of memory I ~ F(m) 1. And as Herbert Simon details in the Science of the Artificial, memory is a key property of any intelligent system 2. The ability to take information from the past and apply it when navigating the present moment is a powerful skill in the Homo Sapien toolbox, both as individuals and a species.

These patterns of thought on memory percolating in my mind have further been influenced through my engagement with blockchain/distributed ledger both as a concept and through the practical application of it in application development. I especially find the ideas coming out of the Ethereum community transformative to the possibilities for digital application design. Persistent Compute Objects (PICOs) 3 are a more recent idea and open source project I have been following that appears to support novel interaction patterns.

It has been an interesting journey to this point; what follows is a sketch of the evolution of my thoughts on memory, told through the lens of three ideas and the questions these ideas sparked in me. Then I intend to reflect on where my thoughts are now, because it is only now, reflecting on these ideas, that I see the common thread - human memory.

The ideas that influenced my thinking and help illustrate my thoughts are; Viewing Time, The Community Mind and Nifty Books. Each originates at a different point in my life but all were ideas for programming side projects I could use to cut my teeth on a new technology, language or library. Each idea has a special place in my own memory, and it is from these memories that the thoughts I am sharing emerged.

Viewing Time

My first idea, back before I knew how to program. The inspirational carrot I used to motivate myself to learn. Not that I ever wrote more than a few lines of code on this. It originates from a time in Maastricht visiting a friend, we were sitting looking at a beautiful view and I thought:

What if we could create a timelapse of a view from a specific location? A View of Time.

I think I tried to get my friend to go back there every day to take a photo to do just this. Not the ideal solution, and my thinking has evolved since then. Here are some of the questions I thought about:

What if we could crowd source the collection of the photos for a view, enabling anyone with a camera to contribute?

How would views of time be discovered, found, contributed to and viewed?

How would you prevent “bad” views from being added? Inappropriate, non-valued, etc.?

Who decides what a “bad” view is? Authorisation problem

Where would views be stored?

Who stores them? Who has control over them? Who manages that control? Who would host and pay for this application and why?

How might Viewing Time change our relationship with our environment and help us reflect on the change happening all around us?

Chasing Ice, a documentary I watched on another visit to the Netherlands, emphasised how powerful this could be. Here is an example. Watching a Chinese cityscape evolve over these last 20 years would have provided another staggering view of time and change. Here are some examples of this using satellite imagery. How might Viewing Time help communities record and interact with shared memories? I was recently involved with planting a Community Orchard and became aware of how valuable creating a view as a shared artifact could be. It might help communities celebrate and appreciate the positive changes they bring about. How might date, time or season be used to present different time-lapses of the same view?

How would we prevent overtourism?

I never wanted to turn views into used and abused tourist spaces, an acknowledged tension. I explicitly want to avoid Insta-tourism type effects. How? This is a funny ad from the Kiwis that has recently kept me mindful of this. What are the incentives? How do we ensure this is an artifact all can enjoy while minimising the unintended consequences associated with the change in context-relative informational norms? How might this be used to create incentives for positive, respectful tourism? I realised in part this is about how views are discovered. This got me closer to the importance of location.

What if you could only discover a view if you were in its location?

How might this help to prevent bad content? How would you even prove you were in a certain location? Who/what would you prove it to?

What is the context and associated informational norms with a View?

Who defines and evolves these norms? What are the rules? How might location be used as a strong authenticator?4

These are all pointers, sketching the outlines of thought that I have been evolving over the 5 or so years since the idea first appeared in my mind.

It is an idea that I never made any meaningful, tangible progress on. Except for a few positive conversations with friends it has remained wishful thinking. It is, in my humble opinion, a beauty of an idea, something that I would love to see happen. I am optimistic; the technology and mental model for application design is shifting in ways that open up an entirely new design space for these kinds of ideas. Something other than the for-profit venture capitalist endeavours that have led to the colonisation and privatisation of much of our virtual spaces. A for-profit venture could never be the best realisation of this idea.

The Community Mind

This is my baby. My first side project. It is how I learnt React and felt the power of GraphQL. The first website and server I ever deployed - a challenging experience but one I learnt from. It was also my first encounter with authentication and account management; what a nightmare that was.

After completing a year in industry this was the next programming project I worked on. I committed a lot of my time to this work. Including spending a month straight on it while dipping my toes into the digital nomad lifestyle in Chiang Mai, where I was exposed to and became fascinated with blockchain, Bitcoin and all that crazy stuff. An influential period of my life really.

Anyway, the idea revolves around creating a place for us to organise our questions. A space for questions to be shared and thought about, but not a place to collect answers. In my mind it was explicitly not for answers. Rather it would provide triggers to thoughts within individuals as they pondered these questions from their own unique perspective shaped by their lived experience. I am a strong believer that we all have the ability to imagine creative ideas and possibilities; the hard part of course is actioning those ideas. As in many ways this text demonstrates.

The initial inspiration for this idea came from the book A More Beautiful Question. I wanted to try to develop something that would make it easier to ask and discover beautiful questions. A place where these questions could be crowd sourced, recorded and connected. I wanted to provide an interface for individuals to explore a web of interconnected questions being thought by others. I wanted people to be able to contribute their own questions and thought pathways to this network. Each individual interacting with and contributing to the community mind.

The questions that surfaced when thinking through the design requirements of such a project were something along these lines:

How will I manage questioners?

I was still thinking in a user account paradigm in those days. Who gets to ask questions? Who gets to see the questions asked?

How will questioners search and discover the questions they are interested in?

What if questions could be linked to other questions? Who can link questions to other questions? Which links do people see and how do they decide? What if these links grew in strength the more they were traversed and endorsed like neural pathways in our own minds?

How will we prevent duplicate questions?

Am I trying to create a single global repository of questions?

Or would it be better to allow each individual to manage their web of questions independently?

How might you enable the best of both?

How will we prevent bad data? Questions that don’t align with the ethos of the mind?

Whose mind? Who decides? How might questions be optimally curated? What is optimal and who is curating? Where would this information be stored?

What is the business model for such an application?

What are the incentives?

When initially developing this project I had in mind a database for storing questions and their links, which people would interact with: searching and filtering to discover the questions they were interested in, and contributing their own questions and connections to this storage. All managed by some centralised application, providing a single view and interface for people to interact with. Today, I have a model of individuals being able to maintain their own minds, curating the questions and connections that they find useful, then providing a mechanism to network and aggregate the minds of others into a larger web of questions for all to explore. Imagine a mind like a GitHub repository: the entity that creates it would be able to manage the rules governing how questions and links are contributed. I even considered private minds as a potential business model, although my desire was and remains today to develop an open source tool for recording, curating and discovering beautiful questions. I see them as a loose scaffold around thought, hinting at the problem space without prescribing the solution. A common entry point to creativity that any individual from any background at any moment in time would be able to interact with, using their own unique perspective to draw new insights and inspire different solutions.

I have a lot of fond memories developing this idea, including a weekend in Porto visiting a friend where I discovered the joy and value of committing thoughts to paper. Creating a physical artifact to interact with. Something that in my view can never fully be replicated in a digital medium, but what if you could have both?

This book takes me back

Towards the end of my active development of this idea I attempted to integrate a token, Simple Token, as part of a challenge they held. My idea at the time was to use this as some form of incentive mechanism for the application, although the actual execution was a bit clumsy looking back. You can view my submission here. Then there is this old Github issue from a month long hackathon called Blockternship.

While development is dormant, I am still very committed to making this a thing. One day!

Nifty Books

This is another lovely idea, in my book at least, originating from my desire to learn how to write smart contracts using Solidity, the Ethereum programming language. The idea stemmed from thinking through how we might digitally represent the books we own, creating a distributed library and opening access to a wealth of books. Moving them off our shelves and into people's hands, helping the wisdom held within them diffuse into more people's minds and become recorded in their memories. Books can provide an intoxicating fountain of knowledge or a refreshing escape from reality. I appreciate both aspects equally and would love more people to experience their joy.

For a bit of history of this idea you can see my proposal for the ETH Berlin hackathon around this. I proposed creating an application that allowed anyone to mint an ERC721 Non-Fungible Token to represent their physical book. Unfortunately, I ended up forming a different team and haven’t made much progress on realising this idea. I remain a sketchy solidity developer at best. That said, progress has been made. The concept is more mature in my mind, and the ethereum development landscape has come a long way since 2018. As my recent experience at the virtual ETH Denver highlighted, while I didn’t manage to submit anything or even write much code I did get a sense for how far things have come. The scaffold-eth repo seems like a great place to start, if I ever do manage to carve out time to create this.

I am convinced, and regularly reminded, that this idea could unlock so much hidden value. Books deserve to be read more than once; indeed there is something beautiful about a book having been read by many different people. Throughout the course of my studies in Edinburgh I have developed a fairly extensive personal collection of some truly fascinating books. I would love to have a means to share them with others in the area also interested in this material. And yes, I am sure there exist ways for me to do this if I really tried, but I believe giving a book a memory has more implications than simply making it easily shareable.

Here are a few questions that this idea has raised over the years as I wondered about how it might be developed:

How might we digitally represent a physical book?

How would you link the physical book with its digital representation? Would a QR code work here?

How might we represent ownership?

What affordances should owners of books have? What affordances should borrowers have?

How would books within the virtual library be discovered, requested and returned?

How might you pay for postage of book between participants?

What if the model was to create primarily a local virtual library, but with exchanges between localities when requested?

Libraries have existed like this for ages.

How might the digital representation of a book be used to embed it with a memory?

What information would the book want to store in its memory? How might we represent the list of borrowers without compromising their privacy?

What if the book had an interface to the Community Mind allowing readers to ask and share questions that the material provoked in them?

How might being in possession of a book, either as a lender or borrower, provide a mechanism for access control into other applications, e.g. the Community Mind? What can we learn from the way individuals interact with eBooks today? How might this approach help us appreciate this medium more deeply?

How might we deter bad actors abusing the virtual library?

What is the incentive model? Who decides? What is to stop malicious actors creating virtual books unattached to physical copies? What prevents people from stealing books they have borrowed? Who stores the information around virtual books?

How are the search and discovery capabilities for these books mediated?

Who is mediating this? What can we learn from the way books are shared and exchanged by travellers?

What if the things we bought came with a configurable and extendable digital memory?

How might this both simplify and expand the field of interaction enabled and perceived by those in proximity to the device?

As usual, a whole load of questions. Always there are questions. I present them here to provoke your own thought and inquiry around these ideas.

The Common Thread

Now, these ideas are not directly or intentionally linked. They are connected through me, and through my desire to produce ideas for software applications I would be motivated to create. Most developers you meet will have a few of these kicking around if you ask them. Over time all ideas evolve, dots connect and new insight emerges. The three ideas I presented trace the evolution of me, as much as anything else, from a computer science undergraduate to hopefully a final year PhD student.

It is only recently, from the new perspectives my research has provided me, that I can reflect on all these ideas and clearly see a common thread in my thinking. In keeping with the article, I will summarise it in a series of questions:

How might we use technology to experiment with the ways in which we can attach memory to the artifacts we place meaning in?

How might we design this memory to be open and extendable supporting permission-less innovation at the edges?

How might artifacts use this memory to intelligently interact with their environment and the people interacting with them?

How might such artifacts be designed to respect the privacy of those it interacts with while ensuring they are held accountable within a defined context?

What if we started to centralise information on artifacts, using our physical interaction with these artifacts to provide an alternative to the search engine for the discovery of information?

How might centralising information on the artifact, be that a location, object or thought, transform the way we design digital systems and our ability to actively maintain collective memory at all scales of society? How might context-relative informational norms for managing this memory be defined, communicated, upheld and evolved? How might such artifacts change the way we identify and authenticate digitally?

How might this change the nature of the virtual spaces we interact in?

How might this provide natural limits to the participants within these spaces? How might this create virtual neighbours and encourage trustworthy behaviour and positive relationships?

What are the descriptive and normative properties for structuring memory?

What are the properties of collective memory that we experience in the majority of virtual environments we exist in today and how do they differ from the ways we have managed memory in the physical reality across time and context?

It is an exploration of the structure of memory, how we might augment this structure with intention using digital technology, and the implications of this on meaning making within both individuals and groups. To date this exploration has predominantly involved a few powerful entities constructing, owning and manipulating the digital structures of memory to meet their own agendas 5.

It is also interesting to me that while working on these ideas I created artifacts, both physical and digital; they hold meaning to me but are no longer only accessible to me. By committing thoughts to paper, code or words, they become remembered, at least partially, in a different kind of memory. Indeed this text itself is one of these artifacts.

I could go on, but it’s long already and I wrote this for fun more than anything. A break from the rigidity of academic writing that can be suffocating at times. A more detailed, formal analysis of these ideas is for another time.

Thanks for making it this far. These thoughts are pretty fresh, I would love to know what they triggered in you.

This is something my thoughts wander across on occasion. I am always drawn to the ideas in quantum field theory where particles exist in a field of potential; it is only upon measurement that the field collapses. It feels like there is something interesting in having a similar mental model for identity.

The Sciences of the Artificial, Third Edition. Herbert A Simon. 1969. This is a pretty dense book, but worth a read. Chapters on the Psychology of Thinking and Learning and Remembering are relevant to this post.

Persistent Compute Objects, a fascinating but little-known project originating from Phil Windley some time ago, I believe. It is currently being actively developed at Brigham Young University. Some good links: https://picolabs.atlassian.net/wiki/spaces/docs/pages/1189992/Persistent+Compute+Objects, https://www.windley.com/tags/picos.shtml

One idea that stuck with me from this is location-based authentication. This is one of the reasons I was so interested in the FOAM project, although it seems a long way from reaching its potential at the moment. Alternative, censorship-resistant location services feel like something that would unlock a lot of value.

I am reminded of a recent podcast episode I listened to involving Kate Raworth, the creator of Doughnut Economics, and the Centre for Humane Technology, responsible for The Social Dilemma. Here is a noteworthy and relevant clip (sorry, it's on Facebook), although I highly recommend listening to the entire episode.

Friday, 05. March 2021

Simon Willison

The SOC2 Starting Seven

The SOC2 Starting Seven "So, you plan to sell your startup’s product to big companies one day. Congratu-dolences! [...] Here’s how we’ll try to help: with Seven Things you can do now that will simplify SOC2 for you down the road while making your life, or at least your security posture, materially better in the immediacy. Via @jacobian

The SOC2 Starting Seven

"So, you plan to sell your startup’s product to big companies one day. Congratu-dolences! [...] Here’s how we’ll try to help: with Seven Things you can do now that will simplify SOC2 for you down the road while making your life, or at least your security posture, materially better in the immediacy.

Via @jacobian


MyDigitalFootprint

Updating our board papers for Data Attestation

I have written and read my fair share of board and investment papers over the past 25 years.  This post is not to add to the abundance of excellent work on how to write a better board/ investment paper or what the best structure is - it would annoy you and waste my time.  A classic “board paper” will likely have the following headings: Introduction, Background, Rationale, Structur


I have written and read my fair share of board and investment papers over the past 25 years.  This post is not to add to the abundance of excellent work on how to write a better board/ investment paper or what the best structure is - it would annoy you and waste my time. 

A classic “board paper” will likely have the following headings: Introduction, Background, Rationale, Structure/ Operations, Illustrative Financials & Scenarios, Competition, Risks and Legal. Case by case there are always minor adjustments. Finally, there will be some form of Recommendation inviting the board to note key facts and approve the request.  I believe it is time for the Chair or CEO, with the support of their senior data lead (#CDO) to ask that each board paper has a new section heading called   “Data Attestation.”  Some will favour this as an addition to the main flow, some as a new part of legal, others as an appendix; how and where matters little compared to its intent.

The intention of this new heading and section is that the board receives a *signed* declaration from the proposer(s) and independent data expert, that the proposer has:

proven attestation of the data used in the board paper, 

proven rights to use the data

what difference/delta third-party data makes to the recommendation/outcome

ensured, to best efforts, that there is no bias or selection in the data or analysis

clearly specified any decision making that is or becomes automated

if relevant created the hypothesis before the analysis 

run scenarios using different data and tools

not misled the board using data

highlighted the conflicts of interest between their BSC/KPI and the approval sought


With regard to the independent auditor, this should not be the company's financial auditor or data lake provider; it should be an independent forensic data expert. Audit suggests sampling; this is not about sampling. It is not about creating more hurdles or handing power to an external body; this is about third-party verification and validation. As a company you build a list of experts and cycle through them on a regular basis. The auditor does not need to see the board paper, the outcome from the analysis or the recommendations; they are there to check the attestation and efficacy from end to end. Critical will be proof of their expertise and a large insurance certificate.


Whilst this is not the final wording you will use, it is the intent that is important; this does not remove data risks from the risk section.

Data Attestation

We certify by our signatures that we, the proposer and auditor, can prove to OurCompany PLC Board that we have provable attestation and rights to all the data used in the presentation of this paper.   We have presented in this paper sensitivity of the selected data, model and tools and have provided evidence that different data and analysis tool selection equally favours the recommendation.  We have tested and can verify that our data, analysis, insights and knowledge is traceable and justifiable.  We declare that there are no Conflicts of Interest and no automation of decision making will result from this approval. 

Why do this?

Whilst Directors are collectively accountable and responsible for the decisions they take, right now there is a gap in data skills, and many board members don't know how to test the data that forms the basis of what they are being asked to approve. This is all new and a level of detail that requires deep expertise. This provides an additional safeguard until such time as we can gain sufficient skills at the Board and test data properly. Yes, there is a high duty of care that is already intrinsic in anyone who presents a board paper; however, the data expertise and skills at the majority of senior levels are also well below what we need. If nothing else it will get those presenting to think carefully about data, bias and the ethics of their proposal.






Simon Willison

Git scraping, the five minute lightning talk

I prepared a lightning talk about Git scraping for the NICAR 2021 data journalism conference. In the talk I explain the idea of running scheduled scrapers in GitHub Actions, show some examples and then live code a new scraper for the CDC's vaccination data using the GitHub web interface. Here's the video. Notes from the talk Here's the PG&E outage map that I scraped. The trick her

I prepared a lightning talk about Git scraping for the NICAR 2021 data journalism conference. In the talk I explain the idea of running scheduled scrapers in GitHub Actions, show some examples and then live code a new scraper for the CDC's vaccination data using the GitHub web interface. Here's the video.

Notes from the talk

Here's the PG&E outage map that I scraped. The trick here is to open the browser developer tools network tab, then order resources by size and see if you can find the JSON resource that contains the most interesting data.

I scraped that outage data into simonw/pge-outages - here's the commit history (over 40,000 commits now!)

The scraper code itself is here. I wrote about the project in detail in Tracking PG&E outages by scraping to a git repo - my database of outages is at pge-outages.simonwillison.net and the animation I made of outages over time is attached to this tweet.

Here's a video animation of PG&E's outages from October 5th up until just a few minutes ago pic.twitter.com/50K3BrROZR

- Simon Willison (@simonw) October 28, 2019

The much simpler scraper for the www.fire.ca.gov/incidents website is at simonw/ca-fires-history.

In the video I used that as the template to create a new scraper for CDC vaccination data - their website is https://covid.cdc.gov/covid-data-tracker/#vaccinations and the API I found using the browser developer tools is https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_data.
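The scraping step itself can be a few lines of Python; something like the sketch below fetches the JSON and writes it to a file for a scheduled GitHub Actions workflow to commit (an illustration, not the exact code in the repo).

# Minimal Git scraping fetch step: download the CDC vaccination JSON and save
# it to disk; the surrounding GitHub Actions workflow commits the file when it
# changes. Requires the requests library.
import json

import requests

URL = "https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_data"

data = requests.get(URL).json()

with open("vaccination_data.json", "w") as f:
    # Pretty-print with sorted keys so diffs between commits stay readable
    json.dump(data, f, indent=2, sort_keys=True)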

The new CDC scraper and the data it has scraped lives in simonw/cdc-vaccination-history.

You can find more examples of Git scraping in the git-scraping GitHub topic.

Thursday, 04. March 2021

FACILELOGIN

Why developer-first IAM and why Okta’s Auth0 acquisition matters?

Why developer-first IAM? And why Okta’s Auth0 acquisition matters? In my previous blog, The Next TCP/IP Moment in Identity, I discussed why the enterprises will demand for developer-first IAM. As every company is becoming a software company, and starting to build their competitive advantage on the software they build, the deve
Why developer-first IAM? And why Okta’s Auth0 acquisition matters?

In my previous blog, The Next TCP/IP Moment in Identity, I discussed why enterprises will demand developer-first IAM. As every company becomes a software company, and starts to build its competitive advantage on the software it builds, developer-first IAM will free developers from the complexities inherent in doing identity integrations.

The announcement yesterday of Okta's intention to acquire Auth0 for $6.5B, which is probably 40 times Auth0's current revenue, is a true validation of the push towards developer-first IAM. However, this is not Okta's first effort towards developer-first IAM. In 2017, Okta acquired Stormpath, a company that built tools to help developers integrate login with their apps. Stormpath soon got absorbed into the Okta platform, yet Okta's selling strategy didn't change. It was always top-down.

In contrast to Okta, Auth0 follows a bottom-up sales strategy. One of the analysts I spoke to a couple of years back told me that the Auth0 name comes up in an inquiry call only when a developer joins in. The acquisition of Auth0 will give Okta access to a broader market. So, it is important for Okta to let Auth0 run as an independent business, as also mentioned in the acquisition announcement.

Auth0 is not just about the product, but also the developer tooling, content and the developer community around it. Okta will surely benefit from this ecosystem around Auth0. Also, for years Azure has been the primary competitor of Okta, and the Auth0 acquisition will make Okta stronger against Azure in the long run.

In late 2020, in one of its earnings calls, Okta announced that it sees the total market for its workplace identity management software as $30 billion, but it sees another, additional market for customer identity software at $25 billion. In the customer identity space, when we talk to enterprises, they bring in their unique requirements. In many cases they look for a product that can be used to build an agile, event-driven customer identity (CIAM) platform that can flex to meet frequently changing business requirements. Developer-first IAM is more critical in building a CIAM solution than in workforce IAM. In the latest Forrester report on CIAM, Auth0 is way ahead of Okta in terms of the current product offering. Okta will probably use Auth0 to increase its presence in the CIAM domain. Like Microsoft has Azure AD to focus on workforce IAM and Azure B2C to focus on CIAM, Auth0 could be Okta's offering for CIAM.

Forrester CIAM Wave 2020

When Auth0 was founded in 2013, it picked a less-saturated (it would even be right to say fresh), future-driven market segment in IAM: developer-first. Developer-first is all about the experience. Auth0 did extremely well there. They cared about building the right level of developer experience, rather than the feature set. Even today, Auth0 stands against others not because of its feature set, but because of the experience it builds for developers.

The developer-first experience is not only about the product. How you build product features in a developer-first manner, probably contribute 50% of the total effort. The rest is about developer tooling, SDKs, and the content. Then again, the content is not just about the product. The larger portion of the content needs to be on how to integrate the product with a larger ecosystem — and also to teach developers the basic constructs, concepts and best practices in IAM. That helps to win the developer trust!

How the Auth0 website looked in July 2014

Auth0's vision towards developer-first IAM has evolved over time. The way the Auth0 website itself has evolved in terms of messaging and presentation reflects how much more they want to be on the enterprise side today than in the past. As Auth0 claims 9000+ enterprise customers, which probably generate $150M in annual revenue, the average sales value (ASV) would be around $16,500. That probably means the majority of Auth0 customers are in the free/developer/developer pro tiers. So, it's understandable why they want to bring in an enterprise look and messaging, probably moving forward to focus more on a top-down sales strategy. The big Contact Sales button over the Signup button on the Auth0 website today sums up this direction to some extent. Okta's acquisition of Auth0 could probably strengthen this move.

How the Auth0 website looks today (March 2021)

Like Microsoft’s acquisition of GitHub for $7.5B in 2018, Okta’s acquisition of Auth0 for $6.5B is a win for developers! Congratulations to both Auth0 and Okta and very much looking forward to see the journey of Auth0 together with Okta.

Why developer-first IAM and why Okta’s Auth0 acquisition matters? was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyDigitalFootprint

Trust is not a thing or a destination, but an outcome from a transformation

Trust is not a thing or a destination but an outcome of a transition. I have written over 214 articles on “trust” over the past 12 years.  I have read endless books and done countless presentations on the topic of TRUST. I have always searched for the relationships between trust and strategy, value, consent, privacy, identity, data and risk.   The well-reasoned articles include Imagi
Trust is not a thing or a destination but an outcome of a transition.

I have written over 214 articles on “trust” over the past 12 years.  I have read endless books and done countless presentations on the topic of TRUST. I have always searched for the relationships between trust and strategy, value, consent, privacy, identity, data and risk.   The well-reasoned articles include Imaging a Digital Strategy starting from TRUST, Trust is not a destination!, How can Brands restore user trust?  A segmentation model based on trust, The relationship between Trust, Risk and Privacy.  This article brings together some of the thinking already explored in previous articles but as the new drawing below.  

It started as I was playing with Laplace to Fourier and back. These are the transformations between the time and frequency domains. Whilst time and frequency are related, to move between the domains in maths requires some fancy work. On either side of the transformation sit TIME and FREQUENCY, and between them are the "transformations." A bit like our digital transformations, there is the before and the after. This somehow got me thinking about trust: the before state and the after state, with the transformation being "trust."
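For reference, the standard Fourier transform pair and the Laplace transform, written in one common convention, are:

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt,
\qquad
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega,
\qquad
F(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt.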

The before state is untrusted or less trusted, and because of specific actions, the end state will either be more trusting, eventually leading to being trusted (a final state), or less trusted, eventually leading back to untrusted. "Trust" itself never exists - it happens to be the word we use for the transition or transformation of state. However, as we socially don't want to talk about the existing state of trust I have in you, or what state of trust I have after the activities/actions, we use the word "trust" to hide, confuse and remain unclear; perhaps we don't know ourselves.

Do you trust me? This is not a very good question; what do you trust me for would be far better, but that is often too raw, so we prefer the former.

Does this framing of trust help?

We might adopt it within the team to allow us to define the starting and endpoints and the journey, avoiding the word naked word “trust.”  We also need to recognise hysteresis, which creates different paths between becoming more trustworthy and either losing trust (time and inaction) or the destruction of trust by action.  I would expect everyone has a different hysteresis, and it will vary by age, experience, value, relationships, chemistry and time.  We are all on the journey, but we also need to determine who is on the journey with us and if we want to walk at the pace of the slowest or leave a few behind.   

Perhaps there is a good reason we find it hard to define trust?



ESG and the elephants in the room

There are more elephants in the room than you imagined!   When we unpack climate and ESG, it is too big, so we don’t know what to do.  However, this post unpacks the “too big” so that we can individually identify the one thing we can all do that will make a difference.  The 1st elephant in the room Never teach an elephant to sing; it wastes your time and annoys the e

There are more elephants in the room than you imagined!  

When we unpack climate and ESG, it is too big, so we don’t know what to do.  However, this post unpacks the “too big” so that we can individually identify the one thing we can all do that will make a difference. 








The 1st elephant in the room

Never teach an elephant to sing; it wastes your time and annoys the elephant.

This elephant represents the big issues: politics, geopolitics, policy, human behaviours, policing, crime, cyber-attacks, climate change, poverty, tax, ESG, economics, capitalism, growth, sustainability, vaccines, circular economy, transparency and equality, as examples. They are all so big and complex that we cannot get our heads around them. No one person owns the problem. We ignore those who go on about them, often by putting them into the category of philosophy, because we don't understand them or know what we can do, believing that our individual actions will make no difference.

 


The 2nd elephant in the room

In the parable, the blind men are each asked about their understanding of an elephant but are limited to one part. If you cannot see the whole picture but use a different sense to try and make sense of the elephant, you will likely miss something. How do you know your view is limited? The message is that if you cannot imagine or see the whole elephant but are limited or framed, you only get the outcome you expected. Our limits in experience, diversity, understanding of bias, behaviour, path dependency and data structure mean we are all blind but think we can see.


The 3rd elephant in the room

How do you eat an elephant? One small chunk at a time

It is too big to swallow whole, so cut up the big problem into smaller chunks.  What is the problem I have to solve today?  What is the work to be done?

Each chunk is easy to focus on and has items everyone can identify with, but someone must know it is an elephant in order to set the priorities and decide which tasks matter.


The 4th elephant in the room

When the client asks if you can do it cheaper. 

Where priorities, budgets and the economics of short-termism override long term purpose. 

Enough said. 

Observation

What can I do that makes a difference?  Knowing that the elephant in the room is bigger than anyone can understand (ESG, climate, policy, behaviours, economics, geopolitics, humanity), I have to accept that I cannot have a complete HD picture of how everything connects; I must accept that I am blind, but I can sense a small part and owe a duty to connect and share. Together we will work out the first chunk we need to eat, knowing that we cannot eat it all at once.  

What can I do that makes a difference? 


ESG and Climate

We cannot individually do anything about it: it is too big, and my actions are too small. Some leaders do have a clear picture, but how can we break up the big into smaller steps?  Whilst I hope we can debate the priorities of those steps, there is one action that each of us can take.

The problem that is too big

The majority of our agreed economic understanding rests in several supply/ demand models; how we have implemented the controls of the models varies across the globe, including market forces, regulation, government (state control) and stewardship codes. 

Using a three-axis plot, where the axes are Values/ Principles, Accountability/ Obligations and Health of Eco-systems, we create an inner blue box in Figure 1 representing today’s overall economic market and controls.  The values and principles tend to focus on self, accountability tends towards an elected authority (self-appointed, director or elected in a democracy), and the health of the ecosystem is focussed only on the one that you interact with (prime), and usually to ensure your own health rather than everyone’s. There are many exceptions to this model. 

We understand from the Business RoundTable 2019 report that companies in western capital markets are encouraged to think broader and include values that benefit society and all dependent ecosystems.  The orange box in figure 1.

ESG thinking pushes all the boundaries further, such that our values/ principles create a sustainable earth, society becomes accountable for its actions, and all ecosystems become healthy (this solves the boundary problem with circular economy thinking, where you cannot ignore that your actions have an effect on everyone else). This is the larger green box in Figure 1.

One elephant in the room is the requirement to move our economic model in three dimensions simultaneously.  Walking forward is naturally comfortable (one dimension). Running uphill is harder (two dimensions). Going forward, upwards and outwards at the same time, defying gravity, is challenging; yet this is the big-picture elephant of ESG: change everything at the same time.



Figure 1: The Big ESG problem

Should we move one direction at a time to make this more digestible?

There is a debate I believe we need to have about which direction we should focus on as a priority. Figure 2 offers a suggestion: how do we move from where we are to where we want to be in phases, rather than trying to do everything at once?  However, whilst digestible, this might be too slow. We need to agree on a map, but who should own it?

Figure 2, where do we go first?


Where are we on the journey right now?

We are making progress in moving Directors from shareholder-primacy mandates to reporting on ESG compliance codes and more informed fiduciary duties. Audit (finance) is a mess, is not reliable, and is not a solution for ESG. We are making progress with capital and investment markets with the addition of stewardship codes, forcing directors to comply as the investment base they need starts to demand change. Thank you, Blackrock and Hermes.  We are at the early stages of ESG and of using public data to make informed investment decisions. However, I believe ESG assessment based on public data is a significant problem, as it introduces quantum risk. ESG public data has three fundamental issues, as the boundaries we are dependent on have become less clear.

1. The quality of the data and implied knowledge we receive from our direct and dependent* eco-system, even if based on audit of financial and non-financial data, is unreliable and is increasingly complicated due to different data purposes and ontologies.

2. The quality of the knowledge we receive from our indirect and interdependent** eco-system, even if based on audit of financial and non-financial data, is unreliable and is increasingly complicated due to different data purposes and ontologies.

3. Who is responsible and accountable at second- and third-order data boundaries? (The assumption is that the first boundary is direct and already within our risk model's control.) This includes the reuse of the same “data” in several justifications. 

* Dependent: balancing being able to get what I want by my own effort against being contingent on, or determined by, the actions of someone else to make it work. 

** Interdependent: combining my efforts with the efforts of others to achieve successful outcomes together, though this does not have to be mutual or controlled.


Therefore I believe we need to look at a much deeper level of data sharing to achieve sustainability, and the first step is an ESG API for data. 
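
To make the idea concrete, here is a minimal sketch of what a record returned by such an ESG data API could look like, written in TypeScript. The field names, the attestation shape and the endpoint path are assumptions for illustration only; no standard ontology or API is implied by this post.

interface Attestation {
  attestor: string;        // who vouches for the figure (assumed field)
  method: string;          // e.g. "metered", "estimated", "audited"
  signedAt: string;        // ISO 8601 timestamp
  evidenceUri?: string;    // link to underlying evidence, if published
}

interface EsgDataPoint {
  subject: string;         // legal entity identifier of the reporting organisation
  metric: string;          // term from an agreed top-level ontology, e.g. "scope1-emissions"
  value: number;
  unit: string;            // e.g. "tCO2e"
  period: { from: string; to: string };
  ontology: string;        // URI of the ontology the metric term comes from
  provenance: string[];    // lineage: upstream datasets this figure was derived from
  attestations: Attestation[];
}

// Hypothetical usage: pull raw, attested data rather than a scripted PDF report.
async function fetchEsgData(apiBase: string, entityId: string): Promise<EsgDataPoint[]> {
  const res = await fetch(`${apiBase}/esg/${entityId}`); // assumed endpoint shape
  if (!res.ok) throw new Error(`ESG API request failed: ${res.status}`);
  return (await res.json()) as EsgDataPoint[];
}

The point of the sketch is that every figure carries its ontology, provenance and attestation with it, which is what makes the data usable for decisions rather than for greenwashing.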

How can we make this happen together?

Investment industry - move your power to follow and support those who implement a Data API for ESG data the quickest. Let’s get data with attestation to make better decisions.

Companies and Charities - collect the data and make it available publicly and not via a highly scripted and edited document.  Drive out multiple uses of the same data and greenwashing

Academic/ Standards/ Global bodies - create a top-level ontology for sharing this data set.  Ensure that it is Creative Commons, owned by everyone to benefit everyone. 

Government - do your part and lead in collecting and publishing the government’s data - show the way.

Regulators - lobby to gain the power to strike off and fine Directors who are part of the creation of false data or are not fast enough to react

Media/ Journalism - check the data, analysis and insights, and don't promote your click-model negative headlines over purpose, knowing that the regulators will have the power, on the particular issue of data, to remove you.  Be the body that brings about change and not the one that looks to score wins at society's expense.

Researchers/ Consulting - Find ways to help and support the transformation, not by shaming or by creating best practice but by highlighting where we need to focus so we don’t leave anyone behind. Let’s not celebrate successes until we all find a sustainable place. We cannot run faster than our weakest partner. 

Individuals - we make change possible because somewhere in here we all have a role, and next is what we can all do.  Personally commit to ensuring that data, with rights and attestation, is collected in everything done.  This will ensure I can make my choices based on data that has provenance and lineage. 


Together, with data, we can achieve the sustainable goals we want.



Simon Willison

google-cloud-4-words

google-cloud-4-words This is really useful: every Google Cloud service (all 250 of them) with a four word description explaining what it does. I'd love to see the same thing for AWS. UPDATE: Turns out I had - I can't link to other posts from blogmark descriptions yet, so search "aws explained" on this site to find it.

google-cloud-4-words

This is really useful: every Google Cloud service (all 250 of them) with a four word description explaining what it does. I'd love to see the same thing for AWS. UPDATE: Turns out I had - I can't link to other posts from blogmark descriptions yet, so search "aws explained" on this site to find it.

Tuesday, 02. March 2021

Bill Wendel's Real Estate Cafe

Real Estate Cartel: Anti-competitive practices deny consumers $50 BILLION annual saving

#NCPW: In 1997, two years after #RealEstateCafe opened its 1,200 SF storefront in Cambridge MA, just 2% of buyers found houses themselves; now it’s half. Given internet… The post Real Estate Cartel: Anti-competitive practices deny consumers $50 BILLION annual saving first appeared on Real Estate Cafe.

#NCPW: In 1997, two years after #RealEstateCafe opened its 1,200 SF storefront in Cambridge MA, just 2% of buyers found houses themselves; now it’s half. Given internet…

The post Real Estate Cartel: Anti-competitive practices deny consumers $50 BILLION annual saving first appeared on Real Estate Cafe.

Monday, 01. March 2021

Doc Searls Weblog

A toast to the fools standing high on broadcasting’s hill

In Winter, the cap of dark on half the Earth is cocked to the north. So, as the planet spins, places farther north get more night in the winter. In McGrath, Alaska, at close to sixty-three degrees north, most of the day is dark. This would be discouraging to most people, but to Paul B. […]

In Winter, the cap of dark on half the Earth is cocked to the north. So, as the planet spins, places farther north get more night in the winter. In McGrath, Alaska, at close to sixty-three degrees north, most of the day is dark. This would be discouraging to most people, but to Paul B. Walker it’s a blessing. Because Paul is a DXer.

In the radio world, DX stands for distance, and DXing is listening to distant radio stations. Thanks to that darkness, Paul listens to AM stations of all sizes, from Turkey to Tennessee, Thailand to Norway. And last night, New Zealand. Specifically, NewsTalk ZB‘s main AM signal at 1035 on the AM (what used to be the) dial. According to distancecalculator.net, the signal traveled 11886.34 km, or 7385.83 miles, across the face of the earth. In fact it flew much farther, since the signal needed to bounce up and down off the E layer of the ionosphere and the surface of the ocean multiple times between Wellington and McGrath. While that distance is no big deal on shortwave (which bounces off a higher layer) and no deal at all on the Internet (where we are all zero distance apart), for a DXer that’s like hauling in a fish the size of a boat.
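
For the curious, here is a quick TypeScript sketch of the great-circle arithmetic behind that figure, using the haversine formula. The coordinates for McGrath and Wellington are approximate assumptions, not values taken from the post.

// Haversine great-circle distance over the surface of a spherical Earth.
const EARTH_RADIUS_KM = 6371;

function toRadians(deg: number): number {
  return (deg * Math.PI) / 180;
}

function greatCircleKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLat = toRadians(lat2 - lat1);
  const dLon = toRadians(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Approximate coordinates (assumed): McGrath, Alaska and Wellington, New Zealand.
console.log(greatCircleKm(62.95, -155.6, -41.29, 174.78).toFixed(0)); // roughly 11,890 km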

In this sense alone, Paul and I are kindred souls. As a boy and a young man, I was a devout DXer too. I logged thousands of AM and FM stations, from my homes in New Jersey and North Carolina. (Here is a collection of QSL cards I got from stations to which I reported reception, in 1963, when I was a sophomore in high school.) More importantly, learning about all these distant stations sparked my interest in geography, electronics, geology, weather, astronomy, history and other adjacent fields. By the time I was a teen, I could draw all the states of the country, freehand, and name their capitals too. And that was on top of knowing on sight the likely purpose of every broadcast tower and antenna I saw. For example, I can tell you (and do in the mouse-over call-outs you’ll see if you click on the photo) what FM and TV station transmits from every antenna in this picture (of Mt. Wilson, above Los Angeles):

As a photographer, I’ve shot thousands of pictures of towers and antennas. (See here.) In fact, that’s how I met Paul, who created and runs a private Facebook group called (no kidding) “I Take Pictures of Transmitter Sites.” This is not a small group. It has 14,100 members, and is one of the most active and engaging groups I have ever joined.

One reason it’s so active is that many of the members (and perhaps most of them) are, or were, engineers at radio and TV stations, and their knowledge of many topics, individually and collectively, is massive.

There is so much you need to know about the world if you’re a broadcast engineer.

On AM you have to know about ground conductivity, directional arrays (required so stations don’t interfere with each other), skywave signals such as the ones Paul catches and the effects of tower length on the sizes and shapes of the signals they radiate.

On FM you need to know the relative and combined advantages of antenna height and power, how different numbers of stacked antennas concentrate signal strength toward and below the horizon, the shadowing effects of buildings and terrain, and how the capacitive properties of the earth’s troposphere can sometimes bend signals so they go much farther than they would normally.

On TV you used to care about roughly the same issues as FM (which, in North America is sandwiched between the two original TV bands). Now you need to know a raft of stuff about how digital transmission works as well.

And that’s just a small sampling of what needs to be known in all three forms of broadcasting. And the largest body of knowledge in all three domains is what actually happens to signals in the physical world—which differs enormously from place to place, and region to region.

All of this gives the engineer a profound sense of what comprises the physical world, and how it helps, limits, and otherwise interacts with the electronic one. Everyone in the business is like the fool on the Beatles’ hill, seeing the sun going down and the world spinning round. And, while it’s not a dying profession, it’s a shrinking one occupied by especially stalwart souls. And my hat’s off to them.

By the way, you can actually hear Paul Walker for yourself, in two places. One is as a guest on this Reality 2.0 podcast, which I did in January. The other is live on KSKO/89.5 in McGrath, where he’s the program director. You don’t need to be a DXer to enjoy either one.


Damien Bod

Using Azure AD groups authorization in ASP.NET Core for an Azure Blob Storage

This post shows how Azure AD groups could be used to implement authorization for an Azure Blob storage and used in an ASP.NET Core Razor page application to authorize the identities. The groups are assigned the roles in the Azure Storage. Azure AD users are added to the Azure AD groups and inherit the group […]

This post shows how Azure AD groups could be used to implement authorization for an Azure Blob storage and used in an ASP.NET Core Razor page application to authorize the identities. The groups are assigned the roles in the Azure Storage. Azure AD users are added to the Azure AD groups and inherit the group roles. The group ID is added to the claims of the tokens which can be used for authorization in the client application.

Code: https://github.com/damienbod/AspNetCoreAzureAdAzureStorage

Blogs in this series

Secure Azure AD User File Upload with Azure AD Storage and ASP.NET Core
Adding ASP.NET Core authorization for an Azure Blob Storage and Azure AD users using role assignments
Using Azure AD groups authorization in ASP.NET Core for an Azure Blob Storage

Setup the groups in Azure AD

To implement this, two new user groups are created inside the Azure AD directory.

The required Azure AD users are added to the groups.

Add the role assignment for the groups to Azure Storage

The Azure Storage which was created in the previous post is opened and the new Azure AD groups can be assigned roles. The Storage Blob Contributor group and the Storage Blob Reader group are added to the Azure Storage role assignments.

You can see that the Storage Contributors Azure AD group is assigned the Storage Blob Data Contributor role and the Storage Reader Azure AD group is assigned the Storage Blob Data Reader role.

Add the group IDs to the tokens in Azure App registrations

Now the groups can be added to the id_token and the access token in the Azure App registrations. If you use a lot of groups in your organisation, you might not want to do this due to token size, but instead use Microsoft Graph in the applications to get the groups for the authenticated Identity. In the Token Configuration blade of the Azure App registration, the groups can be added as an optional claim.

To use the groups in the ASP.NET Core web application, only the security groups are required.

Implement the ASP.NET Core authorization handlers

The StorageBlobDataContributorRoleHandler implements the ASP.NET Core handler used to authorize the identities. The handler implements AuthorizationHandler with the required requirement. It retrieves the group claim from the user's claims and checks it against the ID of the group we assigned roles to in the Azure Storage. The claims are extracted from the id_token or the user info endpoint. Only the group is validated in the ASP.NET Core application, not the actual role which is required for the Azure Storage access. There is no direct authorization against the Azure Storage Blob container; the authorization is done through the Azure AD groups.

By using the group claims, no extra API call is required to authorize the identity using the application. This is great, as long as the tokens don't get too large in size.

public class StorageBlobDataContributorRoleHandler : AuthorizationHandler<StorageBlobDataContributorRoleRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        StorageBlobDataContributorRoleRequirement requirement)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));
        if (requirement == null)
            throw new ArgumentNullException(nameof(requirement));

        // StorageContributorsAzureADfiles
        var groupId = "6705345e-c37e-4f7a-b2d9-e2f43e029524";
        var spIdUserGroup = context.User.Claims
            .FirstOrDefault(t => t.Type == "groups" && t.Value == groupId);

        if (spIdUserGroup != null)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

The handlers with the requirements are registered in the application in the Startup class. Policies are created for the requirements, which can then be used in the application. Microsoft.Identity.Web is used to authenticate the user and the application.

services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddInMemoryTokenCaches();

services.AddAuthorization(options =>
{
    options.AddPolicy("StorageBlobDataContributorPolicy", policyIsAdminRequirement =>
    {
        policyIsAdminRequirement
            .Requirements
            .Add(new StorageBlobDataContributorRoleRequirement());
    });
    options.AddPolicy("StorageBlobDataReaderPolicy", policyIsAdminRequirement =>
    {
        policyIsAdminRequirement
            .Requirements
            .Add(new StorageBlobDataReaderRoleRequirement());
    });
});

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

The authorization policies can be applied as attributes on the class of ASP.NET Core Razor pages.

[Authorize(Policy = "StorageBlobDataContributorPolicy")]
[AuthorizeForScopes(Scopes = new string[] { "https://storage.azure.com/user_impersonation" })]
public class AzStorageFilesModel : PageModel
{
    // code ...

When the application is started from Visual Studio, the group claims can be viewed and the handler will succeed if the Azure groups, users and role assignments are configured correctly.

Azure AD groups can be used in this way to manage access to Azure Storage in an ASP.NET Core application. This works great but is very dependent on Azure administration. For this to work well, you need access to and control over the Azure AD tenant, which is not always the case in companies. If the Azure Storage was extended with Tables or Queues, the roles can be applied to new groups or the existing ones. A lot depends on how the users, groups and applications are managed.

Links

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

Using Azure Management Libraries for .NET to manage Azure AD users, groups, and RBAC Role Assignments

https://management.azure.com/subscriptions/subscriptionId/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01

https://docs.microsoft.com/en-us/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-authentication

https://docs.microsoft.com/en-us/rest/api/authorization/role-assignment-rest-sample

https://github.com/mderriey/azure-identity-livestream

https://blog.soft-cor.com/empowering-developer-teams-to-manage-their-own-azure-rbac-permissions-in-highly-regulated-industries/


Mike Jones: self-issued

Second Version of W3C Web Authentication (WebAuthn) advances to Proposed Recommendation (PR)

The World Wide Web Consortium (W3C) has published this Proposed Recommendation (PR) for the Web Authentication (WebAuthn) Level 2 specification, bringing the second version of WebAuthn one step closer to becoming a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them for user verification enhancements, manageability, ent

The World Wide Web Consortium (W3C) has published this Proposed Recommendation (PR) for the Web Authentication (WebAuthn) Level 2 specification, bringing the second version of WebAuthn one step closer to becoming a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them for user verification enhancements, manageability, enterprise features, and an Apple attestation format.


Phil Windley's Technometria

Announcing Pico Engine 1.0

Summary: I'm excited to announce a new, stable, production-ready pico engine. The latest release of the Pico Engine (1.X) provides a more modular design that better supports future enhancements and allows picos to be less dependent on a specific engine for operation. The pico engine creates and manages picos.1 Picos (persistent compute objects) are internet-first, persistent, actors

Summary: I'm excited to announce a new, stable, production-ready pico engine. The latest release of the Pico Engine (1.X) provides a more modular design that better supports future enhancements and allows picos to be less dependent on a specific engine for operation.

The pico engine creates and manages picos.1 Picos (persistent compute objects) are internet-first, persistent actors that are a good choice for building reactive systems—especially in the Internet of Things.

Pico engine is the name we gave to the node.js rewrite of the Kynetx Rules Engine back in 2017. Matthew Wright and Bruce Conrad have been the principal developers of the pico engine.

The 2017 rewrite (Pico Engine 0.X) was a great success. When we started that project, I listed speed, internet-first, small deployment, and attribute-based event authorization as the goals. The 0.X rewrite achieved all of these. The new engine was small enough to be able to be deployed on Raspberry Pi's and other small computers and yet was significantly faster. One test we did on a 2015 13" Macbook Pro handled 44,504 events in over 8000 separate picos in 35 minutes and 19 seconds. The throughput was 21 events per second or 47.6 milliseconds per request.

This past year Matthew and Bruce reimplemented the pico engine with some significant improvements and architectural changes. We've released that as Pico Engine 1.X. This blog post discusses the improvements in Pico Engine 1.X, after a brief introduction of picos so you'll know why you should care.

Picos

Picos support an actor model of distributed computation. Picos have the following three properties. In response to a received message,

picos send messages to other picos—Picos respond to events and queries by running rules. Depending on the rules installed, a pico may raise events for itself or other picos.
picos create other picos—Picos can create and delete other picos, resulting in a parent-child hierarchy of picos.
picos change their internal state (which can affect their behavior when the next message is received)—Each pico has a set of persistent variables that can only be affected by rules that run in response to events.

I describe picos and their API and programming model in more detail elsewhere. Event-driven systems, like those built from picos, can be used to create systems that meet the Reactive Manifesto.

Despite the parent-child hierarchy, picos can be arranged in a heterarchical network for peer-to-peer communication and computation. As mentioned, picos support direct asynchronous messaging by sending events to other picos. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on the bus with event scenarios declared in the event expressions. Event expressions can specify simple single-event matches, or complicated event relationships with temporal ordering. Rules whose event expressions match are scheduled for execution. Executing rules may raise additional events. More detail about the event loop and pico execution model is available elsewhere.

Each pico presents a unique Event-Query API that is dependent on the specific rulesets installed in the pico. Picos share nothing with other picos except through messages exchanged between them. Picos don't know and can't directly access or affect the internal state of another pico.
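
As an illustration of what driving such an event-query API from outside a pico can look like, here is a minimal TypeScript sketch that posts an event and then runs a query over HTTP. The URL shapes, the channel identifier and the domain, type, ruleset and function names are all assumptions for illustration; the actual paths exposed by a given pico engine and ruleset will differ.

// Minimal sketch, assuming the engine exposes per-channel HTTP endpoints.
const engine = "http://localhost:3000";   // assumed engine address
const eci = "example-channel-id";         // hypothetical event channel identifier (ECI)

// Raise an event on the pico's internal event bus.
async function raiseEvent(domain: string, type: string, attrs: Record<string, unknown>) {
  const res = await fetch(`${engine}/c/${eci}/event/${domain}/${type}`, { // assumed URL shape
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(attrs),
  });
  return res.json();
}

// Query a function exposed by a ruleset installed in the pico.
async function query(rid: string, name: string) {
  const res = await fetch(`${engine}/c/${eci}/query/${rid}/${name}`);     // assumed URL shape
  return res.json();
}

// Hypothetical usage: report a temperature reading, then read back the last one.
async function demo() {
  await raiseEvent("sensor", "new_reading", { temperature: 21.5 });
  console.log(await query("io.example.sensor", "lastReading"));
}

The important point is that the API surface is defined by the rulesets installed in the pico, so two picos with different rulesets present different event-query APIs on their channels.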

As a result of their design, picos exhibit the following important properties:

Lock-free concurrency—picos respond to messages without locks.
Isolation—state changes in one pico cannot affect the state in other picos.
Location transparency—picos can live on multiple hosts and so computation can be scaled easily and across network boundaries.
Loose coupling—picos are only dependent on one another to the extent of their design.

Pico Engine 1.0

Version 1.0 is a rewrite of pico-engine that introduces major improvements:

A more pico-centric architecture that makes picos less dependent on a particular engine.
A more modular design that supports future improvements and makes the engine code easier to maintain and understand.
Ruleset versioning and sharing to facilitate decentralized code sharing.
Better, attribute-based channel policies for more secure system architecture.
A new UI written in React that uses the event-query APIs of the picos themselves to render.

One of our goals for future pico ecosystems is to build not just distributed, but decentralized peer-to-peer systems. One of the features we'd very much like picos to have is the ability to move between engines seamlessly and with little friction. Pico engine 1.X better supports this roadmap.

Figure 1 shows a block diagram of the primary components. The new engine is built on top of two primary modules: pico-framework and select-when.

Figure 1: Pico Engine Modular Architecture (click to enlarge)

The pico-framework handles the building blocks of a Pico based system:

Pico lifecycle—picos exist from the time they're created until they're deleted.
Pico parent/child relationships—Every pico, except for the root pico, has a parent. All picos may have children.
Events—picos respond to events based on the rules that are installed in the pico. The pico-framework makes use of the select_when library to create rules that pattern match on event streams.
Queries—picos can also respond to queries based on the rulesets that are installed in the pico.
Channels—Events and queries arrive on channels that are created and deleted. Access control policies for events and queries on a particular channel are also managed by the pico-framework.
Rulesets—the framework manages installing, caching, flushing, and sandboxing rulesets.
Persistence—all picos have persistence and can manage persistent data. The pico-framework uses Levelup to define an interface for a LevelDB compatible data store and uses it to handle persistence of picos.

The pico-framework is language agnostic. Pico-engine-core combines pico-framework with facilities for rendering KRL, the rule language used to program rulesets. KRL rulesets are compiled to Javascript for pico-framework. Pico-engine-core contains a registry (transparent to the user) that caches compiled rulesets that have been installed in picos. In addition, pico-engine-core includes a number of standard libraries for KRL. Rulesets are compiled to Javascript for execution. The Javascript produced by the rewrite is much more readable than that rendered by the 0.X engine. Because of the new modular design, rulesets written entirely in Javascript can be added to a pico system.

The pico engine combines the pico-engine-core with a LevelDB-compliant persistent store, an HTTP server, a log writer, and a ruleset loader for full functionality.

Wrangler

Wrangler is the pico operating system. Wrangler presents an event-query API for picos that supports programmatically managing the pico lifecycle, channels and policies, and rulesets. Every pico created by the pico engine has Wrangler installed automatically to aid in programmatically interacting with picos.

One of the goals of the new pico engine was to support picos moving between engines. Picos relied too heavily on direct interaction with the engine APIs in 0.X and thus were more tightly coupled to the engine than is necessary. The 1.0 engine minimizes the coupling to the largest extent possible. Wrangler, written in KRL, builds upon the core functionality provided by the engine to provide developers with an API for building pico systems programmatically. A great example of that is the Pico Engine Developer UI, discussed next.

Pico Engine Developer UI

Another significant change to the pico engine with the 1.0 release was a rewritten Developer UI. In 0.X, the UI was hard coded into the engine. The 1.X UI is a single page web application (SPA) written in React. The SPA uses an API that the engine provides to get the channel identifier (ECI) for the root pico in the engine. The UI SPA uses that ECI to connect to the API implemented by the io.picolabs.pico-engine-ui.krl ruleset (which is installed automatically in every pico).

Figure 2 shows the initial Developer UI screen. The display is the network of picos in the engine. Black lines represent parent-child relationships and form a tree with the root pico at the root. The pink lines are subscriptions between picos—two-way channels formed by exchanging ECIs. Subscriptions are used to form peer-to-peer (heterarchical) relationships between picos and do not necessarily have to be on the same engine.

Figure 2: Pico Engine UI (click to enlarge)

When a box representing a pico in the Developer UI is clicked, the display shows an interface for performing actions on the pico as shown in Figure 3. The interface shows a number of tabs.

The About tab shows information about the pico, including its parent and children. The interface allows information about the pico to be changed and new children to be created.
The Rulesets tab shows any rulesets installed in the pico, allows them to be flushed from the ruleset cache, and allows new rulesets to be installed.
The Channels tab is used to manage channels and channel policies.
The Logging tab shows execution logs for the pico.
The Testing tab provides an interface for exercising the event-query APIs that the rulesets installed in the pico provide.
The Subscriptions tab provides an interface for managing the pico's subscriptions and creating new ones.

Figure 3: Pico Developer Interface (click to enlarge)

Because the Developer UI is just using the APIs provided by the pico, everything it does (and more) can be done programmatically by code running in the picos themselves. Most useful pico systems will be created and managed programmatically using Wrangler. The Developer UI provides a convenient console for exploring and testing during development. The io.picolabs.pico-engine-ui.krl ruleset can be replaced or augmented by another ruleset the developer installs on the pico to provide a different interface to the pico. Interesting pico-based systems will have applications that interact with their APIs to present the user interface. For example, Manifold is a SPA written in React that creates a system of picos for use in IoT applications.

Come Contribute

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

In addition to the work on the engine itself, one of the primary workstreams at present is to complete Bruce Conrad's excellent work to use DIDs and DIDComm as the basis for inter-pico communication, called ACA-Pico (Aries Cloud Agent - Pico). We're holding monthly meetings and there's a repository of current work complete with issues. This work is important because it will replace the current subscriptions method of connecting heterarchies of picos with DIDComm. This has the obvious advantages of being more secure and aligned with an important emerging standard. More importantly, because DIDComm is protocological, this will support protocol-based interactions between picos, including credential exchange.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

Notes
1. The pico engine is to picos as the docker engine is to docker containers.

Photo Credit: Flowers Generative Art from dp792 (Pixabay)

Tags: picos iot krl programming rules

Sunday, 28. February 2021

Bill Wendel's Real Estate Cafe

23rd National Consumer Protection Week: Why is Real Estate still a Blindspot?

Welcome to the 23rd annual National Consumer Protection Week, 2/28 – 3/6/21. Regrettably, other than our own Tweets, real estate once again appears to be… The post 23rd National Consumer Protection Week: Why is Real Estate still a Blindspot? first appeared on Real Estate Cafe.

Welcome to the 23rd annual National Consumer Protection Week, 2/28 – 3/6/21. Regrettably, other than our own Tweets, real estate once again appears to be…

The post 23rd National Consumer Protection Week: Why is Real Estate still a Blindspot? first appeared on Real Estate Cafe.

Friday, 26. February 2021

Phil Windley's Technometria

Building Decentralized Applications with Pico Networks

Summary: Picos make building decentralized applications easy. This blog post shows how a heterarchical sensor network can be built using picos. Picos are designed to form heterarchical, or peer-to-peer, networks by connecting directly with each other. Because picos use an actor model of distributed computation, parent-child relationships are very important. When a pico creates another pico,

Summary: Picos make building decentralized applications easy. This blog post shows how a heterarchical sensor network can be built using picos.

Picos are designed to form heterarchical, or peer-to-peer, networks by connecting directly with each other. Because picos use an actor model of distributed computation, parent-child relationships are very important. When a pico creates another pico, we say that it is the parent and the pico that got created is the child. The parent-child connection allows the creating pico to perform life-cycle management tasks on the newly minted pico such as installing rulesets or even deleting it. And the new pico can create children of its own, and so on.

Building a system of picos for a specific application requires programming them to perform the proper lifecycle management tasks to create the picos that model the application. Wrangler is a ruleset installed in every pico automatically that is the pico operating system. Wrangler provides rules and functions for performing these life-cycle management tasks.

Building a pico application can rarely rely on the hierarchical parent-child relationships that are created as picos are managed. Instead, picos create connections between picos by creating what are called subscriptions, providing bi-directional channels used for raising events to and making queries of the other pico.

This diagram shows a network of temperature sensors built using picos. In the diagram, black lines are parent-child relationships, while pink lines are peer-to-peer relationships between picos.

Temperature Sensor Network (click to enlarge)

There are two picos (one salmon and the other green) labeled a "Sensor Community". These are used for management of the temperature sensor picos (which are purple). These community picos are performing life-cycle management of the various sensor picos that are their children. They can be used to create new sensor picos and delete those no longer needed. Their programming determines what rulesets are installed in the sensor picos. Because of the rulesets installed, they control things like whether the sensor pico is active and how often it updates its temperature. These communities might represent different floors of a building or different departments on a large campus.

Despite the fact that there are two different communities of temperature sensors, the pink lines tell us that there is a network of connections that spans the hierarchical communities to create a single connected graph of sensors. In this case, the sensor picos are programmed to use a gossip protocol to share temperature information and threshold violations with each other. They use a CRDT to keep track of the number of threshold violations currently occurring in the network.
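
To illustrate the kind of state merging involved, here is a minimal TypeScript sketch of a grow-only counter (G-Counter) CRDT of the sort that could count violations across gossiping sensors. It is an illustrative sketch of the general technique, not the KRL implementation the sensor picos actually use; the node identifiers and the gossip round are assumptions.

// Each node only ever increments its own slot; merging takes the element-wise max.
type GCounter = Record<string, number>;   // nodeId -> count of violations seen locally

function increment(counter: GCounter, nodeId: string): GCounter {
  return { ...counter, [nodeId]: (counter[nodeId] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  const merged: GCounter = { ...a };
  for (const [nodeId, count] of Object.entries(b)) {
    merged[nodeId] = Math.max(merged[nodeId] ?? 0, count);
  }
  return merged;
}

function total(counter: GCounter): number {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Hypothetical gossip round: two sensors exchange state and converge on the same total.
let sensorA: GCounter = increment({}, "sensor-a");   // sensor-a observes a violation
let sensorB: GCounter = increment({}, "sensor-b");   // sensor-b observes one too
sensorA = merge(sensorA, sensorB);                    // gossip: B's state reaches A
sensorB = merge(sensorB, sensorA);                    // gossip: A's state reaches B
console.log(total(sensorA), total(sensorB));          // both report 2

A counter that only grows cannot track violations that start and stop; tracking "currently occurring" violations would need something like a PN-counter built from two of these, but the merge idea is the same.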

The community picos are not involved in the network interactions of the sensor picos. The sensor network operates independently of the community picos and does not rely on them for communication. Astute readers will note that both communities are children of a "root" pico. That's an artifact of the way I built this, not a requirement. Every pico engine has a root pico that has no parent. These two communities could have been built on different engines and still created a sensor network that spanned multiple communities operating on multiple engines.

Building decentralized networks of picos is relatively easy because picos provide support for many of the difficult tasks. The actor model of picos makes them naturally concurrent without the need for locks. Picos have persistent, independent state so they do not depend on external data stores. Picos have a persistent identity—they exist with a single identity from the time of their creation until they are deleted. Picos are persistently available, always on and ready to receive messages. You can see more about the programming that goes into creating these systems in these lessons: Pico-Based Systems and Pico to Pico Subscriptions.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

Tags: picos heterarchy p2p


MyDigitalFootprint

Data For Better Decisions. Nature or Nurture?

“Every” management student has had to answer the exam question: “Leadership/ management: Nature or Nurture? - discuss” It is a paradox from either side of the argument, the logical conclusion always highlights the other has truth. The reality of leadership and management is that it is a complex adaptive system, and context enables your nature to emerge and nurturing to mature.  This is


“Every” management student has had to answer the exam question: “Leadership/ management: Nature or Nurture? - discuss”  It is a paradox: from either side of the argument, the logical conclusion always highlights that the other has truth. The reality of leadership and management is that it is a complex adaptive system, and context enables your nature to emerge and your nurturing to mature.  This is important because we also know there is a link between strategy, styles (leadership) and business structures.  In this article, we will unpack how your “nature or nurture” thinking-structure affects outcomes.  Your thinking-structure is also a complex adaptive system, as your peers’ and customers’ thinking and your company’s “culture of structure” thinking affect you. BUT have you considered how your data structure and your data philosophy will have a direct and significant impact on outcomes? 

I’ve known that my neurodiversity package (severe dyslexia, mild high-functioning autism, ADHD) informs how I interpret the world, as my “biological cortex” and gut-brain axis structures process sensory data and memory uniquely. I cannot modify my mind or brain’s basic structure any more than I could change my fingerprint, core DNA or the colour of my eyes; however, I can play with my microbiome. It’s an essential part of what makes me, me. My chemical and biological structures enable me to make sense of the world in my way.  Communication (language, words, music, song, dance, art, sound, movement, gesture) enables us to share the sense we create from patterns and align with others who approximate the same (tribe).  How we actually make sense (learn) is intensely debated, with one camp believing that language is our sense-maker, assuming that we might observe patterns but cannot understand them without language. The other camp holds that we make sense first and then create a language to communicate the insight we have gained.  Irrespective, language allows us both to structure and navigate our space and to share the journey.  

Is how we structure or frame something nature or nurture?

Why does this question matter? We all read, speak and write differently, we all understand differently, but we use questions to clarify understanding, check meaning and create common interpretation.  How we individually structure meaning is determined by the perspective we have been given (nature), by what we have been taught (nurture) and by what we align to (bias).  Our structure is an ontology*. Imagine putting one person from each of the world’s religions or faith groups into a room, but assume no-one can speak the same language.  How and what would they agree or disagree about, as there is no common structure (ontology)?

* An ontology is “the set of things whose existence is acknowledged by a particular theory or system of thought.”  (The Oxford Companion to Philosophy) 

By way of example, the word “Evil” creates meaning for you as soon as you read it. Without a doubt, the nature of evil is a complex and nuanced area too often glossed over in a rush to present or evaluate the defences and theodicies. Let’s unpack the word using the super book “Making Evil” by Dr Julia Shaw.   Evil is an unavoidable part of life that we all encounter, as we all suffer in one sense or another, but what makes something evil is a matter of framing/ structure/ ontology. “Natural evil” is the pain and suffering that arises from the natural world’s functioning or malfunctioning. “Moral evil” is the pain and suffering that results from conscious human action or inaction. It is evil where a person or people are to blame for the suffering that occurs; a crucial point is the blameworthiness of the person at fault. Moral evil, at its heart, results from the free choice of a moral agent. If we just look at the consequences, it is not always possible to tell whether moral evil has taken place or not; we have many mitigations. Therefore a level of moral evil can be found in the degree of intention and consequence.  However, if we compare death rates for natural evil (suffering) and moral evil at its extreme (people killing people), the latter is a rounding error in the suffering in the world. The point is that by framing something, I can create a structure for understanding. Critically, our structures frame our understanding.  

Critically our structures frame our understanding.  

Structures are ontologies, which are philosophies. 

To explore how our structures frame our understanding, ask: what ontology makes us human? When we look at the different views below, we can see humans in many different ways. Note: I have deliberately ignored the classical all-living-things ontology structure (insects, birds, fish, mammals, reptiles, plants).  The point is that your framing, or how you structure something at the start, leads to a guided conclusion. 

Our framing or how we structure something at the start leads to a guided conclusion.

Pick a different structure, and you get a different answer; the ontology creates a natural conclusion.  It is likely that if you pick a philosophy/ ontology/ structure, you can only get what that framing will shine a light on or enable.

It is likely that if you pick a philosophy/ ontology/ structure, you can only get what that structure will shine a light on or enable.

This matters because all data has structure!

I continually explore the future of the digital business, which is underpinned by data, privacy, consent and identity.  Data is Data (it is not oil or sunshine).  What is The Purpose of your Data? Quantum (Data) Risk. Does data create a choice? Data and KPI’s. Wisdom is just more data. Data can create what does not exist. Data is not Memory.

I am asking these questions of directors, boards, senior leadership teams and data managers. Directors are accountable in law for ensuring no discrimination and for health and safety, but how can we know what we know if we don’t know the structure or framing of the data that gave us the result?  If we assume, that is a risk.

Do you have a data philosophy, and what is it?  

What is the structure of your data by silo? Is there a single top-level ontology?  

Do you know the structure/ ontologies of data for your ecosystem? 

What is the attestation and rights of the data in our data lake? How do we check if we are using data for a different purpose than intended?

How would you detect the consequences for your decision making of aggregating data with different ontologies? 

The Directors are accountable in law for discrimination, health and safety, and decision making (s.172 Companies Act), but how can we know what we know if we don’t know or understand the structure/ ontology and its limits?  We can no longer assume, as it is a known risk.

For most, this is already too much detail and in the weeds!  If you want to go deeper, this is a fantastic paper. A survey of Top-Level Ontologies to inform the ontological choices for a Foundation Data Model

 

Summary 

We want to use data to make better decisions and create new value.  However, we need to recognise that our data has a structure (ontology). Our data’s very structure (designed or otherwise) creates bias, prevents certain outcomes from being created, and creates others. The reality is that our structures (ontologies) have already committed to the success or failure of your data strategy and business model.  

The reality is that our structures (ontologies) have already committed to the success or failure of your data strategy and business model.  

As a leader, have you asked what the structure (ontology) of our data is? Has your team informed you about the limitations of your data structure/ ontology on decision making? The CDO should be tasked with providing a map, matrix or translation table showing data sets’ linkage to ontologies and the implications. As we now depend on ecosystem data, do you know the ontologies of others in your ecosystem and how they affect your decision-making capability?  Gaps in data-sharing ontologies affect decisions and create Quantum Risk.  Knowing what assumptions we make about data is essential for investment risk, as we are using public ESG data to make capital allocation decisions without knowing where the data came from, what ontology the data has, or whether the right analysis tools have been used. 


----

Implication 1.  Management and Leadership

The figure-of-8 diagram below shows two interconnected loops. The connection is the mindset of the leader. Outstanding leadership with an open mindset can choose which loop is best at this time.  Poor leadership will stick to the lower closed-mindset loop.  The lower loop never starts with a different way of asking questions or solving problems.  Those in this self-confirming loop stick to the same biases, same decisions and same paradigms.  This creates the idea of one culture, a fixed culture.   We have our way of doing it.  The approach is consistency; the methods are highly efficient, and based on the $1bn profit last year, we know it works and should continue to do the same.  The reward mechanisms, KPIs and balanced scorecards are structured to keep the same highly efficient and effective thinking.  It assumes that yesterday, today and tomorrow will create the same outcomes if we do it the same way.  There is nothing wrong with this, and during times of stability, many have made vast fortunes with this approach.

Great leaders follow this loop when it is right but can also swap to the upper loop.  Such leaders sense a change. Such a “paradigm shift”, a concept identified by the American physicist and philosopher Thomas Kuhn, is “a fundamental change in the basic concepts and experimental practices of a scientific discipline”.  This shift means there is a new structure to understand (ontology). A new structure means there is a need to determine the new culture that creates value with it.  Together a team will form an approach.  At this point, the team will question the shift and the assumptions that have led to change, setting a new mindset for the new order. 

Critically - understanding structure and ontology is crucial, and it is why I believe Data Philosophy, Data Ontology and better decisions based on data are current board issues. Still, they require new skills, are highly detailed, and often require a mind shift. 

Understanding structure and ontology is crucial for a data-driven digital board.



Implication 2.  AI and Automation

The Data Paradox.  How are you supposed to know how to create questions about something that you did not know you had to ask a question of?

Every child reads a book differently. A child learns to use questions to check and refine understanding. Every software engineer reads code differently. A software engineer is forced to check their understanding of the code and function by asking questions and by being asked questions. Whilst every AI will create sense from the data differently (ontology and code), right now, an AI cannot check its understanding of data by asking questions! Who could/would/ should the AI ask the clarification question of, and how do we check the person who answered is without bias? (Note I am not speaking about AQA).  

Sherlock Holmes in The Great Game says, “people do not like telling you things; they love to contradict you. Therefore if you want smart answers, do not ask a question. Instead, give a wrong answer or ask a question in such a way that it already contains the wrong information. It is highly likely that people will correct you”.  Do you do this to your data, or can you do this to your AI?

Today (Feb 2021), we cannot write an “algorithm” that detects if AI is going to create harm (evil). Partly because we cannot agree on “harm”, we cannot determine the unintended consequences, and we cannot bound harm for a person vs society.  

There is a drive towards automation for efficiency based on the analysis of data. As a Director, are you capable of asking the right questions to determine the bias and prejudice created in the automated processes, the data structures, the different ontologies, the data attestation or the processes themselves?  Given that Directors are accountable and responsible, this is a skill every board needs. Where is the audit and quota for these skills, and can you prove they are available to the board? 



Wednesday, 24. February 2021

Damien Bod

Implementing OAuth Pushed Authorisation Requests in Angular

This post shows how an Angular application can be secured using Open ID Connect code flow with PKCE and OAuth Pushed Authorisation Requests using node-oidc-provider as the identity provider. This requires configuration on both the client and the identity provider. Code: par-angular Getting started using Schematics and angular-auth-oidc-client The Angular client is implemented using angular-auth-oi

This post shows how an Angular application can be secured using Open ID Connect code flow with PKCE and OAuth Pushed Authorisation Requests using node-oidc-provider as the identity provider. This requires configuration on both the client and the identity provider.

Code: par-angular

Getting started using Schematics and angular-auth-oidc-client

The Angular client is implemented using angular-auth-oidc-client.

ng add can be used to add the auth bits to your project.

ng add angular-auth-oidc-client

Then select the configuration you require.

The auth-config.module is now created and the configuration can be completed as required. PAR is activated by using the usePushedAuthorisationRequests configuration. The offline_access scope is requested as well as the prompt=consent. The nonce validation after a refresh is ignored.

export function configureAuth(oidcConfigService: OidcConfigService) {
  return () =>
    oidcConfigService.withConfig({
      stsServer: 'http://localhost:3000',
      redirectUrl: window.location.origin,
      postLogoutRedirectUri: window.location.origin,
      clientId: 'client-par-required',
      usePushedAuthorisationRequests: true, // use par Pushed Authorisation Requests
      scope: 'openid profile offline_access',
      responseType: 'code',
      silentRenew: true,
      useRefreshToken: true,
      logLevel: LogLevel.Debug,
      ignoreNonceAfterRefresh: true,
      customParams: {
        prompt: 'consent', // login, consent
      },
    });
}

The node-oidc-provider client configuration require_pushed_authorization_requests is set to true so that Pushed Authorisation Requests can be used. The node-oidc-provider clients need a configuration for the public client which uses refresh tokens. The grant_types ‘refresh_token’, ‘authorization_code’ are added as well as the offline_access scope. As this is still draft, you need to enable Pushed Authorisation Requests before you can use it.

clients: [
  {
    client_id: 'client-par-required',
    token_endpoint_auth_method: 'none',
    application_type: 'web',
    grant_types: ['refresh_token', 'authorization_code'],
    redirect_uris: ['https://localhost:4207'],
    require_pushed_authorization_requests: true,
    scope: 'openid offline_access profile email',
    post_logout_redirect_uris: [ 'https://localhost:4207' ]
  },
],
features: {
  devInteractions: { enabled: false }, // defaults to true
  deviceFlow: { enabled: true }, // defaults to false
  introspection: { enabled: true }, // defaults to false
  revocation: { enabled: true }, // defaults to false
  pushedAuthorizationRequests: { enabled: true },
},

When the authentication begins, the well known endpoint is used to get the PAR endpoint. The property pushed_authorization_request_endpoint will be set if this is supported.

http://localhost:3000/.well-known/openid-configuration

{
  "pushed_authorization_request_endpoint": "http://localhost:3000/request",
  "authorization_endpoint": "http://localhost:3000/auth",
  "token_endpoint": "http://localhost:3000/token",
  "issuer": "http://localhost:3000",
  "jwks_uri": "http://localhost:3000/jwks",
  "userinfo_endpoint": "http://localhost:3000/me",
  "introspection_endpoint": "http://localhost:3000/token/introspection",
  // ... more
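
As a rough illustration of that discovery step, the sketch below fetches the well-known document and checks whether the provider advertises a PAR endpoint. It assumes the same local issuer URL as the example above; the helper name is made up for illustration and is not part of angular-auth-oidc-client, which performs this lookup internally.

// Minimal sketch: read the discovery document and look for PAR support.
interface DiscoveryDocument {
  issuer: string;
  authorization_endpoint: string;
  token_endpoint: string;
  pushed_authorization_request_endpoint?: string; // only present when PAR is supported
}

async function getParEndpoint(stsServer: string): Promise<string | null> {
  const res = await fetch(`${stsServer}/.well-known/openid-configuration`);
  const doc = (await res.json()) as DiscoveryDocument;
  return doc.pushed_authorization_request_endpoint ?? null;
}

// Hypothetical usage against the local node-oidc-provider instance.
getParEndpoint('http://localhost:3000').then((parEndpoint) =>
  console.log(parEndpoint) // "http://localhost:3000/request" when PAR is enabled
);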

The PAR request is sent using the same parameters as OIDC code flow, but in the body of the request. As this is a public client, the client does not authenticate itself. This was also configured in the IDP.

client_id=client-par-required
&redirect_uri=https://localhost:4207
&response_type=code
&scope=openid profile offline_access
&nonce=73a2f7 + ...
&code_challenge=aLo8v3vvenGVmXwecG3-rhuYATGTrKBKnMmXHayXpHI
&code_challenge_method=S256
&prompt=consent
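
For clarity, here is a small TypeScript sketch of roughly what pushing that body to the PAR endpoint looks like if done by hand (the client library performs this step for you). The parameter values are taken from the example above; the nonce and code challenge shown are placeholders.

// Minimal sketch, assuming the PAR endpoint discovered above ("http://localhost:3000/request").
async function pushAuthorizationRequest(parEndpoint: string): Promise<{ request_uri: string; expires_in: number }> {
  const body = new URLSearchParams({
    client_id: 'client-par-required',
    redirect_uri: 'https://localhost:4207',
    response_type: 'code',
    scope: 'openid profile offline_access',
    nonce: '73a2f7',                                   // placeholder nonce
    code_challenge: 'aLo8v3vvenGVmXwecG3-rhuYATGTrKBKnMmXHayXpHI',
    code_challenge_method: 'S256',
    prompt: 'consent',
  });

  const res = await fetch(parEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body,
  });
  return res.json(); // { expires_in, request_uri } as shown in the response below
}

The returned request_uri is then handed to the authorization endpoint, as the next step shows.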

If everything is configured correctly, the PAR response will contain the URL of the server session for this auth request.

{ expires_in: 300, request_uri: "urn:ietf:params:oauth:request_uri:oeSbJ-jn1QvsTW9EUsAasmypWj7-PQEp7RjxogiCWUo" }

The client then redirects to the authorization endpoint and the flow continues like the existing standard.

http://localhost:3000/auth? request_uri=urn%3Aietf%3Aparams%3Aoauth%3Arequest_uri%3AoeSbJ-jn1QvsTW9EUsAasmypWj7-PQEp7RjxogiCWUo &client_id=client-par-required

That’s all the configuration required. The OAuth Pushed Authorisation Requests specification is still in draft and so might change. Hopefully this will get rolled out and more identity providers will support this specification.

The next steps would be to use OAuth RAR in the PAR request.

Links:

https://github.com/panva/node-oidc-provider

https://github.com/damienbod/angular-auth-oidc-client

https://tools.ietf.org/html/draft-ietf-oauth-par-06

https://www.connect2id.com/products/server/docs/api/par

View at Medium.com

Tuesday, 23. February 2021

Doc Searls Weblog

Welcome to the 21st Century

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, […]

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, in 1974.

It has likewise been suggested that the 20th century actually began with the assassination of Archduke Ferdinand and the start of WWI, in 1914. While that and my other claims might be arguable, you might at least agree that there’s no need for historic shifts to align with two or more zeros on a calendar—and that in most cases they don’t.

So I'm here to suggest that the 21st century began in 2020 with the Covid-19 pandemic and the fall of Donald Trump. (And I mean that literally. Social media platforms were the man's stage, and the whole of them dropped him, as if through a trap door, on the occasion of the storming of the U.S. Capitol by his supporters on January 6, 2021. Whether you liked that or not is beside the facticity of it.)

Things are not the same now. For example, over the coming years, we may never hug, shake hands, or comfortably sit next to strangers again.

But I’m bringing this up for another reason: I think the future we wrote about in The Cluetrain Manifesto, in World of Ends, in The Intention Economy, and in other optimistic expressions during the first two decades of the 21st Century may finally be ready to arrive.

At least that’s the feeling I get when I listen to an interview I did with Christian Einfeldt (@einfeldt) at a San Diego tech conference in April, 2004—and that I just discovered recently in the Internet Archive. The interview was for a film to be called “Digital Tipping Point.” Here are its eleven parts, all just a few minutes long:

01 https://archive.org/details/e-dv038_doc_…
02 https://archive.org/details/e-dv039_doc_…
03 https://archive.org/details/e-dv038_doc_…
04 https://archive.org/details/e-dv038_doc_…
05 https://archive.org/details/e-dv038_doc_…
06 https://archive.org/details/e-dv038_doc_…
07 https://archive.org/details/e-dv038_doc_…
08 https://archive.org/details/e-dv038_doc_…
09 https://archive.org/details/e-dv038_doc_…
10 https://archive.org/details/e-dv039_doc_…
11 https://archive.org/details/e-dv039_doc_…

The title is a riff on Malcolm Gladwell‘s book The Tipping Point, which came out in 2000, same year as The Cluetrain Manifesto. The tipping point I sensed four years later was, I now believe, a foreshadow of now, and only suggested by the successes of the open source movement and independent personal publishing in the form of blogs, both of which I was high on at the time.

What followed in the decade after the interview were the rise of social networks, of smart mobile phones and of what we now call Big Tech. While I don’t expect those to end in 2021, I do expect that we will finally see  the rise of personal agency and of constructive social movements, which I felt swelling in 2004.

Of course, I could be wrong about that. But I am sure that we are now experiencing the millennial shift we expected when civilization’s odometer rolled past 2000.


Ally Medina - Blockchain Advocacy

Letter to Attorney General Becerra Re: FinCen Proposed Rule Privacy concerns

February 22, 2021 The Honorable Xavier Becerra California State Capitol SENT VIA EMAIL Dear Attorney General Becerra, On behalf of the Blockchain Advocacy Coalition, an organization of blockchain and virtual currency businesses in California, I write to bring to your attention a pending federal regulation that would preempt and refute many of the important privacy pro

February 22, 2021

The Honorable Xavier Becerra

California State Capitol

SENT VIA EMAIL

Dear Attorney General Becerra,

On behalf of the Blockchain Advocacy Coalition, an organization of blockchain and virtual currency businesses in California, I write to bring to your attention a pending federal regulation that would preempt and refute many of the important privacy protections your office has led the nation on. On December 18, 2020, the US Treasury, led by Steve Mnuchin, released a concerning proposed rule that would put into place first-of-its-kind reporting requirements for virtual currencies and digital assets. The agency initially proposed a 15-day comment period over the holidays due to unsubstantiated ‘national security concerns’. After widespread pushback from private citizens, virtual currency companies and members of Congress, the Treasury Department provided another 15 days for reporting requirements and an additional 45 for recordkeeping and counterparty reporting. Fortunately the Biden administration, faced with an avalanche of such poorly thought out rules, gave a 60-day pause and extension on the rulemaking, and now the industry is facing a March 1st deadline to comment on a rule that would significantly stifle innovation, limit access to these new products and massively extend the reach of government surveillance of financial transactions far beyond the Bank Secrecy Act (BSA).

If it were to become policy, this rule would preempt California’s consumer privacy laws, significantly weakening the data privacy protections around financial information voters deemed important when approving the California Privacy Rights Act in November of 2020. While many parties have opined on the slapdash process and lack of clarity in the proposed rule, we do not believe that the blatant and far-reaching consumer privacy implications have been brought to attention. Your office has led the charge implementing and enforcing the nation’s first and strongest consumer privacy framework, particularly for sensitive financial information. Because of this, we wanted to raise the following concerns with the proposed FinCEN rulemaking and ask for your action. The proposed rule complements existing BSA requirements applicable to banks and MSBs (money services businesses) by proposing to add reporting requirements for virtual currency transactions exceeding $10,000 in value. Pursuant to the proposed rule, banks and MSBs will have 15 days from the date on which a reportable transaction occurs to file a report with FinCEN. Further, this proposed rule would require banks and MSBs to keep records of a customer’s virtual currency transactions and counterparties, including verifying the identity of their customers, if a counterparty uses an unhosted or otherwise covered wallet and the transaction is greater than $3,000.

Our concerns with the consumer privacy implications of this proposed rule are twofold:

First, the proposed rule’s requirement that MSBs collect identifying information associated with wallet addresses will create reporting that extends well beyond the intent of the rule or the transaction. According to the EFF, “For some cryptocurrencies like Bitcoin, transaction data — including users’ Bitcoin addresses — is permanently recorded on a public blockchain. That means that if you know the name of the user associated with a particular Bitcoin address, you can glean information about all of their Bitcoin transactions that use that address.” California consumers do not have the expectation that a future reporting requirement will link to their entire financial transaction history from that wallet.

Second, this rule creates requirements for disclosure of counterparty information beyond what the BSA requires banks and other financial institutions to collect. It wouldn’t only require these businesses to collect information about their own customers, but also the information of anyone who transacts with those customers using their own cryptocurrency wallets. Specifically:

The name and physical address of each counterparty to the transaction of the financial institution’s customer;
Other counterparty information the Secretary may prescribe as mandatory on the reporting form for transactions subject to reporting pursuant to § 1010.316(b);
Any other information that uniquely identifies the transaction, the accounts, and, to the extent reasonably available, the parties involved;

Unlike KYC (know your customer) requirements which arise from a direct customer relationship, KYCC (know your customer’s counterparty) requirements unreasonably obligate non-customers to provide personally identifying information to a VASP/MSB (virtual asset service provider/money services business) they do not know or do business with, and whose security and privacy practices they have not evaluated, simply because they happen to transact with one of its customers.

In its haste, the Treasury did not adequately consider the impact of these rules on consumer privacy for those who choose to use virtual currency; the rules would create large-scale government surveillance of small personal transactions. We call upon your leadership and expertise in this space to once again lead the charge for consumer protections and submit a comment letter opposing these portions of the proposed rule. Thank you for your consideration, and please do not hesitate to reach out with any questions.

Kind Regards,

Ally Medina

Director, Blockchain Advocacy Coalition


FACILELOGIN

The Next TCP/IP Moment in Identity

Loved reading the book Ask Your Developer: How to Harness the Power of Software Developers and Win in the 21st Century by Jeff Lawson. Jeff says in the book that every company is on a journey to becoming a software company and everyone is starting to see the world through the lens of software. He defines the term, software person. A software person is not necessarily a developer, it’s anybod

Loved reading the book Ask Your Developer: How to Harness the Power of Software Developers and Win in the 21st Century by Jeff Lawson.

Jeff says in the book that every company is on a journey to becoming a software company and everyone is starting to see the world through the lens of software. He defines the term, software person. A software person is not necessarily a developer, it’s anybody who, when faced with a problem, asks the question, how can software solve this problem?

Build vs. Buy (or vs. Die)

In the book, Jeff takes the popular debate, build vs. buy, to another dimension: build vs. die. As every company is becoming a software company, the competitive advantage they build is in the software they build. When software becomes the interface where the services you offer meet your customers, unless you build it the way you want, you die!

Building what you want gives you the freedom to experiment (or innovate). The more you experiment, and the greater your ability to experiment, the better you understand your customers. Hence, you grow your business.

Build does not necessarily mean building everything from scratch. You don't build anything that already exists, given that it provides what you need. You only build the things that are core to your business, which help build your competitive advantage over everyone else. The rest, the building blocks that help you build what you want, are part of the digital supply chain.

The Digital Supply Chain

Uber, for example, uses 4000+ microservices internally. However, not all of them are developed by Uber itself. Uber uses Google Maps API to pull out location data, the Twilio API to facilitate communication between passengers and drivers and many other APIs. All these APIs are coming from the digital supply chain Uber picks to build its product. Then again, these building blocks in Uber’s digital supply chain are also available to Lyft, and other Uber competitors around the world. What brings Uber the competitive advantage is in what they build!

The software you build can be your product, and at the same time it can be a building block for another product. Google Maps is Google's product, however the Google Maps API is a building block for Uber. Alexa is a product of Amazon, however the Alexa API is a building block for Nissan.

Picking the right digital supply chain is equally important as what you pick to build. Think what it would take if Uber had to build something equivalent to Google Maps from scratch. From 2016 to 2018, Uber paid 58M USD to Google for using Google Maps. But then again, that's peanuts when you compare it with Uber's revenue in 2019, which was 14.15 billion USD.

Having the right digital supply chain helps you to optimize your development team to build only what you need and no more. Instagram, for example, was only a 13-person team when Facebook acquired it for $1B in 2012; and the WhatsApp team was only 50 people when Facebook acquired it for $19B in 2014.

Build Your Own Identity Stack?

Every service you develop, every API you design, every device you use and every person you interact with will have a managed identity, and in today's hyperconnected world, the Identity integrations with these business applications and systems are going to be critical.

Going back to the build vs. die debate: do you still have to build the Identity stack to gain a competitive advantage in your business? If you are in the Identity business, of course yes; for all the others, no. The Identity stack you need to build your product is a building block in the digital supply chain.

You never worried about building a TCP/IP stack yourself, so don't worry about building an Identity stack yourself. However, over time we have spoken to over a thousand companies (hundreds of them WSO2 customers), and in most cases they bring unique identity requirements. The uniqueness comes from requirements that are specific to the industry they are in and to the complexity of the business problem they want to solve.

Identity is core to any business, and how you manage identity will also help you in building competitive advantage. At WSO2, we have worked with 90% of the Identity Server customers to solve complex identity problems. Identity Server is open source, and if the business problem is straightforward, they don’t even talk to us, they simply use the product as it is. However, when we work with complex Identity requirements, we have extended the product to solve specific business problems.

Building these extensions, specific to unique business requirements helped companies to differentiate themselves from others. Then again, they didn’t want to build everything from scratch — rather they started with what’s common (and available to everyone) and started innovating on that. That drastically reduced the time-to-market, and also gave the freedom to innovate.

I don't intend to contradict what I mentioned before, that the Identity stack is part of the digital supply chain you pick. However, the Identity stack you pick for the digital supply chain should be flexible enough to extend with minimal effort to meet requirements specific to your business.

The TCP/IP Moment in Identity

In the '70s, having support for TCP/IP in a product was considered to be a competitive advantage. Today, it's a given, and nobody worries about TCP/IP support; it's everywhere.

Ian Glazer from Salesforce mentioned in his keynote at the European Identity Conference 2016 that it's the TCP/IP moment in Identity now. He in fact talked about the open standards (SAML, OpenID Connect, OAuth, SCIM, XACML and so on) in the Identity domain, and how they are going to be part of every product, so no Identity vendor is going to gain a competitive advantage just by supporting the open standards. RFPs looking for Identity products will not even bother asking about support for these open standards.

The Next TCP/IP Moment in Identity

Developers do not worry about building a TCP/IP stack, or even think about TCP/IP while building software. We believe the Identity integrations with business applications and systems need to be developer-first (or developer-focused), with the right level of abstractions and tools. And doing that right would be the next TCP/IP moment in Identity, one that will free developers from worrying about the complexities of Identity integrations.

The Developer-first IAM

The single Identity administrator role has started diminishing, and the role of the developer is becoming more prominent in Identity integrations. These developers need a better abstraction over core identity concepts; and the developer-first IAM is the way to realize the next TCP/IP moment in Identity.

In the consumer Identity space, when we talk to enterprises, they bring in their unique requirements. In many cases they look for a product that can be used to build an agile, event-driven consumer Identity (CIAM) platform that can flex to meet frequently changing business requirements.

A developer-first IAM product builds an abstraction over the core Identity concepts in the form of APIs and SDKs, provides tools for troubleshooting, has the ability to integrate with the organization's build pipeline, carries the right level of developer experience and has the ability to extend the product's core capabilities to fit an organization's complex IAM requirements.

As every company is becoming a software company, and starting to build their competitive advantage on the software they build, the developer-first IAM will free the developers from inherent complexities in doing Identity integrations. That’s the next TCP/IP moment in Identity!

The Next TCP/IP Moment in Identity was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 21. February 2021

reb00ted

Liberté, égalité, reparabilité.

Slowly, but steadily, some of tech’s worst practices are being chipped away. Why France’s New Tech ‘Repairability Index’ Is a Big Deal. Wired.



Doc Searls Weblog

Radio 2.x

On Quora, somebody asks, How can the radio industry stay relevant in the age of streaming music and podcasts? Here’s my answer: It already is, if you consider streaming music and podcasting evolutionary forms of radio. But if you limit the meaning of radio to over-the-air broadcasting, the relevance will be a subordinate one to what’s happening […]

On Quora, somebody asks, How can the radio industry stay relevant in the age of streaming music and podcasts? Here’s my answer:

It already is, if you consider streaming music and podcasting evolutionary forms of radio.

But if you limit the meaning of radio to over-the-air broadcasting, the relevance will be a subordinate one to what’s happening over streaming, cellular and Internet connections, podcasting, satellite radio, digital audio broadcast (DAB) and various forms of Internet-shared video (starting with, but not limited to, YouTube).

The main way over-the-air radio can remain relevant in the long run is by finding ways for live streams to hand off to radio signals, and vice versa. Very little effort is going into this, however, so I expect over-the-air to drift increasingly to the sidelines, as a legacy technology. Toward this inevitable end, it should help to know that AM is mostly gone in Europe (where it is called MW, for MediumWave). This follows in the tracks of LW (longwave) and to some degree SW (shortwave) as well. Stations on those bands persist, and they do have their uses (especially where other forms of radio and Internet connections are absent); but in terms of popularity they are also-rans.

BUT, in the meantime, so long as cars have AM and FM radios in them, the bands remain relevant and popular. But again, it’s a matter of time before nearly all forms of music, talk and other forms of entertainment and sharing move from one-way broadcast to every-way sharing, based on digital technologies. (Latest example: Clubhouse.)

Friday, 19. February 2021

Damien Bod

Require user password verification with ASP.NET Core Identity to access Razor Page

This post shows how an ASP.NET Core application which uses ASP.NET Core Identity to authenticate and authorize users of the application can be used to require user password verification to view specific Razor pages in the application. If the user opens one of the Razor pages which require a password verification to open the page, […]

This post shows how an ASP.NET Core application which uses ASP.NET Core Identity to authenticate and authorize users of the application can be used to require user password verification to view specific Razor pages in the application. If the user opens one of the Razor pages which require a password verification to open the page, the user will be redirected to a separate Razor page to re-enter a password. If the verification succeeds, the original page can be opened.

Code https://github.com/damienbod/AspNetCoreHybridFlowWithApi

Setup the required password verification page

The RequirePasswordVerificationModel class implements the Razor page which requires that a user has verified a password for the identity user within the last ten minutes. The Razor page inherits from the PasswordVerificationBase Razor page which implements the verification check. The constructor of the class needs to pass the parent dependencies. If the user has a valid verification, the page will be displayed; otherwise the application redirects to the password verification route.

public class RequirePasswordVerificationModel : PasswordVerificationBase
{
    public RequirePasswordVerificationModel(
        UserManager<ApplicationUser> userManager) : base(userManager) { }

    public async Task<IActionResult> OnGetAsync()
    {
        var passwordVerificationOk = await ValidatePasswordVerification();
        if (!passwordVerificationOk)
        {
            return RedirectToPage("/PasswordVerification",
                new { ReturnUrl = "/DoUserChecks/RequirePasswordVerification" });
        }

        return Page();
    }
}

The PasswordVerificationBase Razor page implements the PageModel. The ValidatePasswordVerification method checks if the user is already authenticated. It then checks if the user has not signed in after the last successful verification. The UserManager is used to fetch the data from the database. The last verification may be no older than ten minutes.

public class PasswordVerificationBase : PageModel
{
    public static string PasswordCheckedClaimType = "passwordChecked";

    private readonly UserManager<ApplicationUser> _userManager;

    public PasswordVerificationBase(UserManager<ApplicationUser> userManager)
    {
        _userManager = userManager;
    }

    public async Task<bool> ValidatePasswordVerification()
    {
        if (User.Identity.IsAuthenticated)
        {
            if (User.HasClaim(c => c.Type == PasswordCheckedClaimType))
            {
                var user = await _userManager.FindByEmailAsync(User.Identity.Name);

                var lastLogin = DateTime.FromFileTimeUtc(
                    Convert.ToInt64(user.LastLogin));

                var lastPasswordVerificationClaim =
                    User.FindFirst(PasswordCheckedClaimType);
                var lastPasswordVerification = DateTime.FromFileTimeUtc(
                    Convert.ToInt64(lastPasswordVerificationClaim.Value));

                if (lastLogin > lastPasswordVerification)
                {
                    return false;
                }
                else if (DateTime.UtcNow.AddMinutes(-10.0) > lastPasswordVerification)
                {
                    return false;
                }

                return true;
            }
        }

        return false;
    }
}

If the user needs to re-enter credentials, the PasswordVerificationModel Razor page is used for this. This class was built using the identity scaffolded login Razor page from ASP.NET Core Identity. The old password verification claims are removed using the UserManager service. A new password verification claim is created if the user successfully re-entered the password, and the sign-in is refreshed with the new ClaimsIdentity instance.

public class PasswordVerificationModel : PageModel
{
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly ILogger<PasswordVerificationModel> _logger;

    public PasswordVerificationModel(SignInManager<ApplicationUser> signInManager,
        ILogger<PasswordVerificationModel> logger,
        UserManager<ApplicationUser> userManager)
    {
        _userManager = userManager;
        _signInManager = signInManager;
        _logger = logger;
    }

    [BindProperty]
    public CheckModel Input { get; set; }

    public IList<AuthenticationScheme> ExternalLogins { get; set; }

    public string ReturnUrl { get; set; }

    [TempData]
    public string ErrorMessage { get; set; }

    public class CheckModel
    {
        [Required]
        [DataType(DataType.Password)]
        public string Password { get; set; }
    }

    public async Task<IActionResult> OnGetAsync(string returnUrl = null)
    {
        if (!string.IsNullOrEmpty(ErrorMessage))
        {
            ModelState.AddModelError(string.Empty, ErrorMessage);
        }

        var user = await _userManager.GetUserAsync(User);
        if (user == null)
        {
            return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
        }

        var hasPassword = await _userManager.HasPasswordAsync(user);
        if (!hasPassword)
        {
            return NotFound($"User has no password'{_userManager.GetUserId(User)}'.");
        }

        returnUrl ??= Url.Content("~/");
        ReturnUrl = returnUrl;

        return Page();
    }

    public async Task<IActionResult> OnPostAsync(string returnUrl = null)
    {
        returnUrl ??= Url.Content("~/");

        var user = await _userManager.GetUserAsync(User);
        if (user == null)
        {
            return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
        }

        if (ModelState.IsValid)
        {
            // This doesn't count login failures towards account lockout
            // To enable password failures to trigger account lockout, set lockoutOnFailure: true
            var result = await _signInManager.PasswordSignInAsync(user.Email,
                Input.Password, false, lockoutOnFailure: false);

            if (result.Succeeded)
            {
                _logger.LogInformation("User password re-entered");

                await RemovePasswordCheck(user);

                var claim = new Claim(PasswordVerificationBase.PasswordCheckedClaimType,
                    DateTime.UtcNow.ToFileTimeUtc().ToString());
                await _userManager.AddClaimAsync(user, claim);
                await _signInManager.RefreshSignInAsync(user);

                return LocalRedirect(returnUrl);
            }
            if (result.IsLockedOut)
            {
                _logger.LogWarning("User account locked out.");
                return RedirectToPage("./Lockout");
            }
            else
            {
                ModelState.AddModelError(string.Empty, "Invalid login attempt.");
                return Page();
            }
        }

        // If we got this far, something failed, redisplay form
        return Page();
    }

    private async Task RemovePasswordCheck(ApplicationUser user)
    {
        if (User.HasClaim(c => c.Type == PasswordVerificationBase.PasswordCheckedClaimType))
        {
            var claims = User.FindAll(PasswordVerificationBase.PasswordCheckedClaimType);
            foreach (Claim c in claims)
            {
                await _userManager.RemoveClaimAsync(user, c);
            }
        }
    }
}

The PasswordVerificationModel Razor page html template displays the user input form with the password field.

@page
@model PasswordVerificationModel
@{
    ViewData["Title"] = "Password Verification";
}

<h1>@ViewData["Title"]</h1>
<div class="row">
    <div class="col-md-4">
        <section>
            <form id="account" method="post">
                <h4>Verify account using your password</h4>
                <hr />
                <div asp-validation-summary="All" class="text-danger"></div>
                <div class="form-group">
                    <label asp-for="Input.Password"></label>
                    <input asp-for="Input.Password" class="form-control" />
                    <span asp-validation-for="Input.Password" class="text-danger"></span>
                </div>
                <div class="form-group">
                    <button type="submit" class="btn btn-primary">
                        Re-enter password
                    </button>
                </div>
            </form>
        </section>
    </div>
</div>

@section Scripts {
    <partial name="_ValidationScriptsPartial" />
}

The Login Razor page needs to be updated to store the DateTime.UtcNow value as a file time when the login succeeds. This value is used in the base Razor page to verify the password check. The LastLogin property was added for this.

var result = await _signInManager.PasswordSignInAsync(
    Input.Email, Input.Password, Input.RememberMe, lockoutOnFailure: false);

if (result.Succeeded)
{
    _logger.LogInformation("User logged in.");

    var user = await _userManager.FindByEmailAsync(Input.Email);
    if (user == null)
    {
        return NotFound("help....");
    }

    user.LastLogin = DateTime.UtcNow.ToFileTimeUtc().ToString();
    var lastLoginResult = await _userManager.UpdateAsync(user);

    return LocalRedirect(returnUrl);
}

The LastLogin property was added to the ApplicationUser which implements the IdentityUser. This value is persisted to the Entity Framework Core database.

public class ApplicationUser : IdentityUser
{
    public string LastLogin { get; set; }
}
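Because LastLogin is persisted by Entity Framework Core, a schema migration is needed. The post does not show it, but a rough sketch of what the generated migration might look like follows; the migration name, table name and column type are assumptions, and in practice the file is produced by the EF Core tooling.

using Microsoft.EntityFrameworkCore.Migrations;

// Sketch of the migration that adding the LastLogin property would produce.
// The column lands on the default ASP.NET Core Identity users table.
public partial class AddLastLogin : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "LastLogin",
            table: "AspNetUsers",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(
            name: "LastLogin",
            table: "AspNetUsers");
    }
}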

When the application is started, the user can login and will need to verify a password to access the Razor page implemented to require this feature.

Notes:

This was relatively simple to implement thanks to the helpers in ASP.NET Core Identity. It requires a user password login to work, which is not always available or used; a FIDO2 verification or a simple authenticator push notification would probably be better. Some applications require a password verification before using a page, a view or a service for extra sensitive processing, and this approach would help there.

Links:

https://www.learnrazorpages.com/razor-pages/

https://docs.microsoft.com/en-us/aspnet/core/razor-pages

https://docs.microsoft.com/en-us/aspnet/core/security

Tuesday, 16. February 2021

SSI Ambassador

Self-sovereign identity: Legal compliance and the involvement of governments

SSI — Legal compliance and the involvement of governments This article describes how governments of sovereign states might be involved in building identity ecosystems based on self-sovereign principles and how regulatory conformity of such ecosystems can be achieved. When it comes to identity management the involvement of the government can be a tricky topic. It needs to be involved to enable acce
SSI — Legal compliance and the involvement of governments

This article describes how governments of sovereign states might be involved in building identity ecosystems based on self-sovereign principles and how regulatory conformity of such ecosystems can be achieved.

When it comes to identity management, the involvement of the government can be a tricky topic. It needs to be involved to enable access to public services, adapt legislation and guarantee equal access for its citizens. However, it should not be able to control or monitor all aspects and activities of its citizens. Self-sovereign identity (SSI) might for some imply that a citizen is suddenly able to issue his own ID-card, which isn't the case. Governments are still the primary source of foundational identities.

The government as issuer of foundational identities

While individuals gain more autonomy with SSI, the issuance of national IDs is still the responsibility of the public administration. The Pan-Canadian Trust Framework (PCTF) differentiates between foundational and contextual identities.

“A foundational identity is an identity that has been established or changed as a result of a foundational event (e.g., birth, person legal name change, immigration, legal residency, naturalized citizenship, death, organization legal name registration, organization legal name change, or bankruptcy).” PCTF [1]

Hence, the government continues to be the issuer of foundational identities and still holds the authority to revoke these credentials when necessary. However, SSI also enables the usage of other identity providers, which are context dependent — leading to a contextual identity as further explained within the PCTF.

“A Contextual Identity is an identity that is used for a specific purpose within a specific identity context (e.g., banking, business permits, health services, drivers licensing, or social media). Depending on the identity context, a contextual identity may be tied to a foundational identity (e.g., a drivers licence) or may not be tied to a foundational identity (e.g., a social media profile).“ [1]

This means a customer of a bank can use his verified bank ID to identify himself at a credit bureau. Since the bank ID is based on a foundational identity, the contextual identity provided by the bank can be sufficient in this particular use-case, given that the regulatory environment allows such usage. However, a contextual identity can, but doesn't have to, be based on a foundational identity.

The European Commission supports the continued usage of contextual identities online and only demands the usage of foundational identities when required by law as stated in the eIDAS public consultation [2] regarding the option to extend the regulation for the public sector:

“A European identity solution enabling trusted identification of citizens and companies in their digital interactions to access public or private online services (e.g. e- commerce), should be entirely voluntary for users to adhere to and fully protect data and privacy. Anonymity of the internet should be ensured at all times by allowing solutions for anonymous authentication anonymously where user identification is not required for the provision of the service.“ [2]
Regulatory compliance

When dealing with personally identifiable information (PII), all involved stakeholders need to adhere to a certain set of laws, which dictate the usage of such data. These laws highly depend on the citizenship of the data subject (the individual the PII is about), among other important factors. Although the following paragraphs are specifically devoted to the laws of the European Union, the conclusions might also be applied to similar laws such as the California Consumer Privacy Act (CCPA) or the Indian Personal Data Protection Bill (DPA).

Within the European Union, there are two laws, which have a significant influence on identity frameworks. The General Data Protection Regulation, better known as GDPR, determines how personal data from EU citizens can be collected and used. The other important law is the Electronic IDentification, Authentication and trust Services (eIDAS) provision specified in N°910/2014 [3]. It constitutes the main electronic identification trust framework in the EU and is an elemental building block of the digital single market.

GDPR — General Data Protection Regulation

(Image caption: Overview of important roles regarding PII data management according to international standards.)

The EBSI GDPR assessment [4] notes that

“According to this Regulation, there are two types of actors whose key role in data processing and whose relationship to the data within the data processing environment leads the European legislator to attribute them a set of obligations and responsibilities. Thus, these liable actors are subject to data protection rules.“ [4]

These are data controllers, which are defined in article 4(7) GDPR [5] as

“the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law” [5]

which

“have to take all necessary measures so that data subjects are sufficiently informed and have the ability to exercise their data protection rights.“ [4]

The other actor is the data processor, who acts as delegate of the data controller and is a separate legal entity according to the opinion 1/2010 of the Data Protection Working Party [6]. With multiple nodes running a decentralized network, every node acts as a data processor or data controller, depending on whether the node operator is processing the data as a delegate or not. The EBSI GDPR report further notes:

“in case of joint controllership, data controllers can contractually assign partial responsibility based on distinct stages of data processing.“ While an agreement between these data processors can regulate the responsibilities, “data subjects will have ot (sic!) be able to exercise their rights against every joint controller“ and “nodes that add and process the on-chain ledger data in order to maintain the consensus will be individually qualified as joint data controllers and this, regardless of a contractual relationship stating the contrary.“ [4]

For public blockchains with permissionless write access such as Bitcoin or Ethereum, this means that every miner participating in the proof-of-work consensus is regarded as a data processor, given there is an unintentional personal data leakage or correlation with a URL (Uniform Resource Locator) of a service endpoint within DID Documents, something the DID specification of the W3C points out in section 10.1 as critical to keeping PII private. [7] This threat, in addition to the numerous other correlation risks mentioned in sections 10.2 and 10.3 of said specification, makes the current implementation of SSI based on permissionless blockchains, which allow natural persons to write an anywise DID on the ledger, a daunting privacy challenge.

Another important aspect is the question whether credentials (or any other form of PII) are stored as a hash on the verifiable data registry. A hash is a data digest produced by what is considered a one-way function, which in theory leads to an anonymization of the original information. The debate around the question whether a hash constitutes PII is likely to continue, since national data protection agencies are struggling to clearly define whether hashing can be considered an anonymization or a pseudonymization.

“According to the Spanish DPA (Data protection agency), hashing can at times be considered as anonymization or pseudonymization depending on a variety of factors varying from the entities involved to the type of the data at hand.“ [4]

Even if the hash constitutes a one-way obfuscation technique which anonymizes PII, it I) requires a transaction on a public ledger and II) puts data controllers in a higher-risk position with the obligation to avoid correlation of individuals. Risk-minimizing obligations for data controllers are easier to implement when there is no hash of a verified credential or verified presentation stored on a public ledger.
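To make the correlation concern concrete, here is a minimal illustration (not taken from the EBSI report; assumes .NET 6, and the credential payload and salt handling are purely illustrative): hashing the same credential bytes always produces the same digest, so anyone who can recompute or recognize that digest on a public ledger can link the ledger entry to the same underlying data.

using System;
using System.Security.Cryptography;
using System.Text;

class HashCorrelationDemo
{
    // Compute a SHA-256 digest of a credential payload and return it as hex.
    static string Sha256Hex(string payload)
    {
        using var sha256 = SHA256.Create();
        return Convert.ToHexString(sha256.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }

    static void Main()
    {
        // Illustrative credential JSON. Anyone holding the same bytes can
        // recompute the digest and correlate it with an entry on a ledger.
        var credential = "{\"name\":\"Alice\",\"dateOfBirth\":\"1990-01-01\"}";

        Console.WriteLine(Sha256Hex(credential)); // deterministic
        Console.WriteLine(Sha256Hex(credential)); // same digest every time

        // A random salt breaks this direct linkability, but the salt itself
        // then has to be stored and shared for any later verification.
        var salt = Convert.ToHexString(RandomNumberGenerator.GetBytes(16));
        Console.WriteLine(Sha256Hex(salt + credential));
    }
}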

When it comes to the wallet itself the EBSI GDPR report notes that

“there is growing consensus about the possibility of data subjects to being simultaneously considered as data controllers for the data that refer to themselves“. The report provides the recommendation that “the privacy preserving technical and organisational measures of the wallet and the personal data transmissions should ensure that the necessary safeguards are in place in order to not limit the empowerment of the data subject through the DLT chosen model.“ [4]

The report concludes that data within the wallet application is considered personal data and therefore is subject to the data protection regulation.

While there is a general assumption that e.g. Hyperledger Indy implementations are GDPR compliant [8], ultimately courts have to decide if that claim holds up, based on a case-by-case evaluation of the particular implementation. Nevertheless, avoiding the exposure of PII on the verifiable data registry, by I) not allowing natural persons to write public DIDs and II) not storing PII in hashed form on the verifiable data registry, facilitates meeting the GDPR compliance obligations.

eIDAS:

The eIDAS regulation [3] is concerned with two distinct topics. One part covers trust services for private businesses such as electronic signatures, seals, time stamps etc. The other part regulates the mutual recognition among member states of national implementations of electronic identification (eID) for the public sector. It is a technology-neutral approach, which has a strong influence on the international regulatory space. The main goal of mutual recognition of eID is to enable EU citizens to access cross-border public services with their own national eID means. The implementations of eID schemes vary from member state to member state, and not all member states have notified an eID scheme, as illustrated by the overview of pre-notified and notified eID schemes [9] under eIDAS.

There are three levels of assurance specified for eIDs under eIDAS, referring to the degree of confidence in the claimed identity of a person, with detailed criteria allowing member states to map their eID means against a benchmark (low, substantial and high). Current SSI implementations have the objective of being recognized at the level of assurance 'substantial'.

It's currently possible to be eIDAS compliant with SSI by leveraging one of the five scenarios described in the SSI eIDAS legal report by Dr. Ignacio Alamillo Domingo [10]. Especially interesting is the SSI eIDAS bridge, which adds legal value to verified credentials with the use of electronic certificates and electronic seals. However, it's also possible to derive an eIDAS-linked identity from a national eID notified under eIDAS, by issuing a verifiable credential with a qualified certificate according to the technical specification. [12]

Nevertheless, there are also hindrances in the process of creating a qualified certificate with the derived national identity, because of the way the regulation defines a qualified signature. Another issue is that national eID schemes require the keys to be stored in a secure element; however, current SSI wallets only offer software keys and do not leverage the security benefits of a hardware element. Furthermore, the eIDAS regulation doesn't regulate the case of a private entity issuing an eID attribute to a natural person for usage in other private interactions.

Furthermore, the authentication process to achieve the recognition of notified eIDAS schemes by other member states requires a national node, which provides the authentication service. While aimed at being technology neutral, the obligation to provide this authentication service as a delegated authentication component has several drawbacks and also hinders the potential adoption of SSI. The EU has already identified the need to re-evaluate the policies set by eIDAS.

“Fundamental changes in the overall societal context suggest a revision of the eIDAS Regulation. These include a dramatic increase in the use of novel technologies, such as distributed-ledger based solutions, the Internet of Thing, Artificial Intelligence and biometrics, changes in the market structure where few players with significant market power increasingly act as digital identity ‘gatekeepers’, changes in user behavior with increasing demand for instant, convenient and secure identification and the evolution of EU Data Protection legislation” [2]

The consultation continues with its target:

“The objective of this initiative is, first of all, to provide a future proof regulatory framework to support an EU-wide, simple, trusted and secure system to manage identities in the digital space, covering identification, authentication and the provision of attributes, credentials and attestations. Secondly, the initiative aims at creating a universal pan-European single digital ID. These objectives could be achieved through an overhaul of the eIDAS system, an extension of eIDAS to the private sector, the introduction of a European Digital Identity (EUid) building on the eIDAS system or combination of both.” [2]

In a private interview for an academic study, Dr. Ignacio Alamillo Domingo suggested embodying new technologies such as SSI in the revised regulation, e.g. by not mandating the provision of an authentication facility and by creating new trust services such as electronic identification. In another private interview, Luca Boldrin suggested keeping national identity systems as they are, but using a derivation of the national identity for cross-border contexts for public and private businesses, in parallel to the current node implementation, to enable a European identity.

Dr. Ignacio Alamillo Domingo argues that having derived national eIDs and eID trust services has the benefit of increased privacy by using peer-to-peer authentication instead of a delegated authentication model (the national eIDAS node). This also leads to fewer liability issues, by shifting the authentication part to private providers, as well as lower costs for governments associated with running authentication infrastructure, because these are provided by a Distributed Public Key Infrastructure (DPKI) instead of national eIDAS nodes. These DPKI systems also have the benefit of being more resilient to attacks compared to a single node, which represents a single point of failure. However, regulating eID as a trust service also means opening up identification for the private market, which might not be in the interest of national governments.

Disclaimer: This article does not represent the official view of any entity, which is mentioned in this article or which is affiliated with the author. It solely represents the opinion of the author.

SSI Ambassador
Adrian Doerk
Own your keys

Sources:

[1] PSP PCTF Working Group. ‘Pan Canadian Trust Framework (PCTF) V1.1’. GitHub, 2. June 2020. Accessed 22. June 2020. https://github.com/canada-ca/PCTF-CCP/blob/master/Version1_1/PSP-PCTF-V1.1-Consultation-Draft.pdf

[2] European Commission, ‘EU Digital ID Scheme for Online Transactions across Europe, Public Consultation, Inception Impact Assessment — Ares(2020)3899583’ Accessed 23. August 2020

[3] European parliament and the council of the European union. REGULATION (EU) No 910/2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (2014). Accessed 20. September 2020. https://ec.europa.eu/futurium/en/system/files/ged/eidas_regulation.pdf

[4] CEF Digital, University of Amsterdam. ‘EBSI GDPR Assessment, Report on Data Protection within the EBSI Version 1.0 Infrastructure.’ CEF Digital, April 2020. Accessed 18. August 2020. https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITALEBSI/Legal+Assessment+Reports

[5] ‘REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL (GDPR)’, 2016. Accessed 5. September 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679

[6] Working Party set up under Article 29 of Directive 95/46/EC. ‘Article 29 Data Protection Working Party, “Opinion 1/2010 on the Concepts of ‘Controller’ and ‘Processor’” (2010)’, 16 February 2010. Accessed 5. September 2020. https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2010/wp169_en.pdf

[7] World Wide Web Consortium (W3C), ‘Decentralized Identifiers (DIDs) v1.0’. Accessed 18. August 2020. https://www.w3.org/TR/did-core/

[8] Sovrin Foundation. ‘GDPR Position Paper: Innovation Meets Compliance’, January 2020. Accessed 5. September 2020. https://sovrin.org/wp-content/uploads/GDPR-Paper_V1.pdf

[9] CEF Digital. ‘Overview of Pre-Notified and Notified EID Schemes under EIDAS’, 2. January 2019. Accessed 6. September 2020. https://ec.europa.eu/cefdigital/wiki/display/EIDCOMMUNITY/Overview+of+pre-notified+and+notified+eID+schemes+under+eIDAS

[10] Dr. Ignacio Alamillo Domingo. ‘SSI EIDAS Legal Report’, April 2020, 150. Accessed 14. September 2020. https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITALEBSI/Legal+Assessment+Reports

[11] EBSI, ESSIF. ‘Technical Specification (15) — EIDAS Bridge for VC-ESealing’. CEF Digital, n.d. Accessed 2. September 2020. https://ec.europa.eu/cefdigital/wiki/cefdigital/wiki/display/CEFDIGITALEBSI/Technical+Specification+%2815%29+-+eIDAS+bridge+for+VC-eSealing


Damien Bod

Adding ASP.NET Core authorization for an Azure Blob Storage and Azure AD users using role assignments

This post shows how authorization can be implemented for Azure Storage Blob containers in an ASP.NET Core web application. The two roles Storage Blob Data Contributor and Storage Blob Data Reader are used to authorize the Azure AD users which use the Blob storage container. Users are assigned the roles using role assignment. This authorization […]

This post shows how authorization can be implemented for Azure Storage Blob containers in an ASP.NET Core web application. The two roles Storage Blob Data Contributor and Storage Blob Data Reader are used to authorize the Azure AD users which use the Blob storage container. Users are assigned the roles using role assignment. This authorization information is required in the ASP.NET Core application so that the users can be authorized to upload files, or just get authorized to download the files. The Azure Management Fluent rest API is used to select this data.

Code: https://github.com/damienbod/AspNetCoreAzureAdAzureStorage

Blogs in this series

Secure Azure AD User File Upload with Azure AD Storage and ASP.NET Core
Adding ASP.NET Core authorization for an Azure Blob Storage and Azure AD users using role assignments
Using Azure AD groups authorization in ASP.NET Core for an Azure Blob Storage

Setup the Azure App registration

To list the role assignments for Azure and Azure Storage in ASP.NET Core, a new Azure App registration was created. The service principal ID of the enterprise application (from this Azure App registration) is used to assign the contributor role on the subscription so that the application can list the role assignments from the scopes within the subscription. A client secret is set up for the application.

Setup the Role assignment for the Azure App registration

In the subscription, the Access control (IAM) blade is selected and a role assignment was added for the Azure Enterprise application which was created together with the Azure App registration for this subscription. If the application registration is only required for the specific storage blob container, fewer rights would be required.

Implement the Azure Management Fluent Service

The Microsoft.Azure.Management.Fluent NuGet package is used to call the Azure REST API and access the role assignments for the Azure Storage Blob container.

The AuthenticateClient method is used to authenticate the Microsoft.Azure.Management.Fluent client using the Azure App registration clientId and the client secret from the App registration. The Authenticate method uses the Azure credentials to access the API.

private void AuthenticateClient()
{
    // client credentials flow with secret
    var clientId = _configuration
        .GetValue<string>("AzureManagementFluent:ClientId");
    var clientSecret = _configuration
        .GetValue<string>("AzureManagementFluent:ClientSecret");
    var tenantId = _configuration
        .GetValue<string>("AzureManagementFluent:TenantId");

    AzureCredentialsFactory azureCredentialsFactory = new AzureCredentialsFactory();
    var credentials = azureCredentialsFactory
        .FromServicePrincipal(clientId, clientSecret, tenantId, AzureEnvironment.AzureGlobalCloud);

    // authenticate to Azure AD
    _authenticatedClient = Microsoft.Azure.Management
        .Fluent.Azure.Configure()
        .Authenticate(credentials);
}

The GetStorageBlobDataContributors method lists all the role assignments for the scope parameter. The scope parameter is the path to the Azure storage or whatever resource you want to check. This would need to be changed for your application. The Id for the role Storage Blob Data Contributor is used to filter only this role assignment in the defined scope.

/// <summary>
/// returns IRoleAssignment for Storage Blob Data Contributor
/// </summary>
/// <param name="scope">Scope of the Azure storage</param>
/// <returns>IEnumerable of the IRoleAssignment</returns>
private IEnumerable<IRoleAssignment> GetStorageBlobDataContributors(string scope)
{
    var roleAssignments = _authenticatedClient
        .RoleAssignments
        .ListByScope(scope);

    // https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
    // Storage Blob Data Contributor == "ba92f5b4-2d11-453d-a403-e96b0029c9fe"
    // Storage Blob Data Reader == "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
    var storageBlobDataContributors = roleAssignments
        .Where(d => d.RoleDefinitionId
            .Contains("ba92f5b4-2d11-453d-a403-e96b0029c9fe"));

    return storageBlobDataContributors;
}

The HasRoleStorageBlobDataContributorForScope method is used to check if the user principal ID of the authenticated user in the ASP.NET Core application has an assigned role for the Azure Storage, which was defined using the scope.

public bool HasRoleStorageBlobDataContributorForScope(
    string userPrincipalId, string scope)
{
    var roleAssignments = GetStorageBlobDataContributors(scope);
    return roleAssignments
        .Count(t => t.PrincipalId == userPrincipalId) > 0;
}

public bool HasRoleStorageBlobDataReaderForScope(
    string userPrincipalId, string scope)
{
    var roleAssignments = GetStorageBlobDataReaders(scope);
    return roleAssignments
        .Count(t => t.PrincipalId == userPrincipalId) > 0;
}
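The HasRoleStorageBlobDataReaderForScope method calls a GetStorageBlobDataReaders method which is not listed in this post; a minimal sketch, mirroring GetStorageBlobDataContributors and reusing the Storage Blob Data Reader role ID from the comment above, could look like this:

// Sketch only: mirrors GetStorageBlobDataContributors for the reader role,
// using the Storage Blob Data Reader built-in role ID listed in the comment above.
private IEnumerable<IRoleAssignment> GetStorageBlobDataReaders(string scope)
{
    var roleAssignments = _authenticatedClient
        .RoleAssignments
        .ListByScope(scope);

    var storageBlobDataReaders = roleAssignments
        .Where(d => d.RoleDefinitionId
            .Contains("2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"));

    return storageBlobDataReaders;
}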

Add authorization to the ASP.NET Core Web application

Now that the authorization data can be requested from Azure, this can be used in the ASP.NET Core application. Requirement, policy and handler implementations can be used for this. The StorageBlobDataContributorRoleRequirement class implements the IAuthorizationRequirement interface.

namespace AspNetCoreAzureStorage
{
    public class StorageBlobDataContributorRoleRequirement : IAuthorizationRequirement
    {
    }
}

The StorageBlobDataContributorRoleHandler implements the AuthorizationHandler which uses the StorageBlobDataContributorRoleRequirement requirement. The handler uses the AzureManagementFluentService to query the Azure data and the handler will succeed if the user has the built in Azure role Storage Blob Data Contributor.

using Microsoft.AspNetCore.Authorization;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreAzureStorage
{
    public class StorageBlobDataContributorRoleHandler
        : AuthorizationHandler<StorageBlobDataContributorRoleRequirement>
    {
        private readonly AzureManagementFluentService _azureManagementFluentService;

        public StorageBlobDataContributorRoleHandler(
            AzureManagementFluentService azureManagementFluentService)
        {
            _azureManagementFluentService = azureManagementFluentService;
        }

        protected override Task HandleRequirementAsync(
            AuthorizationHandlerContext context,
            StorageBlobDataContributorRoleRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var scope = "subscriptions/..../storageAccounts/azureadfiles";

            var spIdUserClaim = context.User.Claims.FirstOrDefault(t =>
                t.Type == "http://schemas.microsoft.com/identity/claims/objectidentifier");

            if (spIdUserClaim != null)
            {
                var success = _azureManagementFluentService
                    .HasRoleStorageBlobDataContributorForScope(spIdUserClaim.Value, scope);

                if (success)
                {
                    context.Succeed(requirement);
                }
            }

            return Task.CompletedTask;
        }
    }
}

The Startup class adds the handlers and the services to the IoC container of ASP.NET Core. Two policies are added: one to check the Storage Blob Data Contributor requirement and one for the Storage Blob Data Reader role check.

services.AddSingleton<IAuthorizationHandler, StorageBlobDataContributorRoleHandler>();
services.AddSingleton<IAuthorizationHandler, StorageBlobDataReaderRoleHandler>();

services.AddAuthorization(options =>
{
    options.AddPolicy("StorageBlobDataContributorPolicy", policyIsAdminRequirement =>
    {
        policyIsAdminRequirement.Requirements
            .Add(new StorageBlobDataContributorRoleRequirement());
    });
    options.AddPolicy("StorageBlobDataReaderPolicy", policyIsAdminRequirement =>
    {
        policyIsAdminRequirement.Requirements
            .Add(new StorageBlobDataReaderRoleRequirement());
    });
});
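The AzureManagementFluentService used by the handlers also has to be registered in the IoC container; that registration is not shown in this excerpt, but a minimal sketch (the singleton lifetime and constructor signature are assumptions based on the snippets above) could be:

// Sketch only: make the fluent service available to the authorization handlers
// via constructor injection. The service reads the AzureManagementFluent:*
// settings from configuration, as the AuthenticateClient method implies.
services.AddSingleton<AzureManagementFluentService>();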

The Razor page uses the policy to show or hide the data. If the user has authenticated in the ASP.NET Core application and has the required roles in Azure, the data will be returned. The checks are also validated on Azure and not just in the ASP.NET Core application.

@page
@using Microsoft.AspNetCore.Authorization
@inject IAuthorizationService AuthorizationService
@model AspNetCoreAzureStorage.Pages.AzStorageFilesModel
@{
    ViewData["Title"] = "Azure Storage Files";
    Layout = "~/Pages/Shared/_Layout.cshtml";
}

@if ((await AuthorizationService.AuthorizeAsync(User, "StorageBlobDataContributorPolicy")).Succeeded)
{
    <div class="card">
        ///... rest of code omitted see src
    </div>
}
else
{
    <p>User has not contributor access role for blob storage</p>
}
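As an alternative (or addition) to the view-level check with IAuthorizationService, the same policy can be applied to a whole page model or controller with the Authorize attribute. This is a minimal sketch, not taken from the sample code; the page model name is reused from the view above for illustration only.

// Sketch only: protect a Razor page (or MVC controller) with the policy
// directly, instead of (or in addition to) checking it in the view.
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

[Authorize(Policy = "StorageBlobDataContributorPolicy")]
public class AzStorageFilesModel : PageModel
{
    public void OnGet()
    {
        // Only reached when the StorageBlobDataContributorRoleHandler
        // succeeded for the signed-in user.
    }
}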

When the application is started, the roles are checked and the views are displayed as required.

Notes:

This is just one way to use the role assignments authorization in an ASP.NET Core application using Azure Storage Blob containers. This would become complicated if used with lots of users or different types of service principals. In a follow-up post we will explore other ways of implementing authorization in ASP.NET Core for Azure roles and Azure role assignments.

Links:

Role assignments

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

Using Azure Management Libraries for .NET to manage Azure AD users, groups, and RBAC Role Assignments

https://management.azure.com/subscriptions/subscriptionId/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01

https://docs.microsoft.com/en-us/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-authentication

https://docs.microsoft.com/en-us/rest/api/authorization/role-assignment-rest-sample

Further Links:

https://github.com/Azure-Samples/storage-dotnet-azure-ad-msal

https://winsmarts.com/access-azure-blob-storage-with-standards-based-oauth-authentication-b10d201cbd15

https://stackoverflow.com/questions/45956935/azure-ad-roles-claims-missing-in-access-token

https://github.com/AzureAD/microsoft-identity-web/wiki

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction

https://blog.soft-cor.com/empowering-developer-teams-to-manage-their-own-azure-rbac-permissions-in-highly-regulated-industries/

Monday, 15. February 2021

Jon Udell

My print edition superpower

When I walk around our neighborhood I see very few copies of our local newspaper on sidewalks, porches, and front lawns. We subscribe because it’s a lifelong habit, and because we like to support local journalism, and in my case because it’s a welcome reprieve from a long day of screentime. This archaic habit has … Continue reading My print edition superpower

When I walk around our neighborhood I see very few copies of our local newspaper on sidewalks, porches, and front lawns. We subscribe because it’s a lifelong habit, and because we like to support local journalism, and in my case because it’s a welcome reprieve from a long day of screentime.

This archaic habit has also, weirdly, become a kind of superpower. From time to time I meet people who are surprised when I know facts about them. These facts are public information, so why should it be surprising that I know them? Because they appear in the print edition of the newspaper. Consider the man in this photo.

We were out for a Sunday morning walk, carrying the newspaper, when we saw him in a cafe. We had just read an article about Whiskerino, the annual contest of fancy beards and mustaches in Petaluma. And here was the guy whose photo was featured in the article!

That was a nice coincidence, but the kicker is that he had no idea his picture was in the paper. Not being of newspaper-reading age, and not having friends of newspaper-reading age, he had to depend on elders to alert him. I guess we got to him before his parents did.

When a similar thing happened this weekend, it occurred to me that this could be a good marketing strategy for newspapers. Do you want to wield an information superpower? Do you want to amaze people by knowing facts that they can’t imagine you could possibly know? Subscribe to the print edition of your local newspaper!

Sunday, 14. February 2021

Identity Woman

My Articles On DIF

In 2020 I had a contract along with Juan Caballero to do communications at DIF for a few months. We got the youtube channel going with content from the F2F event and published several articles. I coauthored this one with Margo Johnson about the glossary process we went through to define Wallet, Agent and Credential. […] The post My Articles On DIF appeared first on Identity Woman.

In 2020 I had a contract along with Juan Caballero to do communications at DIF for a few months. We got the youtube channel going with content from the F2F event and published several articles. I coauthored this one with Margo Johnson about the glossary process we went through to define Wallet, Agent and Credential. […]

The post My Articles On DIF appeared first on Identity Woman.

Saturday, 13. February 2021

Information Answers

(P)algorithms

I mentioned the concept of Personal Algorithms, or (P)algorithms back in this post at the start of the Covid pandemic. I think they make for an […]
I mentioned the concept of Personal Algorithms, or (P)algorithms back in this post at the start of the Covid pandemic. I think they make for an […]

Friday, 12. February 2021

Phil Windley's Technometria

Passwords Are Ruining the Web

Summary: Passwords are ruining the web with awful, lengthy, and inconsistent user experiences. They're insecure and lead to data breaches. The good news is there are good ways for web sites to be passwordless. If you hate passwords, build the world you want to live in. Compare, for a moment, your online, web experience at your bank with the mobile experience from the same bank. Chances

Summary: Passwords are ruining the web with awful, lengthy, and inconsistent user experiences. They're insecure and lead to data breaches. The good news is there are good ways for web sites to be passwordless. If you hate passwords, build the world you want to live in.

Compare, for a moment, your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button, which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill the username, and only then does it show you the password field [1]. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when I need something from my bank, I pull out my phone and use the mobile app. And it's not just banking. This experience is replicated on any web site that requires authentication. Passwords and the authentication experience are ruining the web.

I wouldn't be surprised to see businesses abandon functional web sites in the future. There will still be some marketing there (what we used to derisively call "brochure-ware") and a pointer to the mobile app. Businesses love mobile apps not only because they can deliver a better user experience (UX) but because they allow businesses to better engage people. Notifications, for example, get people to look at the app, giving the business opportunities to increase revenue. And some things, like airline boarding passes, just work much better on mobile.

Another factor is that we consider phones to be "personal devices". They aren't designed to be multi-user. Laptops and other devices, on the other hand, can be multi-user, even if in practice they usually are not. Consequently, browsers on laptops get treated as less secure, and session invalidation periods are much shorter, requiring people to log in more frequently than in mobile apps.

Fortunately, web sites can be passwordless, relieving some of the pain. Technologies like FIDO2, WebAuthn, and SSI allow for passwordless user experiences on the web as well as mobile. The kicker is that this isn't a trade-off with security. Passwordless options can be more secure, and even more interoperable, with a better UX than passwords. Everybody wins.
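To make that concrete, here is a rough sketch of what a WebAuthn login ceremony looks like in the browser, using the standard navigator.credentials.get API. The /webauthn/options and /webauthn/verify endpoints, and the base64 encoding of the challenge and credential ids, are assumptions standing in for whatever a particular site's backend provides.

// Minimal sketch of a passwordless (WebAuthn) login in the browser (TypeScript).
// The /webauthn/options and /webauthn/verify endpoints are hypothetical.
const fromB64 = (s: string): Uint8Array =>
  Uint8Array.from(atob(s), (c) => c.charCodeAt(0));
const toB64 = (b: ArrayBuffer): string =>
  btoa(String.fromCharCode(...new Uint8Array(b)));

async function passwordlessLogin(): Promise<void> {
  // 1. Fetch a challenge and the allowed credential ids for this user.
  const options = await (await fetch("/webauthn/options", { method: "POST" })).json();

  // 2. Ask the authenticator (platform biometric or security key) to sign it.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: fromB64(options.challenge),
      allowCredentials: (options.allowCredentials ?? []).map((c: { id: string }) => ({
        type: "public-key" as const,
        id: fromB64(c.id),
      })),
      userVerification: "preferred",
    },
  })) as PublicKeyCredential;

  // 3. Return the signed assertion for server-side verification. No password involved.
  const resp = assertion.response as AuthenticatorAssertionResponse;
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: assertion.id,
      clientDataJSON: toB64(resp.clientDataJSON),
      authenticatorData: toB64(resp.authenticatorData),
      signature: toB64(resp.signature),
    }),
  });
}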

Notes

1. This is known as "identifier-first authentication". By asking for the identifier, the authentication service can determine how to authenticate you. So, if you're using token authentication instead of passwords, it can present that next. Some places do this well, merely hiding the password field using JavaScript and CSS so that password managers can still fill the password even though it's not visible. Others don't.

Photo Credit: Login Window from AchinVerma (Pixabay)

Tags: identity passwords authentication ssi


MyDigitalFootprint

The New Fatigue - what is this all about?

Not sure about you but there is something different about the current situation (Feb 2021).  I wrote about the  7B’s as our responses to lockdown and I believe still that viewpoint stands. However, there is something new, right now, which is different it is a mistiness,  a malaise, fatigue, sapped, brain fog.  To be clear, this is not a loss of motivation, depression or oth


Not sure about you, but there is something different about the current situation (Feb 2021).  I wrote about the 7B’s as our responses to lockdown and I still believe that viewpoint stands. However, there is something new right now, something different: a mistiness, a malaise, fatigue, feeling sapped, brain fog.  To be clear, this is not a loss of motivation, depression or other mental health issues - which are all very real and getting to everyone - but I am talking about something else.

In 1996 (web 1.0) the market discovered the opportunity of an online market: everyone took their physical business and replicated it exactly (give or take a bit) to work in a web browser.  Online had arrived.  We quickly worked out that this was an unmitigated disaster; as a user experience, operationally and in the back office it was a mess.  Come 2001, post-crash, we had stopped taking offline thinking, plonking it online and telling ourselves it was fab.  We started digital-first.

In 2020 we were forced to take the remains of all our offline ways of working and stuff them online - meetings, innovation, communication, management, reporting, selling, socialising, dating, laughing, onboarding, education, training - everything else we did in person and face to face.

We have just replicated the 1996 sorry mess - we imagined that we could take what was offline, stuff it into the tech available and it would be magic. My fatigue, I am sure, is because it has not been designed digital-first - to be specific, I mean the workflow, UI and UX. Starting from “what work is needed” and not from “does this great scalable platform do the job!”

Perhaps we need to stop, look in the mirror and say to ourselves: “Look, this pandemic is not going away.”  Then we need to work out what work needs to be done and how we can do it digital-first, rather than pretending it is all OK and that we can ignore it because we will all go back to normal - please, no.  Just expecting everyone to buckle down and get on with it is no longer a good strategy.

I ask three questions

Is this working for us? - No, it is not working; the malaise, fatigue and brain fog are a clue.  We got away with it for a time, but this is not a long-term solution.

Who is “us”?  - Those working from home, but the work is more than working at home; we have lost our connectedness, bonding and togetherness.

Of whom are we asking the question? - Senior leadership and Directors, who owe a duty of care and responsibility. It is now obvious that we have to reimagine work, starting from digital-first, for all those aspects of our working lives that we had left alone because they were too hard and difficult.






Ludo Sketches

ForgeRock Directory Services 7

In August 2020, we’ve rolled out a new release of the ForgeRock Identity Platform which included updated versions of all of the products, including Directory Services 7.0. I didn’t write a post about the new release, mostly due to our… Continue reading →

In August 2020, we rolled out a new release of the ForgeRock Identity Platform which included updated versions of all of the products, including Directory Services 7.0. I didn’t write a post about the new release, mostly due to our focus on delivering the ForgeRock Identity Cloud, and family vacation.

But ForgeRock Directory Services 7.0 is a major release in many ways. It is the first to be released with a sample Dockerfile and full support for running in Kubernetes environments in the cloud. To achieve that, we’ve made a number of significant changes, especially in how security is managed and how replication is configured and enabled. The rest of the server remains largely the same, delivering consistent performance and reliability. You should read the release notes for all the details.

Since then, DS 7 has been successfully deployed in production, in VMs or in Docker/Kubernetes, and our customers have praised the simplicity and efficiency of the new version. However, some customers have experienced difficulties upgrading their current deployment to the 7.0 release, mostly due to the changes I’ve mentioned above. So we have been improving our Upgrade Guide with greater detail, and my colleague Mark Craig has posted a series of three articles on Upgrading to Directory Services 7:

What has changed?
Upgrading by adding new servers
Doing In-place Upgrade

If you’re planning to upgrade an OpenDJ or ForgeRock Directory Services deployment to the latest release, I would highly recommend reading the Directory Services Upgrade Guide, and then Mark’s posts.


Identity Woman

Podcast: Mint & Burn

I had a great time with the folks at RMIT on their Mint & Burn Podcast. Enjoy! The post Podcast: Mint & Burn appeared first on Identity Woman.

I had a great time with the folks at RMIT on their Mint & Burn Podcast. Enjoy!

The post Podcast: Mint & Burn appeared first on Identity Woman.


The Flavors of Verifiable Credentials

I have authored a new paper in my new role as Ecosystems Director at CCI. You can read the blog post about it on the Linux Foundation Public Health and download the paper in PDF form here. The post The Flavors of Verifiable Credentials appeared first on Identity Woman.

I have authored a new paper in my new role as Ecosystems Director at CCI. You can read the blog post about it on the Linux Foundation Public Health and download the paper in PDF form here.

The post The Flavors of Verifiable Credentials appeared first on Identity Woman.


Two Exciting New Roles

I should have written this post at the beginning of the year…but the year is still young. I have two new part time roles that I’m really excited about. I am the Ecosystems Director at the Covid-19 Credentials Initiative. I am working with a fantastic team helping lead/organize this community. Lucy Yang is the Community […] The post Two Exciting New Roles appeared first on Identity Woman.

I should have written this post at the beginning of the year…but the year is still young. I have two new part time roles that I’m really excited about. I am the Ecosystems Director at the Covid-19 Credentials Initiative. I am working with a fantastic team helping lead/organize this community. Lucy Yang is the Community […]

The post Two Exciting New Roles appeared first on Identity Woman.

Tuesday, 09. February 2021

Phil Windley's Technometria

Persistence, Programming, and Picos

Summary: Picos show that image-based development can be done in a manner consistent with the best practices we use today without losing the important benefits it brings. Jon Udell introduced me to a fascinating talk by the always interesting r0ml. In it, r0ml argues that Postgres as a programming environment feels like a Smalltalk image (at least that's the part that's germane to th

Summary: Picos show that image-based development can be done in a manner consistent with the best practices we use today without losing the important benefits it brings.

Jon Udell introduced me to a fascinating talk by the always interesting r0ml. In it, r0ml argues that Postgres as a programming environment feels like a Smalltalk image (at least that's the part that's germane to this post). Jon has been working this way in Postgres for a while. He says:

For over a year, I’ve been using Postgres as a development framework. In addition to the core Postgres server that stores all the Hypothesis user, group, and annotation data, there’s now also a separate Postgres server that provides an interpretive layer on top of the raw data. It synthesizes and caches product- and business-relevant views, using a combination of PL/pgSQL and PL/Python. Data and business logic share a common environment. Although I didn’t make the connection until I watched r0ml’s talk, this setup hearkens back to the 1980s when Smalltalk (and Lisp, and APL) were programming environments with built-in persistence. From The Image of Postgres
Referenced 2021-02-05T16:44:56-0700

Here's the point in r0ml's talk where he describes this idea:

As I listened to the talk, I was a little bit nostalgic for my time using Lisp and Smalltalk back in the day, but I was also excited because I realized that the model Jon and r0ml were talking about is very much alive in how one goes about building a pico system.

Picos and Persistence

Picos are persistent compute objects. Persistence is a core feature of how picos work. Picos exhibit persistence in three ways. Picos have [1]:

Persistent identity—Picos exist, with a single identity, continuously from the moment of their creation until they are destroyed.
Persistent state—Picos have state that programs running in the pico can see and alter.
Persistent availability—Picos are always on and ready to process queries and events.

Together, these properties give pico programming a different feel than what many developers are used to. I often tell my students that programmers write static documents (programs, configuration files, SQL queries, etc.) that create dynamic structures—the processes that those static artifacts create when they're run. Part of being a good programmer is being able to envision those dynamic structures as you program. They come alive in your head as you imagine the program running.

With picos, you don't have to imagine the structure. You can see it. Figure 1 shows the current state of the picos in a test I created for a collection of temperature sensors.

Figure 1: Network of Picos for Temperatures Sensors (click to enlarge)

In this diagram, the black lines show the parent-child hierarchy and the dotted pink lines show the peer-to-peer connections between picos (called "subscriptions" in current pico parlance). Parent-child hierarchies are primarily used to manage the picos themselves, whereas the heterarchical connections between picos are used for programmatic communication and represent the relationships between picos. As new picos are created or existing picos are deleted, the diagram changes to show the dynamic computing structure that exists at any given time.

Clicking on one of the boxes representing a pico opens up a developer interface that enables interaction with the pico according to the rulesets that have been installed. Figure 2 shows the Testing tab of the developer interface for the io.picolabs.wovyn.router ruleset in the pico named sensor_line after the lastTemperature query has been made. Because this is a live view into the running system, the interface can be used to query the state and raise events in the pico.

Figure 2: Interacting with a Pico (click to enlarge)

A pico's state is updated by rules running in the pico in response to events that the pico sees. Pico state is made available to rules as persistent variables in KRL, the ruleset programming language. When a rule sets a persistent variable, the state is persisted after the rule has finished execution and is available to other rules that execute later [2]. The Testing tab allows developers to raise events and then see how that impacts the persistent state of the pico.

Programming Picos

As I said, when I saw r0ml's talk, I was immediately struck by how much programming picos felt like using the Smalltalk or Lisp image. In some ways, it's like working with Docker images in a Fargate-like environment since it's serverless (from the programmer's perspective). But there's far less to configure and set up. Or maybe, more accurately, the setup is linguistically integrated with the application itself and feels less onerous and disconnected.

Building a system of picos to solve some particular problem isn't exactly like using Smalltalk. In particular, in a nod to modern development methodologies, the rulesets are installed from URLs and thus can be developed in the IDE the developer chooses and versioned in git or some other versioning system. Rulesets can be installed and managed programmatically so that the system can be programmed to manage its own configuration. To that point, all of the interactions in the developer interface are communicated to the pico via an API installed in the picos. Consequently, everything the developer interface does can be done programmatically as well.
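As a rough sketch of that programmatic surface, the snippet below raises an event and queries a ruleset function over HTTP, assuming a local pico engine and its Sky event/query endpoints. The engine URL, event channel identifier (eci), and the event domain and type are placeholders; the ruleset id and lastTemperature query mirror the example in Figure 2.

// Sketch only (TypeScript): talking to a pico over the engine's HTTP (Sky) APIs.
// ENGINE and ECI are placeholders; the event domain/type is made up for the
// temperature example, while the ruleset id and query mirror Figure 2.
const ENGINE = "http://localhost:3000";    // adjust to your pico engine's host/port
const ECI = "your-pico-event-channel-id";  // an event channel on the target pico

// Raise an event: POST /sky/event/<eci>/<eid>/<domain>/<type>
async function raiseTemperatureEvent(tempF: number): Promise<unknown> {
  const url = `${ENGINE}/sky/event/${ECI}/none/wovyn/new_temperature_reading`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ temperature: tempF }),
  });
  return res.json();
}

// Query a ruleset function: GET /sky/cloud/<eci>/<ruleset-id>/<function>
async function lastTemperature(): Promise<unknown> {
  const url = `${ENGINE}/sky/cloud/${ECI}/io.picolabs.wovyn.router/lastTemperature`;
  const res = await fetch(url);
  return res.json();
}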

Figure 3 shows the programming workflow that we use to build production pico systems.

Figure 3: Programming Workflow (click to enlarge)

The developer may go through multiple iterations of the Develop, Build, Deploy, Test phases before releasing the code for production use. What is not captured in this diagram is the interactive feel that the pico engine provides for the testing phase. While automated tests can test the unit and system functionality of the rules running in the pico, the developer interface provides a visual tool for envisioning the interaction of the picos that are animated by dynamic interactions. Being able to query the state of the picos and see their reaction to specific events in various configurations is very helpful in debugging problems.

A pico engine can use multiple images (one at a time). And an image can be zipped up and shared with another developer or checked into git. By default the pico engine stores the state of the engine, including the installed rulesets, in the ~/.pico-engine/ directory. This is the image. Developers can change this to a different directory by setting the PICO_ENGINE_HOME environment variable. By changing the PICO_ENGINE_HOME environment variable, you can keep different development environments or projects separate from each other, and easily go back to the place you left off in a particular pico application.

For example, you could have a different pico engine image for a game project and an IoT project and start up the pico engine in either environment like so:

# work on my game project
PICO_ENGINE_HOME=~/.dnd_game_image pico-engine

# work on IoT project
PICO_ENGINE_HOME=~/.iot_image pico-engine

Images and Modern Development

At first, the idea of using an image of the running system and interacting with it to develop an application may seem odd or out of step with modern development practices. After all, developers have had the idea of layered architectures and separation of concerns hammered into them. And image-based development in picos seems to fly in the face of those conventions. But it's really not all that different.

First, large pico applications are not generally built up by hand and then pushed into production. Rather, the developers in a pico-based programming project create a system that comes into being programmatically. So, the production image is separate from the developer's work image, as one would like. Another way to think about this, if you're familiar with systems like Smalltalk and Lisp, is that programmers don't develop systems using a REPL (read-eval-print loop). Rather they write code, install it, and raise events to cause the system to take action.

Second, the integration of persistence into the application isn't all that unusual when one considers the recent move to microservices, with local persistence stores. I built a production connected-car service called Fuse using picos some years ago. Fuse had a microservice architecture even though it was built with picos and programmed with rules.

Third, programming in image-based systems requires persistence maintenance and migration work, just like any other architecture does. For example, a service for healing API subscriptions in Fuse was also useful when new features, requiring new APIs, were introduced, since the healing worked as well for new API subscriptions as it did for existing ones. These kinds of rules allowed the production state to migrate incrementally as bugs were fixed and features added.

Image-based programming in picos can be done with all the same care and concern for persistence management and loose coupling as in any other architecture. The difference is that developers and system operators (these days often one and the same) in a pico-based development activity are saved the effort of architecting, configuring, and operating the persistence layer as a separate system. Linguistically incorporating persistence in the rules provides for more flexible use of persistence with less management overhead.

Stored procedures are not likely to lose their stigma soon. Smalltalk images, as they were used in the 1980's, are unlikely to find a home in modern software development practices. Nevertheless, picos show that image-based development can be done in a manner consistent with the best practices we use today without losing the important benefits it brings.

Future Work

There are some improvements that should be made to the pico-engine to make image-based development better.

Moving picos between engines is necessary to support scaling of pico-based systems. It is still too hard to migrate picos from one engine to another. And when you do, the parent-child hierarchy is not maintained across engines. This is a particular problem with systems of picos that have varied ownership.
Managing images using environment variables is clunky. The engine could have better support for naming, creating, switching, and deleting images to support multiple projects.
Bruce Conrad has created a command-line debugging tool that allows declarations (which don't affect state) to be evaluated in the context of a particular pico. This functionality could be better integrated into the developer interface.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

Notes
1. These properties are dependent on the underlying pico engine, and the persistence of picos is subject to availability and correct operation of the underlying infrastructure.
2. Persistent variables are lexically scoped to a specific ruleset to create a closure over the variables. But this state can be accessed programmatically by other rulesets installed in the same pico by using the KRL module facility.

Tags: picos identity persistence programming

Monday, 08. February 2021

Matt Flynn: InfoSec | IAM

Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management

Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management Over a decade ago, the need for quicker SaaS onboarding led to Siloed IAM for early IDaaS adopters. For many, IDaaS evolved to a Hybrid IAM approach. Today, Oracle’s IDaaS provides comprehensive coverage for enterprise apps.  "IDaaS has matured quite a bit over the last several years and no l

Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management

Over a decade ago, the need for quicker SaaS onboarding led to Siloed IAM for early IDaaS adopters. For many, IDaaS evolved to a Hybrid IAM approach. Today, Oracle’s IDaaS provides comprehensive coverage for enterprise apps. 

"IDaaS has matured quite a bit over the last several years and no longer relies as much on SAML or pre-built app templates. Today, Oracle Identity Cloud Service helps manage access to virtually any enterprise target. To accomplish that, we’ve introduced several technical approaches to bringing more applications into the IDaaS fold with less effort. These approaches, combined, provide the easiest path toward enabling the service to manage access for more systems and applications."

Read more on the Oracle Cloud Security Blog > Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management.

Sunday, 07. February 2021

Doc Searls Weblog

Why the Chiefs will win the Super Bowl

I think there are more reasons to believe in the Bucs than the Chiefs today: better offensive line, better defense, Brady’s unequaled Super Bowl experience, etc. But the Chiefs are favored by 3.5 points, last I looked, and they have other advantages, including the best quarterback in the game—or maybe ever—in Patrick Mahomes. And that’s […]

I think there are more reasons to believe in the Bucs than the Chiefs today: better offensive line, better defense, Brady’s unequaled Super Bowl experience, etc. But the Chiefs are favored by 3.5 points, last I looked, and they have other advantages, including the best quarterback in the game—or maybe ever—in Patrick Mahomes.

And that’s the story. The incumbent GOAT (greatest of all time) is on his way out and the new one is on his way in. This game will certify that. I also think the Chiefs will beat the spread. By a lot. Because Mahomes and the Chiefs’ offense is just that good, and that ready.

Disclosures… In 2016, I correctly predicted, for the same reason (it makes the best story), that Lebron James and the Cleveland Cavaliers would beat the Golden State Warriors for the NBA championship. Also, a cousin of mine (once removed—he’s the son of my cousin) is Andy Heck, the Chiefs’ offensive line coach. So, as a long-time fan of both the Patriots and Tom Brady, I’ll be cool with either team winning.

But I do think a Chiefs win makes a better story. Especially if Mahomes does his magic behind an offensive line of injuries and substitutes outperforming expectations.

[Later…] The Chiefs lost, 31-9, and their o-line was terrible. Poor Pat had to use his scrambling skills to the max, running all over the backfield looking for a well-covered receiver. And he came inches from hitting one in the end zone at least twice, while on the run 50 or more yards away. This was the Chiefs’ worst loss in the Mahomes era. Anyway, it looked and felt like it. But hey: congrats to the Bucs. They truly kicked ass.

 

 


Jon Udell

Continental drift

In a 1999 interview David Bowie said that “the potential for what the Internet is going to do to society, both good and bad, is unimaginable.” I had no problem imagining the good, it was the bad where my imagination failed. The web of the late 90s was a cornucopia of wonder, delight, and inspiration. … Continue reading Continental drift

In a 1999 interview David Bowie said that “the potential for what the Internet is going to do to society, both good and bad, is unimaginable.” I had no problem imagining the good; it was the bad where my imagination failed. The web of the late 90s was a cornucopia of wonder, delight, and inspiration. So was the blogosphere of the early 2000s. I know a lot of us are nostalgic for those eras, and depressed about how things have turned out. The bad is really quite bad, and sometimes I feel like there’s no way forward.

And then something wonderful happens. This time the spark was David Grinspoon aka @DrFunkySpoon. I’ve written before about a Long Now talk in which he posits that we might not just be living through the beginning of a geological epoch called the Anthropocene but rather — and far more profoundly — the dawn of an eon that he calls the Sapiezoic. Today he posted a stunning new visualization of plate tectonics.

As always when I think about plate tectonics, I’m reminded of the high school science teacher who introduced me to the topic. His name is John Ousey, and this happened almost 50 years ago. What always stuck with me is the way he presented it. Back then, plate tectonics was a new idea. As Mr. Ousey (he later became Dr. Ousey) described the continents sliding apart, I can still see the bemused look on his face. He was clearly wrestling with the concept, unsure whether to believe things really happen that way. That healthy skepticism, coupled with trust in the scientific process, made an indelible impression on me.

One of the wonders of the Internet is the ability to find people. It took some sleuthing, but I did find him and this ensued.

I wrote to John Ousey and he replied!

"I learned about Plate Tectonics (then called Continental Drift) by taking weekend courses taught by the great teacher/researcher Ehrling Dorf of Princeton. It was brand new and I'm not sure that he was totally convinced about it either."

— Jon Udell (@judell) February 6, 2021

That’s the kind of magic that can still happen, that does happen all the time.

I learned for the first time that John Ousey’s introduction to plate tectonics came by way of “weekend courses taught by the great teacher/researcher Ehrling Dorf of Princeton” who was himself perhaps “not totally convinced.” Despite uncertainty, which he acknowledged, John Ousey was happy to share an important new idea with his Earth science class.

What a privilege to be able to thank him, after all these years, for being a great teacher who helped me form a scientific sensibility that has never mattered more than now. And to share a moment of appreciation for an extraordinary new visualization of the process once known as continental drift. Yes, there’s a dark side to our connected world, darker than I was once willing to imagine. But there is also so much light. It’s always helpful to consider deep geological time. That video shows a billion years of planetary churn. We’ve only been connected like this for 25 years. Maybe we’ll figure it out. For today, at least, I choose to believe that we will.

Friday, 05. February 2021

Identity Woman

Radical Exchange Talk: Data Agency. Individual or Shared?

I had a great time on this Radical Exchange conversation The post Radical Exchange Talk: Data Agency. Individual or Shared? appeared first on Identity Woman.

I had a great time on this Radical Exchange conversation

The post Radical Exchange Talk: Data Agency. Individual or Shared? appeared first on Identity Woman.

Thursday, 04. February 2021

MyDigitalFootprint

Quantum Risk: a wicked problem that emerges at the boundaries of our data dependency

Framing the problem I am fighting bias and prejudice about risk perceptions; please read the next lines before you click off.  We tend to be blind sighted to “risk” because we have all lived it, read it and listened to risk statements.  The ones on the TV and radio for financial products, the ones at the beginning of investment statements, ones for health and safety for machinery, ones
Framing the problem

I am fighting bias and prejudice about risk perceptions; please read the next lines before you click off.  We tend to be blind to “risk” because we have all lived it, read it and listened to risk statements.  The ones on the TV and radio for financial products, the ones at the beginning of investment statements, ones for health and safety on machinery, ones for medicine, ones on the packets of cigarettes, the ones when you open that new app on your new mobile device. We are bombarded with endless risk statements that we assume we know the details of, or just ignore.  There are more books on risk than on all other management and economics topics together.  There is an entire field on the ontologies of risk; such is the significance of this field. This article is suggesting that all of that body of knowledge and expertise has missed something.  A bold statement, but quantum risk is new, big, ugly, and already here; it's just that we are willingly blind to it.

At the end of the Board pack or PowerPoint deck for a new investment, an intervention case or the adoption of a new model, there is a risk and assumptions list.  We have seen these so many times that we don’t read them.  These statements are often copies, and the plagiarism of inaccurately copied risk statements is significant; no effort is put in, as such statements have become a habit in the process methodology.   The problem we all have with risk is that we know it all. Quite frankly, we see risk as the prime reason to stop something, and occasionally to manage it more closely, but never to understand something better.  If you are operating a digital or data business, you have new risks that are not in your risk statement; you have not focused on them before, you are unlikely to have been exposed to them, and this article is to bring them to your attention.  Is that worth 8 minutes?

Many thanks to Peadar Duffy, with whom I have been collaborating on this thinking; he has published a super article on the same topic (quantum risk) here.

The purpose of business 

We know that 3% of our data lake is finance data today; shockingly, 90% of our decisions are based on this sliver of data (source: Google). As we aim for a better “data : decisions” ratio that includes non-financial data, we will make progress towards making better decisions that benefit more than a pure shareholder-primacy view of the world.  As leaders, we have a desire to make the best possible decisions we can. We fuse data, experience and knowledge to balance our perception of risk, probability and desired outcomes.

The well-publicised “Business Roundtable” report in Aug 2019 redefines a corporation’s purpose to promote ‘An Economy That Serves All … [Americans]’.  The idea that company purpose should be closer to ecosystem thinking has been gaining prevalence since the financial crisis in 2008.  The thinking has significant supporters such as Larry Fink,  Blackrock’s founder and CEO, who is an influential voice for ESG reporting and promotes changes to decision making criteria for better outcomes. His yearly letters are an insightful journey. 

Sir Donald Brydon's Dec 2019 report highlights that governance and audit need attention if we are to deliver better decisions, transparency and accountability. The report concludes that audit has significant failings and our approach to tick box compliance is not serving directors, shareholders or society to the level expected. Given that so much of our risk management depends on the quality of the audit, internal and external, it is likely that we are unduly confident in data that is unreliable. This point alone about audit failure could be sufficient for this article’s conclusion; however, we are here to explore Quantum Risk. Quantum Risk only exists because of the business dependency we now have on data from our co-dependent supply chains to dependent ecosystems.  

Quantum Risk is NEW  

As a term from physics that describes particles’ properties, “quantum” will help frame new risk characteristics.  The primary characteristics of quantum particles’ behaviour are: the uncertainty principle, composite systems and entanglement.   In language I understand, these characteristics for quantum risk are:

When you observe the same risk twice, it might not be there, and it will look different.

The same risk can be in many places simultaneously, but it is only one risk.

Your risk and my risk directly affect each other across our data ecosystem; they are coupled but may not be directly connected.

Framing Risk

Risk, like beauty, privacy, trust, transparency and many other ideals, is a personal perspective on the world; however, we all accept that we have to live with risk.

Risk, and the management of risk, fundamentally assumes that you can identify it first.  If you cannot identify the risk, there is no risk to consider or manage. 

Having identified the risk, you assess the risks to categorise and prioritise them using the classic impact vs likelihood model. 

Finally, the management (review and control) of risk determines if you are doing the right things or action is needed.  

It is possible to add a third axis to the classic likelihood and impact risk model: “quality of knowledge.” The third axis visually highlights that the highest risks accumulate the most knowledge, since that is where management focus and control are required, and that requires data which becomes knowledge.    If there is a deficit in knowledge because of poor data, it translates into an increased risk hidden at any point in the matrix.  Poor data (knowledge) can mean that either the impact (consequence) will be more severe or the likelihood (probability) is higher. In part, we can overcome poor data problems by recognising that they always exist, but this easily hides the very current issues of pandemics and systemic risk. However, if the quality of knowledge is based on erroneous data (data without rights and attestation), we have no truth about the likelihood and impact.

 


Some sophisticated models and maths help us qualify and understand risk depending on its nature and size.  However, the list of risks that any one company faces is defined, specified and has been thought about over a long period.  Uncovering a new risk is considered unlikely; however, that is exactly what we are exploring here, and given our natural confirmation bias towards risk (we know it), this is hard.

Classic risk models are framed to gain certainty, where risk is the identification, understanding, management and control of uncertainty.  Existing risk models are highly efficient within this frame of reference, and we can optimise to our agreed boundaries of control with incredible success.  Risk within our boundary (sphere of direct control) is calculated and becomes a quantified measure, enabling incentives to be created that provide a mechanism for management control.   Risk outside our boundary (indirect control over a longer supply or value chain) is someone else’s risk, but we are dependent on them to manage it. Such dependencies are vital in modern global businesses. We have developed methodology (contracts) and processes (audit) to ensure that we are confident that any risk to us, inside or outside of our direct control, is identified and managed.

However, as leaders, we face three fundamental issues on this move to an economy that serves broader eco-systems as the boundaries we are dependent on have become less clear.  

1. The quality of the data and implied knowledge we receive from our direct and dependent* eco-system, even if based on audit for financial and non-financial data, is unreliable and is increasingly complicated due to different data purposes and ontologies.

2. The quality of the knowledge we receive from our indirect and interdependent** eco-system, even if based on audit for financial and non-financial data, is unreliable and is increasingly complicated due to different data purposes and ontologies.

3. Who is responsible and accountable at second and third-order data boundaries? (The assumption is that the first boundary is direct and already under control in our risk model.)

* Dependent: balancing being able to get what I want by my own effort against being contingent on, or determined by, the actions of someone else to make it work.
** Interdependent: combining my efforts with the efforts of others to achieve successful outcomes together, though this does not have to be mutual or controlled.

Risk as a shared belief has wider dependencies. 

Point 3 above - who is responsible and accountable at second and third-order data boundaries? - introduces the concept of second and third-order boundaries for broader (inter)-dependent ecosystems. This short section explains where those boundaries are and why they matter in the context of a business’s purpose moving toward a sustainable ecosystem (ESG).

The figure below expands the dependency thinking into a visual representation. The three axes are: values/principles as a focus [self, society, planet earth]; who has accountability/obligations [no-one, an elected authority such as a director, society, or all of humanity]; and the health of our eco-systems (prime, secondary, tertiary and all).

The small blue area shows the limitations of our current shareholder-primacy remit, where Directors have a fiduciary duty to ensure that their prime business thrives and value is created for shareholders (stakeholders), at the expense of others. Having a healthy ecosystem helps (competition, choice, diverse risk, margin).  As envisaged by the Business Roundtable, a sustainable ecosystem is the orange area, expanding the Directors’ remit to more eco-systems and embracing more of a “good for society” value set, but it does not increase director accountability.  ESG v1.0 widens the remit to the green area; this step change expands all current thinking and the dependencies of any one player on others in a broader ecosystem. We become sustainable together.

How is it possible for unidentified risks to exist?

In simple terms, there is no new unknown risk; however, what is known to someone may not be known by everyone. Risk is hiding in plain sight. As we are expanding our remits as discussed in the last section above, we are increasingly dependent on others managing their risk to the same level we manage risk and share data across the ecosystem. This is where Quantum Risk arises, at the boundaries, in the long-tail of the universe of risk.

The figure below shows the growing universe of risk. We are very good at the management of insurable, measurable known:known (identified and shared) risk. We are also very good at un-insurable, measurable (impact, likelihood, knowledge) known:unknown risk, mainly because the determined likelihood of occurrence and impact is moderate.  Indeed, we have created excellent tools to help mitigate and accept uninsurable, un-measurable “unknown:unknown” risk.  In mitigation we accept that the data quality (knowledge) is poor, but the impact is low, as is the likelihood.

Quantum risk is the next step out; it is emergent at the boundaries of (inter)-dependencies created as we build sustainable ecosystems where we share data. We are increasingly reliant on data from players indirectly related to our ecosystem, over whom we have no power or control. We have no rights to the data and no clue about its attestation. Quantum risk is not in our current risk model or existing risk frameworks, and it is unimagined by us.


Business Risk Vs Data Risk

Business risk is something that every business has to deal with - Kodak and Nokia handled it, though maybe not as well as, say, IBM, Barclays or Microsoft.   Mobile phone networks should have seen mobile data services coming, and therefore the advent of international voice and video apps, which meant there was always going to be a natural decline in SMS, local and international mobile revenue. Most rejected this business risk in 2005, seeing only growth in core areas.  However good hindsight is, apps such as Signal, WhatsApp and Telegram came about due to the timing of three interrelated advances, which created business risk: device capability, network capability and pricing.  Device designers and manufacturers have to keep pushing technology to keep selling devices; device technology will always advance.   Network capacity was always going to increase, and packet-switched capability has massive economies of scale over voice circuits. Large, fast packet circuits were always going to win.  Pricing by usage prevents usage; bundles work for increasing capacity.  For a mobile operator, the objective is to fill the network capacity that has been built to maximise ROI; bundles work, as do apps that move revenue from one product to the next.  This is a business risk created by change and by dependencies on others in your ecosystem. Quantum risks are business risks too, but they hide in data.

Data Risk falls into three buckets.   

Data that you collect directly as part of the process of doing business.  Critically you can determine the attestation (provenance and lineage) of the data, and it comes from your devices, sensors and systems.  There is a risk that you don’t collect, store, protect, analyse or know if the data is true.  In truth, this is the front end of the long tail in the universe of risk, and it is precisely where we put priority. Nothing new here.

Data you collect from third parties with whom you have a relationship: a supplier, partner, collaborator, associate or public data source.  Whilst you are likely to have the “rights to use data” (contract), you are unlikely to have “attestation” (provenance and lineage) of the shared data back to the origin. You will have access to summary or management levels (knowledge and insights), and you should have audit and other contractual agreements to check.   There is often a mutual relationship where you both share data, both dependent on the data quality. The risk is that you don’t qualify, check, question or analyse this 3rd party data.  In truth, this is another head-end risk of the long tail in the universe of risk, and it is precisely where we put significant resources. The exception is public data, as there is no route to understanding bias, ontology or purpose; however, public data is not usually used exclusively for decision making, with one exception right now, ESG, and this worries me.

Quantum Risk is a data risk where you have neither control of, nor access to, the data. Still, this data set has become critical to decision making as we move to sustainable ecosystems, stewardship codes and ESG.  However, it requires us to dig into the dark and mysterious world of data ontologies, which we have to unpack quickly.

Ontologies  

To explain your reasoning, rationale or position, you need to define how entities are grouped into basic categories that structure your worldview and perspective. If you have a different perspective, you will behave and act differently.  Such a structure is called ontology (how we view the world) and is related to epistemology (how do we know what is true and how we have gone about investigating/ proving it?). Ontology is a branch of philosophy but is critical in understanding data and information science as it encompasses a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities that substantiate one, many, or all domains of discourse. Think of data ontology as a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject. 

At this point you would have thought that, with 5,000 years of thinking about this, we would have one top-level ontology from which everything would flow.  Alas, we don’t have one for anything.  There is no black and white agreed way to look at anything in philosophy, physics, biology, humanities, data, climate, language, sound, knowledge, compute, behaviour or any other topic. This means it is safe to assume your way of describing your world, in your organisation, through data is different from everyone else’s in your ecosystem.  Those same data points represented in 1s and 0s mean completely different things in different ontologies. Your worst scenario is different ontologies inside your silos, which means you have different world views but may not know it.  Ontology is one of the roles for a CDO, explored here.  Now to epistemology, which is concerned with the creation of knowledge, focusing on how knowledge is obtained and investigating the most valid ways to reach the truth. Epistemology essentially determines the relationship between the data, the analyst and reality, and is rooted in your ontological framework. Different data science teams can have the same data set and very different views, and then we add the statistics team.  What truth or lies do you want?  This matters when data is shared - how do you know what your business partners think is true about their data?

It only gets more complicated the more you unpack this, and I will write an article about it soon. However, as shown in the figure, knowing how you view the world in data does not guarantee that everyone else in your ecosystem has the same view.  I have seen very few contracts for data sharing at business data levels that share the ontology and mapping schedules between the parties. Yes, we often share at the naming/data dictionary level, but that is not ontology. Assuming that shared data has the same purpose between the partners is “quantum risk.” This risk is at the boundaries, and it only appears when you look.  Imagine you are sharing data in your ecosystem on critical systems and, as you read this, you realise you have not asked the question about the different world views you and your partners have towards collecting, analysing and reporting data.  The event is not the same thing.  Remember, at the start, we know everything about risk. I am in the same bucket. This is all new.

Responses to Quantum Risk

I made two bold claims at the beginning: “the problem we all have with risk is that we know it all,” and “a bold statement, but quantum risk is new, big, ugly and is already here, it's just that we are willingly blind to it.”  I wish it were easy, but Quantum Risk emerges at our digital business boundaries where we share data; the further out we go, the less attestation and the fewer rights we have. The complexity of Quantum Risk creates havoc with our existing frameworks and models as:

When you observe the same quantum risk twice, it might not be there, and it will look different.

The same quantum risk can be in many places at the same time, but it is only one risk.

Your quantum risk and my quantum risk directly affect each other across our data ecosystem, but they are not connected and not seen.

Given this, how do we respond? We need to get better at understanding the purpose of our data; we need to find CDO expertise to help us unpack our data ontologies and rethink what we consider to be boundaries for commercial purposes, which means revisiting our contracts and terms.  One question for those who got this far: have you tested how your users understand your Terms and Conditions on data use and privacy? I have never seen it in a test schedule, as it is a barrier, not a value proposition. We tell users to “Click here” fast and trust us. It is an obvious gap to investigate in a partner when you depend on that data, it is shared with you, and your advertising model now depends on it.

Any good economist/strategist will immediately recognise the opportunity to game data at the boundary. How to create an advantage, and what the implications are, is another whole topic to unpack.

As a final thought, will your corporation consider Quantum Risk? 

If your fellow senior leadership team is focused on the head end of the long tail, you will see a focus on implementing processes that align to regulation/rules/law and policies. You are likely to manage risk very well and be rewarded for doing so via cascading KPIs.  Quantum risk will be thought about when there are best practices or a visible loss of competitive position.

Corporates with a more mature risk profile know there are loopholes and, whilst they focus on compliance, they have a hand in the lobby forums so they can benefit by putting risk onto others and gaining an advantage from owning the IP when the lobby work becomes policy.  Quantum risk thinking will emerge when there is a clear identification of competitive advantage.

The most mature risk leadership teams are creating new thinking to ensure that they are sustainable and not forced to make retrospective changes because they focused only on compliance and had delivery-based, KPI-linked bonuses.  These are the pioneers in digital and will pick up quantum risk first.



Jon Udell

How and why to tell your story online, revisited

I wrote this essay in 2006 as part of a series of Internet explainers I did for New Hampshire Public Radio. It never aired for reasons lost to history, so I’m publishing this 15-year-old time capsule here for the first time. My motive is of course not purely archival. I’m also reminding myself why I … Continue reading How and why to tell your story online, revisited

I wrote this essay in 2006 as part of a series of Internet explainers I did for New Hampshire Public Radio. It never aired for reasons lost to history, so I’m publishing this 15-year-old time capsule here for the first time. My motive is of course not purely archival. I’m also reminding myself why I should still practice now what I preached then.

How and why to tell your story online

Teens and twenty-somethings are flocking to social websites like MySpace and Facebook, where they post photos, music, and personal diaries. Parents, teachers, and cops wish they wouldn’t. It’s a culture war between generations, and right now everybody’s losing.

Kids: Listen up. Did you hear the story about the college student who didn’t get hired because of his Facebook page? Or the teenage girl whose MySpace blog told an attacker when she’d be home alone? These things happen very rarely, but they can happen. Realize that the words and pictures you publish online will follow you around for the rest of your lives. Realize that wrong choices can have embarrassing or even tragic consequences.

Now, grownups, it’s your turn to listen up. You’re right to worry about the kids. But there’s another side to the story. The new forms of Internet self-publishing — including social networks, blogs, podcasting, and video sharing — can be much more than narcissistic games. Properly understood and applied, they’re power tools for claiming identity, exerting influence, and managing reputation. Sadly, very few adults are learning those skills, and fewer still are teaching them.

It’s not enough to condemn bad online behavior. We’ve got to model good online behavior too — in schools, on the job, and in civic life. But we’re stuck in a Catch-22 situation. Kids, who intuit the benefits of the new social media, fail to appreciate the risks. Grownups, meanwhile, see only risks and no benefits.

There’s a middle ground here, and we need to approach it from both sides of the generation gap. The new reality is that, from now on, our lives will be documented online — perhaps by us, perhaps by others, perhaps by both. We may or may not influence what others will say about us. But we can surely control our own narratives, and shape them in ways that advance our personal, educational, professional, and civic agendas.

Your online identity is a lifelong asset. If you invest in it foolishly you’ll regret that. But failing to invest at all is equally foolish. The best strategy, as always, is to invest wisely.

Here’s a simple test to guide your strategy. Imagine someone searching Google for your name. That person might be a college admissions officer, a prospective employer, a new client, an old friend, or even a complete stranger. The reason for the search might be to evaluate your knowledge, interests, agenda, accomplishments, credentials, activities, or reputation.

What do you want that person to find? That’s what you should publish online.

To find three examples of what I mean, try searching the web for the following three names: Todd Suomela, Martha Burtis, Thomas Mahon. In each case, the first Google result points to a personal blog that narrates a professional life.

Todd Suomela is a graduate student at the University of Michigan. On his blog, Todd writes about what he’s learning, and about how his interests and goals are evolving. He hasn’t launched his professional career yet. But when he does, his habit of sharing the information resources he collects, and reflecting thoughtfully on his educational experience, will serve him well.

Martha Burtis is an instructional technologist at the University of Mary Washington. She and her team research and deploy the technologies that students, faculty, and staff use to learn, teach, and collaborate. On her blog, Martha writes about the tools and techniques she and her team are developing, she assesses how her local academic community is making use of those tools and techniques, and thinks broadly about the future of education.

Thomas Mahon is a Savile Row tailor. His shop in London caters to people who can spend two thousand pounds on a classic handmade suit. I'll never be in the market for one of those, but if I were I'd be fascinated by Mahon's blog, EnglishCut.com, which tells you everything you might want to know about Savile Row past and present, about how Mahon practices the craft of bespoke tailoring, and about how to buy and care for the garments he makes.

For Todd and Martha and Thomas, the benefits of claiming their Net identities in these ways run wide and deep. Over time, their online narratives become autobiographies read by friends, colleagues, or clients, and just as importantly, read by people who may one day become friends, colleagues, or clients.

In most cases, of course, the words, pictures, audio, and video you might choose to publish online won’t attract many readers, listeners, or viewers. That’s OK. The point is that the people they do attract will be exactly the right people: those who share your interests and goals.

We’ve always used the term ‘social networking’ to refer to the process of finding and connecting with those people. And that process has always depended on a fabric of trust woven most easily in the context of local communities and face-to-face interaction.

But our interests and goals aren't merely local. We face global challenges that compel us to collaborate on a global scale. Luckily, the new modes of social networking can reach across the Internet to include people anywhere and everywhere. But if we're going to trust people across the Internet, we'll need to be able to check their references. Self-published narrative is one crucial form of evidence. The public reaction to such narratives, readily discoverable thanks to search engines and citation indexes, is another.

Is this a strange new activity? From one perspective it is, and that's why I can't yet point to many other folks who've figured out appropriate and effective ways to be online, as Todd and Martha and Thomas have.

But from another perspective, Internet self-publishing is just a new way to do what we've been doing for tens of thousands of years: telling stories to explain ourselves to one another, and to make sense of our world.

Wednesday, 03. February 2021

Information Answers

Applying for, and being, a MyData Operator

I’m on a panel this afternoon at this Canadian Data Privacy Week event; the subject I’m due to discuss is as per the title above – […]

MyDigitalFootprint

What is the purpose of data? V2

We continually move towards better data-led decisions; however, we can easily ask the wrong question of our dataset. Without understanding "What is the purpose of data?" on which we are basing decisions and judgements, it is easy to get an answer that is not in the data. How can we understand if our direction, North Star or decision is a good one? Why am I interested in this? I am focusing on how
We continually move towards better data-led decisions; however, we can easily ask the wrong question of our dataset. Without understanding "What is the purpose of data?" on which we are basing decisions and judgements, it is easy to get an answer that is not in the data. How can we understand if our direction, North Star or decision is a good one? Why am I interested in this? I am focusing on how we improve governance and oversight in a data-led world. 

I wrote a lengthy article on Data is Data. It was a kickback at the analogies that data is oil, gold, labour or sunlight - it is not. Data is unique; it has unique characteristics. That article concluded that the word "data" is itself part of the problem, and that we should think of data as if we were discovering a new element with unique characteristics.

Data is a word, and it is part of the problem. Data doesn't have meaning or shape, and data will not have meaning unless we can give it context. As Theodora Lau eloquently put it: if her kiddo gets 10 points in a test today (data as a state), the number 10 has no meaning unless we say she scored 10 points out of 10 in the test today (data is information). And even then, we still need to explain the type of test (data is knowledge) and what to do next or how to improve (data is insights). Each of these is a "data" point, and we don't differentiate the use of the word "data" in these contexts.

Data's most fundamental representation is "state", the particular condition something is in at a specific time. I love Hugh's (@gapingvoid) representation below: information is knowing that there are different "states" (on/off); knowledge is finding patterns and connections; insight is knowing how one state compares with another; wisdom is the journey. We live in the hope that the data we have will have an impact.

For a while, the data community has rested on two key characteristics of data: non-rivalrous (which plays havoc with our current understanding of ownership) and non-fungible (which is true if you assume that data carries information). Whilst these are both accurate observations, they are not that good as universal characteristics.

Non-rivalrous. Economists call an item that can only be used by one person at a time "rivalrous." Money and capital are rivalrous. Data is non-rivalrous, as a single item of data can simultaneously fuel multiple algorithms, analytics, and applications. This is, however, not strictly true: numerous perfect copies of the data can be used simultaneously because the marginal cost of reproduction and sharing is zero.

Non-fungible. When you can substitute one item for another, they are said to be fungible. One sovereign bill can be replaced with another sovereign bill of the same value; one barrel of oil is the same as the next. So the thinking goes, data is non-fungible and cannot be substituted because it carries information. However, if your view is that data carries state (the particular condition that something is in at a specific time), then data is fungible. Higher-level forms of processed data (information, knowledge, insights) are increasingly non-fungible.

Money as a framework to explore the purpose of data  

Sovereign currency (fiat), money in this setting, has two essential characteristics: it is rivalrous and fungible. Without these foundational characteristics, money cannot fulfil its original purpose (it has many others now): a trusted medium of exchange. Money removes the former necessity of direct barter, where equal value had to be established and the two or more parties had to meet for an exchange. What is interesting is that there are alternatives to fiat which exploit other properties. Because of fraud, we have to have security features, and there is a race to build the most secure wall.


[Just as a side note - money is an abstraction and part of the rationale for a balance sheet was to try to connect the abstraction back to real things. Not sure that works any more]

Revising the matrix “what problem is to be solved?” 

Adding these other options of exchange onto the matrix gives us a different way to frame what problem each type of currency solves as an exchange mechanism. This is presented in the chart below. Sand and beans can be used, but they are messy tools compared to a sovereign currency. Crypto works and solves the problem, but without exchange into other currencies it has fundamental limits.

If we now add digital data and other aspects of our world onto the matrix, we get a different perspective. We all share gravity, sunsets and broadcast TV/radio on electromagnetic waves. However, only one atom can be used at a time, and an atom is not interchangeable (to get the same outcome). The point is that digital data does not sit in the same quadrant as sovereign currency and electrons, which are beautiful solutions based on being fungible and rivalrous.

In the broadest definition of data, which is "state", chemicals, atoms, gravity and electrons have state and therefore are also data. To be clear, we will now use "digital data" to define our focus, rather than all data.

These updates to the matrix highlight that, if digital data is non-rivalrous and non-fungible, it is very unclear what problem digital data is solving. We see this all the time in the digital data market: because we cannot agree on what "data" is, it is messy.

The question for us as a digital data community is: what are the axes (characteristics) that would put digital data in the top corner of the matrix, where it is a beautiful solution to a defined problem, given that digital data at its core is "knowing state"? I explored this question on a call with Scott David, and we ended up resting on "rights" and "attestation" as the two axes.

Rights in this context means that you have gained rights from the parties involved. What those rights are and how they were acquired is not the question; it is just that you have the rights you need to do what you need to do.

Attestation in this context is the evidence or proof of something. It means that you know what you have is true and that you can prove the state exists. How you do this is not the point; it is just that you know it is provable.

As we saw with the money example, data will never have these characteristics (rights and attestation) exclusively; it is just that when it has them, data is most purposeful. Without attestation, the data you have is compromised, and any conclusions you reach may not be true or real. We continually have to test both our assumptions and the provability of our digital data. Rights are different: rights are not correlated with data quality, but they may help resolve ownership issues. A business built without rights to the data it is using is not stable or sustainable. Whether and how those rights were obtained ethically are matters to be investigated. Interestingly, these characteristics (rights and attestation) would readily fit into existing risk and audit frameworks.

I have a specific focus on ESG, sustainability, data for decision making, and better data for sharing.  Given that most comparative ESG data is from public reports (creative commons or free of rights), it is essential to note there is a break in the attestation.  ESG data right now is in the least useful data bucket for decision making, but we are making critical investment decisions on this analysis data set. It is something that we have to address. 


In summary

If the purpose of data is "to share state", then the two essential characteristics data must have are rights and attestation. Further, as data becomes information (knowing state), knowledge (patterns of states), insight (context in states) and wisdom, these characteristics of rights and attestation matter even more. Making decisions on data when you don't know whether it is true, or whether you have the rights to use it, is a dangerous place to be.

As an aside, there are lots of technologies and processes for knowing whether the state is true (as in correct, not truth); whether the state sensing is working and to what level of accuracy; whether the state at both ends has the same representation (provenance/lineage); whether it is secure; whether we can gain information from it; and whether we can combine data sets and what the ontology is. But these are not fundamental characteristics; they are supportive, ensuring we have a vibrant ecosystem of digital data.

I am sure there are other labels for such a matrix, and I am interested in your views, thoughts and comments.

Monday, 01. February 2021

MyDigitalFootprint

Does data have a purpose?

We are continually moving towards better data-led decisions; however, without understanding "What is the purpose of data? / Does data have a purpose?", the question on which we are basing decisions and judgements, it is hard to understand if our north star (a good decision) is a good one. Why am I interested in this? Because I am focusing on how we do governance and oversight better in a data-led world.
We are continually moving towards better data-led decisions; however, without understanding "What is the purpose of data? / Does data have a purpose?", the question on which we are basing decisions and judgements, it is hard to understand if our north star (a good decision) is a good one. Why am I interested in this? Because I am focusing on how we do governance and oversight better in a data-led world. 

I wrote a lengthy article on Data is Data. It was a kickback at the analogies that data is oil, gold, labour or sunlight - it is not. Data is unique; it has unique characteristics. That article concluded that the word "data" is itself part of the problem, and that we should think of data as if we were discovering a new element with unique characteristics.

For a while, the data community has rested on two key characteristics of data: non-rivalrous (which plays havoc with our current understanding of ownership) and non-fungible (which is true if you assume that data carries information). Whilst these are both accurate observations, they are not universal.

Non-rivalrous. Economists call an item that can only be used by one person at a time "rivalrous." Money and capital are rivalrous. Data is non-rivalrous, as a single item of data can simultaneously fuel multiple algorithms, analytics, and applications. This is, however, not strictly true: it is that numerous perfect copies of data can be used simultaneously.

Non-fungible. When you can substitute one item for another, they are said to be fungible. One sovereign bill can be replaced with another sovereign bill of the same value; one barrel of oil is the same as the next. So the thinking goes, data is non-fungible and cannot be substituted because it carries information. However, if your view is that data carries state (the particular condition that something is in at a specific time), then data is fungible.

I love Hugh's work, and @gapingvoid nailed this. Data's most basic representation is "state", where it represents the particular condition something is in at a specific time. Information is knowing that there are different "states" (on/off). Knowledge is finding patterns and connections. Insight is knowing there is an exception to the current state. Wisdom is the journey. The point is that non-rivalrous and non-fungible are not good enough, as "data" is the mechanism for representing all of these properties in a digital world.


Money as a framework to explore the purpose of data  

Sovereign fiat currency, money in this setting, has two essential characteristics: it is rivalrous and fungible. Without these foundational characteristics, money cannot fulfil its purpose: a trusted medium of exchange. Money removes the former necessity of direct barter, where equal value had to be established and the two or more parties had to meet. What is interesting is that there are alternatives to fiat which exploit other properties. Because of fraud, we have to have security features, and there is a race to build the most secure wall.


[Just as a side note - money is an abstraction and part of the rationale for a balance sheet was to try to connect the abstraction back to real things. Not sure that works any more]

Revising the matrix but thinking about what problem is to be solved. 

We now add data and other ideas onto the matrix, as a different way to frame data.

These updates to the matrix highlight that, if data is non-rivalrous and non-fungible, it is very unclear what problem data is solving. Indeed, we see this all the time in the data market: because we cannot agree on what data is, it is messy.

The question for us as a data community is: what are the axes (characteristics) that would put data in the top corner of the matrix, where it is a beautiful solution to a defined problem, given that data at its core is "share state"? We explored this question on a call with Scott David and proposed rights and attestation as the two axes.

Rights in this context means that you have gained rights from the parties involved. What those rights are and how they were acquired is not the question; it is just that you have the rights you need to do what you need to do.

Attestation in this context is the evidence or proof of something.  It is that you know what you have is true and that you can prove the state exists.

As we saw with the money example, data will never have these characteristics exclusively; it is just that when it has them, data is most purposeful. Without attestation, the data you have is compromised, and any conclusions you reach may not be true or real. We continually have to test both our assumptions and the provability of the data. Rights are different, as rights are not correlated with data quality. A business built without rights to the data it is using is not stable or sustainable. Whether and how those rights were obtained ethically are issues to be investigated. Interestingly, these characteristics would readily fit into a risk and audit framework today.

I have a specific focus on ESG, sustainability and better data for decision making, and better data for sharing. Most comparative ESG data comes from public reports (creative commons or free of rights), but more importantly, there is a break in the attestation. ESG data right now is in the least useful data bucket for decision making, yet we are making critical investment decisions on this analysis data set. It is something that we have to address.


In summary

If the purpose of data is "to share state", then the two essential characteristics data must have are rights and attestation. Further, as data becomes information (knowing state), knowledge (patterns of states), insight (issues in states) and wisdom, these characteristics of rights and attestation matter even more. Making decisions on data when you don't know whether it is true, or whether you have the rights to use it, is a dangerous place to be.

As an aside, there is lots of technology and many processes for knowing whether the state is true (as in correct, not truth); whether the state sensing is working and to what level of accuracy; whether the state at both ends has the same representation (provenance/lineage); whether it is secure; whether we can gain information from it; and whether we can combine data sets and what the ontology is. But these are not so fundamental; they are supportive and make the ecosystem of data work.

We are sure there are other labels for such a matrix and are interested in your views, thoughts and comments.



Saturday, 30. January 2021

Identity Woman

Internet of People is doing false advertising

I just learned about the internet of people project. It seems cool…I need to dig in a bit more…but already there is a huge red flag/disconnect for me. These are the guys who are signing off on this post they put a picture of themselves on zoom. These are the women (many of them of […] The post Internet of People is doing false advertising appeared first on Identity Woman.

I just learned about the Internet of People project. It seems cool…I need to dig in a bit more…but already there is a huge red flag/disconnect for me. These are the guys who are signing off on this post; they put a picture of themselves on Zoom. These are the women (many of them of […]

The post Internet of People is doing false advertising appeared first on Identity Woman.


Mike Jones: self-issued

Be part of the Spring 2021 IIW!

Are you registered for the Internet Identity Workshop (IIW) yet? As I wrote a decade, a year, and a day ago, "It's where Internet identity work gets done." That remains as true now as it was then! As a personal testimonial, I wrote this to the IIW organizers after the 2020 IIWs: "Thanks again for […]

Are you registered for the Internet Identity Workshop (IIW) yet? As I wrote a decade, a year, and a day ago, "It's where Internet identity work gets done." That remains as true now as it was then!

As a personal testimonial, I wrote this to the IIW organizers after the 2020 IIWs:

“Thanks again for running the most engaging and successful virtual meetings of the year (by far!). While I’ve come to dread most of the large virtual meetings, IIW online remains true to the spirit of the last 15 years of useful workshops. Yes, I miss talking to Rich and the attendees in the coffee line and having impromptu discussions throughout, and we’ll get back to that in time, but the sessions remain useful and engaging.”

I’m also proud that Microsoft is continuing its 15-year tradition of sponsoring the workshop. Rather than buying dinner for the attendees (the conversations at the dinners were always fun!), we’re sponsoring scholarships for those that might otherwise not be able to attend, fostering an even more interesting and diverse set of viewpoints at the workshop.

I hope to see you there!

Thursday, 28. January 2021

Information Answers

BLTS > TBSL, the order matters

OK, yes the post heading is a bit obscure and for a specific audience; so let me explain. Over in the MyData.org community (and other such […]

Werdmüller on Medium

Your 401(k) hates you

How a retirement vehicle from the seventies is crippling America Continue reading on Medium »

How a retirement vehicle from the seventies is crippling America

Continue reading on Medium »


8 simple ways to get the most out of today

It’s a brand new day! Time to seize it. Continue reading on Medium »

It’s a brand new day! Time to seize it.

Continue reading on Medium »

Tuesday, 26. January 2021

Phil Windley's Technometria

Generative Identity

Summary: Generative identity allows us to live digital lives with dignity and effectiveness, contemplates and addresses the problems of social inclusion, and supports economic equality for everyone around the globe. This article describes the implementation of self-sovereign identity through protocol-mediated credential exchange on the self-sovereign internet, examines its properties, and argues

Summary: Generative identity allows us to live digital lives with dignity and effectiveness, contemplates and addresses the problems of social inclusion, and supports economic equality for everyone around the globe. This article describes the implementation of self-sovereign identity through protocol-mediated credential exchange on the self-sovereign internet, examines its properties, and argues for its generative nature from those properties.

The Generative Self-Sovereign Internet explored the generative properties of the self-sovereign internet, a secure overlay network created by DID connections. The generative nature of the self-sovereign internet is underpinned by the same kind of properties that make the internet what it is, promising a more secure and private, albeit no less useful, internet for tomorrow.

In this article, I explore the generativity of self-sovereign identity—specifically the exchange of verifiable credentials. One of the key features of the self-sovereign internet is that it is protocological—the messaging layer supports the implementation of protocol-mediated interchanges on top of it. This extensibility underpins its generativity. Two of the most important protocols defined on top of the self-sovereign internet support the exchange of verifiable credentials, as we'll see below. Together, these protocols work on top of the self-sovereign internet to give rise to self-sovereign identity through a global identity metasystem.

Verifiable Credentials

While the control of self-certifying identifiers in the form of DIDs is the basis for the autonomy of the self-sovereign internet, that autonomy is made effective through the exchange of verifiable credentials. Using verifiable credentials, an autonomous actor on the self-sovereign internet can prove attributes to others in a way they can trust. Figure 1 shows the SSI stack. The self-sovereign internet is labeled "Layer Two" in this figure. Credential exchange happens on top of that in Layer Three.

Figure 1: SSI Stack (click to enlarge)

Figure 2 shows how credentials are exchanged. In this diagram, Alice has DID-based relationships with Bob, Carol, Attestor.org and Certiphi.com. Alice has received a credential from Attestor.org. The credential contains attributes that Attestor.org is willing to attest belong to Alice. For example, Attestor might be her employer attesting that she is an employee. Attestor likely gave her a credential for their own purposes. Maybe Alice uses it for passwordless login at company web sites and services and to purchase meals at the company cafeteria. She might also use it at partner websites (like the benefits provider) to provide shared authentication without federation (and its associated infrastructure). Attestor is acting as a credential issuer. We call Alice a credential holder in this ceremony. The company and partner websites are credential verifiers. Credential issuance is a protocol that operates on top of the self-sovereign internet.

Figure 2: Credential Exchange (click to enlarge)

Even though Attestor.org issued the credential to Alice for its own purposes, she holds it in her wallet and can use it at other places besides Attestor. For example, suppose she is applying for a loan at her bank, Certiphi, which wants proof that she's employed and has a certain salary. Alice could use the credential from Attestor to prove to Certiphi that she's employed and that her salary exceeds a given threshold [1]. Certiphi is also acting as a credential verifier. Credential proof and verification is also a protocol that operates on top of the self-sovereign internet. As shown in Figure 2, individuals can also issue and verify credentials.
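To make this concrete, here is a minimal sketch of what Attestor's credential might look like, written in the spirit of the W3C Verifiable Credentials data model; the DIDs, attribute names, and values are invented for illustration, and a real credential would also carry a proof section (the issuer's signature) that verifiers check.

```python
# Illustrative only: the DIDs, attributes, and values below are made up.
employment_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:example:attestor-org",      # Attestor.org's public DID
    "issuanceDate": "2021-01-26T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",             # the DID Alice uses with Attestor
        "employer": "Attestor.org",
        "role": "Engineer",
        "salary": 85000,
    },
    # A real credential also includes a "proof" block signed by the issuer,
    # which verifiers validate against the credential definition on the ledger.
}
```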

We say Alice "proved" attributes to Certiphi from her credentials because the verification protocol uses zero-knowledge proofs to support the minimal disclosure of data. Thus the credential that Alice holds from Attestor might contain a rich array of information, but Alice need only disclose the information that Certiphi needs for her loan. In addition, the proof process ensures that Alice can't be correlated through the DIDs she has shared with others. Attribute data isn't tied to DIDs or the keys that are currently assigned to the DID. Rather than attributes being bound to identifiers and keys, Alice's identifiers and keys empower the attributes.
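To illustrate the minimal-disclosure point, here is a sketch of the kind of proof request Certiphi might send, modeled loosely on the Hyperledger Indy/Aries proof-request format; the referent names, credential definition id, and salary threshold are invented. The predicate lets Alice prove "salary >= 50000" without revealing the salary itself.

```python
# Illustrative proof request, loosely following the Hyperledger Indy format.
ATTESTOR_CRED_DEF_ID = "attestor-cred-def-id"   # placeholder, not a real id

proof_request = {
    "name": "loan-application-check",
    "version": "1.0",
    "nonce": "857362876423",
    "requested_attributes": {
        "employer_attr": {
            "name": "employer",
            "restrictions": [{"cred_def_id": ATTESTOR_CRED_DEF_ID}],
        },
    },
    "requested_predicates": {
        "salary_pred": {
            "name": "salary",
            "p_type": ">=",      # zero-knowledge predicate: proves the bound only
            "p_value": 50000,
            "restrictions": [{"cred_def_id": ATTESTOR_CRED_DEF_ID}],
        },
    },
}
```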

Certiphi can validate important properties of the credential. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger (Layer One in Figure 1), retrieving Attestor's public DID from the credential definition, and resolving it to get Attestor.org's public key to check the credential's signature. At the same time, the presentation protocol allows Certiphi to verify that the credential is being presented by the person it was issued to and that it hasn't been revoked (using a revocation registry stored in Layer 1). Certiphi does not need to contact Attestor or have any prior business relationship to verify these properties.
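The checks described in this paragraph can be sketched as a short routine; every helper used here (fetch_cred_def, resolve_did, and so on) is a hypothetical placeholder for work a real agent or SDK performs, not an actual library API.

```python
def verify_presentation(presentation, ledger):
    """Sketch of Certiphi's verification steps; all helpers are hypothetical."""
    # 1. Read the credential definition anchored on the ledger (Layer 1).
    cred_def = ledger.fetch_cred_def(presentation.cred_def_id)

    # 2. Resolve the issuer's public DID to obtain its verification key.
    issuer_key = ledger.resolve_did(cred_def.issuer_did).public_key

    # 3. Check the proof over the presented attributes against that key.
    if not presentation.proof_is_valid(issuer_key):
        return False

    # 4. Confirm the presentation is bound to the holder it was issued to.
    if not presentation.holder_binding_is_valid():
        return False

    # 5. Check the revocation registry (also anchored in Layer 1).
    return not ledger.is_revoked(presentation.revocation_id)
```

Note that nothing in this flow contacts Attestor directly; the ledger supplies everything Certiphi needs.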

The global identity metasystem, shown as the yellow box in Figure 1, comprises the ledger at Layer 1, the self-sovereign internet at Layer 2, and the credential exchange protocols that operate on top of it. Together, these provide the necessary features and characteristics to support self-sovereign identity.

Properties of Credential Exchange

Verifiable credentials have five important characteristics that mirror how credentials work in the offline world:

- Credentials are decentralized and contextual. There is no central authority for all credentials. Every party can be an issuer, a holder, or a verifier. Verifiable credentials can be adapted to any country, any industry, any community, or any set of trust relationships.
- Credential issuers decide what data is contained in their credentials. Anyone can write credential schemas to the ledger. Anyone can create a credential definition based on any of these schemas.
- Verifiers make their own decisions about which credentials to accept—there's no central authority who determines what credentials are important or which are used for a given purpose.
- Verifiers do not need to contact issuers to perform verification—that's what the ledger is for. Credential verifiers don't need to have any technical, contractual, or commercial relationship with credential issuers in order to determine the credentials' fidelity.
- Credential holders are free to choose which credentials to carry and what information to disclose. People and organizations are in control of the credentials they hold and determine what to share with whom.

These characteristics underlie several important properties that support the generativity of credential exchange. Here are the most important:

Private— Privacy by Design is baked deep into the architecture of the identity metasystem as reflected by several fundamental architectural choices:

- Peer DIDs are pairwise unique and pseudonymous by default to prevent correlation.
- Personal data is never written to the ledgers at Layer 1 in Figure 1—not even in encrypted or hashed form. Instead, all private data is exchanged over peer-to-peer encrypted connections between off-ledger agents at Layer 2. The ledger is used for anchoring rather than publishing encrypted data.
- Credential exchange has built-in support for zero-knowledge proofs (ZKP) to avoid unnecessary disclosure of identity attributes.
- As we saw earlier, verifiers don’t need to contact the issuer to verify a credential. Consequently, the issuer doesn’t know when or where the credential is used.

Decentralized—decentralization follows directly from the fact that no one owns the infrastructure that supports credential exchange. This is the primary criterion for judging the degree of decentralization in a system. Rather, the infrastructure, like that of the internet, is operated by many organizations and people bound by protocol.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Participants in credential exchange relate to each other as peers and are autonomous.

Interoperable—verifiable credentials have a standard format, readily accessible schemas, and standard protocols for issuance, proving (presenting), and verification. Participants can interact with anyone else so long as they use tools that follow the standards and protocols. Credential exchange isn't a single, centralized system from a single vendor with limited pieces and parts. Rather, interoperability relies on interchangeable parts, built and operated by various parties. Interoperability supports substitutability, a key factor in autonomy and flexibility.

Substitutable—the tools for issuing, holding, proving, and verifying are available from multiple vendors and follow well-documented, open standards. Because these tools are interoperable, issuers, holders, and verifiers can choose software, hardware, and services without fear of being locked into a proprietary tool. Moreover, because many of the attributes the holder needs to prove (e.g. email address or even employer) will be available on multiple credentials, the holder can choose between credentials. Usable substitutes provide choice and freedom.

Flexible—closely related to substitutability, flexibility allows people to select appropriate service providers and features. No single system can anticipate all the scenarios that will be required for billions of individuals to live their own effective lives. The characteristics of credential exchange allow for context-specific scenarios.

Reliable and Censorship Resistant—people, businesses, and others must be able to exchange credentials without worrying that the infrastructure will go down, stop working, go up in price, or get taken over by someone who would do them harm. Substitutability of tools and credentials combined with autonomy makes the system resistant to censorship. There is no hidden third party or intermediary in Figure 2. Credentials are exchanged peer-to-peer.

Non-proprietary and Open—no one has the power to change how credentials are exchanged by fiat. Furthermore, the underlying infrastructure is less likely to go out of business and stop operation because its maintenance and operation are decentralized instead of being in the hands of a single organization. The identity metasystem has the same three virtues of the Internet that Doc Searls and Dave Weinberger enumerated as NEA: No one owns it, Everyone can use it, and Anyone can improve it. The protocols and code that enable the metasystem are open source and available for review and improvement.

Agentic—people can act as autonomous agents, under their self-sovereign authority. The most vital value proposition of self-sovereign identity is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy requires that participants interact as peers in the system, which the architecture of the metasystem supports.

Inclusive—inclusivity is more than being open and permissionless. Inclusivity requires design that ensures people are not left behind. For example, some people cannot act for themselves for legal (e.g. minors) or other (e.g. refugees) reasons. Support for digital guardianship ensures that those who cannot act for themselves can still participate.

Universal—successful protocols eat other protocols until only one survives. Credential exchange, built on the self-sovereign internet and based on protocol, has network effects that drive interoperability leading to universality. This doesn't mean that the metasystem will be mandated. Rather, one protocol will mediate all interaction because everyone in the ecosystem will conform to it out of self-interest.

The Generativity of Credential Exchange

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of self-sovereign identity.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features.

Traditional identity systems have been anemic, supporting simple relationships focused on authentication and a few basic attributes their administrators need. They can't easily be leveraged by anyone but their owner. Federation through SAML or OpenID Connect has allowed the authentication functionality to be leveraged in a standard way, but authentication is just a small portion of the overall utility of a digital relationship.

One example of the capacity of credential exchange for leverage is to consider that it could be the foundation for a system that disintermediates platform companies like Uber, AirBnB, and the food delivery platforms. Platform companies build proprietary trust frameworks to intermediate exchanges between parties and charge exorbitant rents for what ought to be a natural interaction among peers. Credential exchange can open these trust frameworks up to create open marketplaces for services.

The next section on Adaptability lists a number of uses for credentials. The identity metasystem supports all these use cases with minimal development work on the part of issuers, verifiers, and holders. And because the underlying system is interoperable, an investment in the tools necessary to solve one identity problem with credentials can be leveraged by many others without new investment. The cost to define a credential is very low (often less than $100) and once the definition is in place, there is no cost to issue credentials against it. A small investment can allow an issuer to issue millions of credentials of different types for different use cases.

Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to a technology's capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful for activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Identity systems based on credential exchange provide people with the means of operationalizing their online relationships by giving them the tools for acting online as peers and managing the relationships they enter into. Credential exchange allows for ad hoc interactions that were not, or could not be, imagined a priori.

The flexibility of credentials ensures they can be used in a variety of situations. Every form or official piece of paper is a potential credential. Here are a few examples of common credentials:

- Employee badges
- Drivers license
- Passport
- Wire authorizations
- Credit cards
- Business registration
- Business licenses
- College transcripts
- Professional licensing (government and private)

But even more important, every bundle of data transmitted in a workflow is a potential credential. Since credentials are just trustworthy containers for data, there are many more use cases that may not be typically thought of as credentials:

- Invoices and receipts
- Purchase orders
- Airline or train ticket
- Boarding pass
- Certificate of authenticity (e.g. for art, other valuables)
- Gym (or any) membership card
- Movie (or any) tickets
- Insurance cards
- Insurance claims
- Titles (e.g. property, vehicle, etc.)
- Certificate of provenance (e.g. non-GMO, ethically sourced, etc.)
- Prescriptions
- Fractional ownership certificates for high value assets
- CO2 rights and carbon credit transfers
- Contracts

Since even a small business might issue receipts or invoices, have customers who use the company website, or use employee credentials, most businesses will define at least one credential and many will need many more. There are potentially tens of millions of different credential types. Many will use common schemas but each credential from a different issuer constitutes a different identity credential for a different context.

With the ongoing work in Hyperledger Aries, these use cases expand even further. With a “redeemable credentials” feature, holders can prove possession of a credential in a manner that is double-spend proof without a ledger. This works for all kinds of redemption use cases like clocking back in at the end of a shift, voting in an election, posting an online review, or redeeming a coupon.

The information we need in any given relationship varies widely with context. Credential exchange protocols must be flexible enough to support many different situations. For example, in You've Had an Automobile Accident, I describe a use case that requires the kinds of ad hoc, messy, and unpredictable interactions that happen all the time in the physical world. Credential exchange readily adapts to these context-dependent, ad hoc situations.

Ease of Mastery

Ease of mastery refers to the capacity of a technology to be easily and broadly adapted and adopted. One of the core features of credential exchange on the identity metasystem is that it supports the myriad use cases described above without requiring new applications or user experiences for each one. The digital wallet that is at the heart of credential exchange activities on the self-sovereign internet supports two primary artifacts and the user experiences to manage them: connections and credentials. Like the web browser, even though multiple vendors provide digital wallets, the underlying protocol informs a common user experience.

A consistent user experience doesn’t mean a single user interface. Rather the focus is on the experience. As an example, consider an automobile. My grandfather, who died in 1955, could get in a modern car and, with only a little instruction, successfully drive it. Consistent user experiences let people know what to expect so they can intuitively understand how to interact in any given situation regardless of context.

Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. Because of its openness, standardization, and support by multiple vendors, credential exchange is easily available to anyone with access to a computer or phone with an internet connection. But we can't limit its use to individuals who have digital access and legal capacity. Ensuring that technical and legal architectures for credential exchange support guardianship and use on borrowed hardware can provide accessibility to almost everyone in the world.

The Sovrin Foundation's Guardianship Working working group has put significant effort into understanding the technical underpinnings (e.g., guardianship and delegation credentials), legal foundations (e.g., guardianship contracts), and business drivers (e.g., economic models for guardianship). They have produced an excellent whitepaper on guardianship that "examines why digital guardianship is a core principle for Sovrin and other SSI architectures, and how it works from inception to termination through looking at real-world use cases and the stories of two fictional dependents, Mya and Jamie."

Self-Sovereign Identity and Generativity

In What is SSI?, I made the claim that SSI requires decentralized identifiers, credential exchange, and autonomy for participants. Dick Hardt pushed back on that a bit and asked me whether decentralized identifiers were really necessary. We had several fun discussions on that topic.

In that article, I unfortunately used decentralized identifiers and verifiable credentials as placeholders for their properties. Once I started looking at properties, I realized that generative identity can't be built on an administrative identity system. Self-sovereign identity is generative not only because of the credential exchange protocols but also because of the properties of the self-sovereign internet upon which those protocols are defined and operate. Without the self-sovereign internet, enabled through DIDComm, you might implement something that works as SSI, but it won't provide the leverage and adaptability necessary to create a generative ecosystem of uses with the network effects needed to propel it to ubiquity.

Our past approach to digital identity has put us in a position where people's privacy and security are threatened by the administrative identity architecture it imposes. Moreover, limiting its scope to authentication and a few unauthenticated attributes, repeated across thousands of websites with little interoperability, has created confusion, frustration, and needless expense. None of the identity systems in common use today offers support for the same kind of ad hoc attribute sharing that happens every day in the physical world. The result has been anything but generative. Entities who rely on attributes from several parties must perform integrations with all of them. This is slow, complex, and costly, so it typically happens only for high-value applications.

An identity metasystem that supports protocol-mediated credential exchange running on top of the self-sovereign internet solves these problems and promises generative identity for everyone. By starting with people and their innate autonomy, generative identity supports online activities that are life-like and natural. Generative identity allows us to live digital lives with dignity and effectiveness, contemplates and addresses the problems of social inclusion, and supports economic access for people around the globe.

Notes

[1] For Alice to prove things about her salary, Attestor would have to include that in the credential they issue to Alice.

Photo Credit: Generative Art Ornamental Sunflower from dp792 (Pixabay)

Tags: ssi identity generative credentials self-sovereign+internet


Doc Searls Weblog

Just in case you feel safe with Twitter

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it: WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million&nb

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won't find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has "SmartEngagement Technology" that "combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support." Since Public Citizen is a nonprofit, I suppose it's getting some "smart engagement" of some kind with these links. PrivacyBadger tells me Salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.
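If you are curious, one way to peek behind such a link without volunteering a tracked click in a browser is to ask the redirector where it points and stop there. Here is a minimal sketch using Python's requests library, applied to the first Salsa Labs URL above; whether the redirector answers a plain request this way is an assumption, not something I have verified.

```python
import requests

tracking_link = (
    "https://default.salsalabs.org/"
    "T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4"
)

# Fetch the link but refuse to follow redirects, so we only see where it
# wants to send us; no landing-page scripts run because nothing is rendered.
resp = requests.get(tracking_link, allow_redirects=False, timeout=10)
print(resp.status_code)              # typically 301/302 for a redirector
print(resp.headers.get("Location"))  # the destination the click would reach
```

Of course, the request itself may still be logged by Salsa Labs; the point is only that you can see the destination before deciding to go there.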

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or forbid the company from leaking out personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since they didn't fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don't have any. Getting promises from others not to stare at your naked self isn't the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by IEEE7012, a working group on machine-readable personal privacy terms.
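Purely as an illustration (this is my own sketch, not the Customer Commons term format or anything from IEEE 7012), a machine-readable proffer of the #NoStalking term and a site-side acceptance check might look something like this:

```python
# Hypothetical encoding of the P2B1 #NoStalking term; the field names are
# invented and do not reproduce any actual Customer Commons or IEEE 7012 schema.
no_stalking_term = {
    "term_id": "P2B1",
    "label": "#NoStalking",
    "statement": "Just give me ads not based on tracking me",
    "proffered_by": "visitor",
}

def site_accepts(term: dict, site_policy: dict) -> bool:
    """A site can accept the term only if it serves ads without tracking."""
    return term["term_id"] == "P2B1" and not site_policy.get("tracking_based_ads", True)

# Example: a site whose ads are contextual rather than tracking-based.
print(site_accepts(no_stalking_term, {"tracking_based_ads": False}))  # True
```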

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 

Monday, 25. January 2021

A Distributed Economy

Comment from Ockam Hello Moved Here

Comment: "I am not part of Ockam, but I've known the folks behind this for awhile. There has been a lot of hair pulling to get to this: https://www.w3.org/TR/did-core/ . This was back in the day. http://manu.sporny.org/2014/credential-based-login/ . A big issue that they still seem to have is data mapping. It happens here: https://www.youtube.com/watch?v=2EP35HO2HVQ&feature=youtu.be [What is

Comment:
"I am not part of Ockam, but I've known the folks behind this for awhile. There has been a lot of hair pulling to get to this:
https://www.w3.org/TR/did-core/ . This was back in the day. http://manu.sporny.org/2014/credential-based-login/ .

A big issue that they still seem to have is data mapping. It happens here: https://www.youtube.com/watch?v=2EP35HO2HVQ&feature=youtu.be [What is a Personal Knowledge Graph- with Ruben Verborgh - The Graph Show]
 and even in the DID space where they rant about interoperability. There is crossover between the SoLiD community and DIDs. They talk about even bigger systems, beyond PDS. In my humble opinion, I believe that there is a blind spot amongst programmers about the wonders of applied category theory. I'm still trying to grasp it myself, but you see it here: https://arxiv.org/abs/1909.04881 [Algebraic Property Graphs], and here: categoricaldata.net/ , and here https://web-cats.gitlab.io/ --> https://arxiv.org/abs/1706.00526 [Knowledge Representation in Bicategories of Relations], and here https://www.youtube.com/watch?v=vnbDmQDvxsE&t=3m41s [ACT 2020 industry showcase]. My feeling is there is a white X on the ground that says dig here. It's a reason to learn the maths."

Related to this for Bicatagories of Relations:
Description Logics? https://www.csee.umbc.edu/courses/graduate/691/fall19/07/papers/DescriptionLogicHandbook.pdf



Werdmüller on Medium

The Whole-Employee Professional Development Plan

How I support an employee’s goals beyond their tenure at the company Continue reading on The Startup »

How I support an employee’s goals beyond their tenure at the company

Continue reading on The Startup »


Jon Udell

The Image of Postgres

At the 2015 Postgres conference, the great IT philosopher Robert r0ml Lefkowitz delivered a talk entitled The Image of Postgres. Here’s the blurb. How do you think about what a database is? If you think of a database as only a place to store your data, then perhaps it does not really matter what the … Continue reading The Image of Postgres

At the 2015 Postgres conference, the great IT philosopher Robert r0ml Lefkowitz delivered a talk entitled The Image of Postgres. Here’s the blurb.

How do you think about what a database is? If you think of a database as only a place to store your data, then perhaps it does not really matter what the internals of that database are; all you really need is a home for your data to be managed, nothing more.

If you think of a database as a place where you develop applications, then your expectations of your database software change. No longer do you only need data management capabilities, but you require processing functions, the ability to load in additional libraries, interface with other databases, and perhaps even additional language support.

If your database is just for storage, there are plenty of options. If your database is your development framework, you need Postgres.

Why? Well, let’s get philosophical.

For over a year, I’ve been using Postgres as a development framework. In addition to the core Postgres server that stores all the Hypothesis user, group, and annotation data, there’s now also a separate Postgres server that provides an interpretive layer on top of the raw data. It synthesizes and caches product- and business-relevant views, using a combination of PL/pgSQL and PL/Python. Data and business logic share a common environment. Although I didn’t make the connection until I watched r0ml’s talk, this setup harkens back to the 1980s when Smalltalk (and Lisp, and APL) were programming environments with built-in persistence. The “image” in r0ml’s title refers to the Smalltalk image, i.e. the contents of the Smalltalk virtual machine. It may also connote reputation, in the sense that our image of Postgres isn’t that of a Smalltalk-like environment, though r0ml thinks it should be, and my experience so far leads me to agree.
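To make that concrete, here is a minimal sketch of what pushing a business-relevant view into the database can look like. This is not the actual Hypothesis code; the annotation table and its columns are hypothetical, and the same function could equally be written in PL/pgSQL or PL/Python.

-- Hypothetical schema: a business rule ("annotation activity per group,
-- last N days") lives in the database instead of an application server.
create or replace function annotations_per_group(days integer)
returns table (grp text, annotation_count bigint)
language sql stable
as $$
  select a.group_id, count(*)
  from annotation a
  where a.created > now() - (days * interval '1 day')
  group by a.group_id
  order by count(*) desc;
$$;

-- Any client (psql, a dashboard, a report) asks the database directly:
-- select * from annotations_per_group(30);

The point is only that the query logic and its name live next to the data; callers get the business answer rather than raw rows to reinterpret.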

I started writing a book to document what I’ve learned and done with this idea. It’s been a struggle to find motivation because, well, being the patron saint of trailing-edge technologies is often lonely and unrewarding. A book on this particular topic is likely to appeal to very few people. Stored procedures? So last century! Yes, Python provides a modern hook, but I can almost guarantee that one of the comments on my first book — “has a vision, but too weird” — would come back around.

I’m tempted not to bother. Maybe I should just focus on completing and polishing the things the book would describe.

And yet, it’s hard to let go. This isn’t just a compelling idea, it’s delivering excellent results. I rewatched r0ml’s talk today and got fired up again. Does it resonate for you? Would you like to see the ideas developed? If you watch the talk, please let me know.

Here are some excerpts to pique your interest.

On databases vs file systems:

I submit that the difference between the database and a file system is that database is a platform for enforcing your business rules.

On ontology:

client: The business guys are always upset because they want to know how many customers we have and we can’t tell them.

r0ml: That doesn’t sound like a very hard problem. SELECT * from the customer table, right?

client: No you don’t understand the problem.

r0ml: OK, what’s the problem?

client: It depends what you mean by customer because if you’re selling cell phone insurance, is the customer the person who has the cell phone account? What if they have two handsets and they’re both insured? What if it’s a family account and there are kids on the plan, do they count as customers? What if it’s a business account and you have 1000 people covered but only 700 using?

r0ml: How many customers you have, that’s a business rule.

So figuring out what your schema is, and figuring out how you organize the stuff and what do you do in the database, that’s all part of enforcing your business rules.

You have to decide what these things mean.

It’s an ontological problem.

You have to classify your knowledge and then enforce your business rules.

On n-tier architecture:

Let us think about the typical web application architecture. This architecture is called the three-tier architecture because it has four tiers. You have your browser, your web server, the thing that runs Python or PHP or JavaScript or Ruby or Java code, and then the database. And that’s always how you do it. And why do you do it that way? Well because that’s how everybody does it.

On Smalltalk and friends:

This is the BYTE magazine cover from August of 1981. In the 70s and the 80s, programming languages had this sort of unique perspective that’s completely lost to history. The way it worked: a programming environment was a virtual machine image, it was a complete copy of your entire virtual machine memory and that was called the image. And then you loaded that up and it had all your functions and your data in it, and then you ran that for a while until you were sort of done and then you saved it out. And this wasn’t just Smalltalk, Lisp worked that way, APL worked that way, it was kind of like Docker only it wasn’t a separate thing because everything worked that way and so you didn’t worry very much about persistence because it was implied. If you had a programming environment it saved everything that you were doing in the programming environment, you didn’t have to separate that part out. A programming environment was a place where you kept all your data and business logic forever.

So then Postgres is kind of like Smalltalk only different.

What’s the difference? Well we took the UI out of Smalltalk and put it in the browser. The rest of it is the same, so really Postgres is an application delivery platform, just like we had back in the 80s.

Sunday, 24. January 2021

Werdmüller on Medium

Here’s what I earned from my tech career


A history of not quite making bank

Continue reading on Medium »


reb00ted

In praise of incompetence


Shortly after the 2016 election, I pulled out a book written by my grandfather, chronicling the history of Bachhagel, the village in southern Germany where he grew up.

I re-read the chapters describing how the Nazis, in short order, took over life there in 1933. His eye-witness account describes in fascinating, and horrifying detail, how quickly the established order and century-old traditions were hollowed out and then overrun.

Bachhagel at the time was a tiny place, probably less than 1000 people, out in the countryside, of no political or economic importance. I could have understood how the Nazis would concentrate on the major population and economic centers to crush the opposition, but Bachhagel certainly was as far away from that as possible.

Nevertheless it just took a few months, after which the established order had been swept out and the thugs were fully in charge, day to day, from school to church to public events, and their entire worldview was the only thing that mattered.

With Joe Biden in the office this week, it seems we have turned a chapter. And looking back to the 2016 election day, I realize that although the past four years were bad, people died, children got separated, and many other outrages, we have been lucky. In 2016, I had been expecting worse, and possibly much worse.

Why didn’t it turn out as bad as I had feared? It’s not that the defenders of the republic did a particularly good job. Instead, the would-be usurpers just sucked at getting anything done, including actually using the power in their hands. If it had been the original Nazis, the consequences would have been so much worse.

I vastly prefer better defenses, however, than being lucky with having an incompetent opponent. In computer security terms, Trump was a Zero Day Vulnerability of the constitutional system of the US – a successful attack vector that previously had not been known.

Unfortunately, people still aren’t taking this attack vector as seriously as they should, otherwise we’d have specific legal and practical fix proposals all over the news, which we don’t. Which means the vulnerability remains, and our primary defense will remain the same: hoping that the attacker is incompetent. As long as we don’t fix the system, the next attacker is going to try a similar route and they may very well be more capable. In which case we’d really be in trouble.

So: I raise my glass to incompetence. Next time, may we get a similar bunch of incompetents. Or actually get our act together and make sure there won’t be a next time.

Friday, 22. January 2021

Rebecca Rachmany

Group Currency: What if you could only transact as a community?

Sufficiency Currency: What Communities Want

Starting out with some assumptions, I returned from 11 weeks of travel including visits to 7 intentional communities (ecovillages) with a more solidified idea of what the “sufficiency currency” might look like. Before I go into that, it’s useful for me to distinguish the major differences between the Sufficiency Currency project and other projects.

By the way, we aren’t even sure that “Currency” is the right name for the project, and you can check out this blog for a discussion of different names for the project.

Sufficiency Currency Inquiry

The Sufficiency Currency project is looking at non-monetary currency solutions and the two major inquiries are:

- How can we create “group” measures and currencies? Production is a group activity, but money is an individual measure. What if we could only transact as a group or community? What type of communication would represent the complexity of interactions among communities?
- If the purpose of an economy is to provide people’s basic survival needs (food, shelter, energy, health), what would we measure such that we can support everyone and increase the capacity of a society to grow to support more people?


Goals and Hypothesis

We assert that market economies are not appropriate for support systems, and that we should use a system of pooling for essential services. The idea of pooling asserts that when there is a visible shared pool, people don’t let other people starve, and that if there isn’t enough to go around, the group will be inclined towards group problem-solving.

The fundamental goals of the project are:

- Create a form of economy that looks directly at the sustainability of life in the economy, rather than a proxy measure (money).
- Create local self-sufficiency for communities, and in particular, encourage local regenerative and healthy forms of food and energy production.
- Specifically for the regenerative/intentional/ecovillage/permaculture movements, support three outcomes:
  - Make it easy to transact within the network as well as with bodies outside of the network.
  - Make it easy to expand the movement and better share the resources coming into the network.
  - For people who want to join or create regenerative communities, make it easier to do so through this unified economic structure.

The initial hypothesis for the Sufficiency Currency is that there are three important measures that a group of communities would want to look at in order to meet the goals above.

Market Research: How Does It Really Look?

In September-November of 2020, I traveled to a number of Ecovillages in Italy, Slovenia and Spain, to discuss my vision and understand what they really need in terms of their interactions among themselves and with their neighboring communities who may or may not be ecovillages.

The first thing I realized was that there are quite a few community currency and cryptocurrency projects trying to push their ideas into ecovillages. There are some isolated examples of successful community currency attempts within the intentional community movement, and dozens, if not hundreds of failure stories. The communities themselves are quite aware that they don’t need an alternative monetary currency to function. They are also aware that using monetary and trade based currency doesn’t reflect their values. Finally, these people are busy and they don’t have a lot of spare time for currency experiments. Usually there are one or two people who are responsible for anything that would require a computer. Others have computers and phones, but they are highly disinterested in activities that would require any serious amount of time in front of a screen.

The most gratifying find was that the kind of pooling proposed in the Sufficiency Currency project does appeal to the ecovillages. It would have to be very easy to manage, but they are already managing multiple interactions in their vicinity, and systematizing that is of interest. In fact, the same kind of thinking process has come from some of the national ecovillage support networks, but it hasn’t been a priority, nor is it really within the core competencies of the coordination networks for intentional communities.

All of the communities have some form of trade with the other communities in the area. Depending on the location, they might be interacting with local farms and businesses, cooperatives, or other ecovillages. The agreements look different among different entities, but in general there is a looser type of trade than you would see between businesses. For example, at one ecovillage the nearby town flooded and they went down to help out, taking the volunteers with them and neglecting their harvesting work for a couple of days. There was no formal trade — it was just helping people out — and at the same time, they know that this will be helpful for them when it comes to their needs vis-à-vis the municipal government. Similarly, they had an informal agreement with a local agricultural cooperative, where they were helping with the farm work in return for some produce, but they were giving it a try for a year before they came to any formal agreement. The assumption of cooperation and reciprocation was more important than the specifics of the deal.

Designing the Currency

Designing the “representables”, or currency, for the communities is like designing any product. The first step is to get clear on the problems the community wants to solve. The way they described their problems was mostly in terms of overwhelm:

- Every contract needs to be negotiated separately and the documentation is scattered. The negotiation takes a lot of time.
- Different parties have different ideas of fairness, and fairness isn’t necessarily expressed in monetary value.
- Some items are truly scarce, while some are abundant, and monetary exchange doesn’t help them identify that type of value or seek solutions.
- Depending on the type of organization, the terms of business are different. If it’s another ecovillage, it’s a very different type of relationship than if it’s just a nearby village that doesn’t belong to the regenerative agriculture movement.

When translated into measurable currencies that can be represented in software, the Sufficiency Currency would aim, firstly, to create an easy-to-use interface for putting together contracts between two entities. Secondly, the dashboard would include the following measures:

- Fairness. Although fairness is subjective, it’s probably the most important measure to maintain for long-term relationships. For a sustainable network of ecovillages, it’s important for the members to feel the other group is dealing fairly with them.
- Scarcity and abundance. Some items are scarce, for example, the number of trucks that are available. A good representation might be dark or light colors, or thickness of a line. Others are abundant, for example, squash in the summer. Even if the trade is fair, some things might wear out over time while others are easily replenished. Representing the scarcity or abundance of something in the communities allows people to identify joint problems to solve. For example, if there aren’t enough trucks, perhaps they would train some community members to repair and assemble trucks from spare parts. One of the main functions of the currency is to help the communities take joint action.
- Reputation. Reputation would be a multi-dimensional measure that allows communities to get information about one another, such as how ecologically conscious one community is, whether they have traded fairly in the past, etc. The reputation measures need to be developed over time.

The three measures above are a start for creating an alternative to monetary trading. The goal over time will be to have the communities simply share both their resources and their challenges to grow over time as a movement.


Werdmüller on Medium

Pulmonary fibrosis and me


The moment that gave me back the rest of my life

Continue reading on Medium »

Thursday, 21. January 2021

MyDigitalFootprint

Mapping Consumer Financial Services through a new lens!


Unpacking and explaining the Peak Paradox model is here; you will need this backgrounder to understand the model so that this article makes sense (it is a 5-minute read). A new way of seeing the same thing will mean we can act differently; this is the Peak Paradox model’s core tenet. 

----

A recurring question on an executive’s mind in the finance industry is: “Why does up- or cross-selling not deliver in the way we forecast or predict?”  Cross-selling is foundational hypnosis for growth.  H0: this customer has one product, and therefore they should want my other products; all customers need all my core finance products.  With years of data, it is evident that a customer who uses my payment system does not want my other products, and after years of cross-selling we still only have 30% of customers with all products. Whilst we can conclude there is a problem with the hypothesis, we choose to ignore this fact and continue to try to upsell, as we don’t have a framework to explore why! If we try a different marketing message, this one will work (what was it about repeating the same thing and expecting a different outcome?). 

This article maps core consumer financial services offerings of Payment (spending), Saving (available, current, on-demand and surplus), Borrowings (mortgage, credit, debt, overdraft, loans) and Investment (long-term growth and or income) onto the Peak Paradox model.  

I will unpack a delta between what a bank or financial service provider says about their products’ purpose in marketing and terms, and the consumers’ purpose for the same products.  When mapped, it is evident that cross-selling will only work in a limited capacity, and in the majority of cases there is a misalignment. 

On paper, and with years of data, our upselling hypothesis appears to be misguided as the utopia for growth.  The market has introduced a plethora of new banks, neo-banks, open banking platforms, banking 1.0, 2.0, 3.0, 4.0 and more, promising digital products for a new next generation. Each new company has a model that attracts customers away from existing providers with tempting offers and then relies on upselling.  We will apply the Peak Paradox model to this perplexing and thorny topic because when we can see something through a different lens, we may decide to act differently, as we have new insights.   

Without a doubt, we all have a reason why upselling and cross-selling does not materialise to the level we predict/want/desire; but why does a customer then buy a very similar product from a different financial provider, even though we offer it?  Finding the data to confirm that we have a specific, sound, justifiable and explainable reason, one that differs from others’ interpretations, leads to much tension, anxiety, stress, argument and disagreement in our teams and enterprises, both on why we miss the targets and on what to do about it.  

We will take the core banking products in turn.  Remember, the purpose here is to provide a non-confrontational way for your team to look at the same problem from a new perspective. 

Payment (spending available, accessible and current cash resources).  Spending maps across the entire Peak Paradox map.  We need to spend to survive, so spending is essential at Peak Human Purpose; without food and water we don’t tend to last very long.   Spending on luxury goods meets a requirement for being at Peak Individual Purpose, where you look after yourself.  Giving gifts to friends, philanthropy and donations to charity moves the coverage to Peak Society Purpose. Finally, we cannot work without payments, so payments get us towards Peak Work Purpose, covering the entire map.

The observation is that only at this moment in time, when you are using your payment service, can you tell whether the payment provider’s values reflect your purpose or whether you feel there is a conflict.  Users have started to use different payment mechanisms to reflect an alignment between their use and how they perceive the payment provider’s own marketing.  But does that allow them to use this payment provider for everything, or do they have two or more to cope with conflicts?  Can the customer hold the conflict of using the same ethical payment platform to gamble and buy adult services as they use to give to charity?  How does the market market payment services?

Saving (surplus).  Savings tend not to be used for day-to-day survival (yes, individuals often find they have to as a reality of our economic mess), therefore saving does not tend to feature towards Peak Human Purpose for most consumers.  However, if savings are for a house and the house is attractive to a mate for reproduction, it can be argued that there is a connection to Peak Human Purpose, but is this saving or borrowing (loans)?   More often, though, savings are used to further your own individual purpose or to help those closest to you, which means there is at least some societal benefit.  (A gift to charity will be in payments; payments on death to a charity by a Will is a different dimension to be explored; more case studies will help your team explore this in more detail.)    Whilst not explicit, some savers understand that the economics of saving means their savings become borrowings (lending) for others; my saving has an indirect benefit for society. Fewer savers may realise that spending savings is a better way of benefiting a societal purpose, as it encourages economic activity and growth.   The point is that “saving” is positioned differently on the Peak Paradox model from your payment service.  How your company positions these products will have a direct impact on how users perceive them.  Indeed, how your competitors and the media report on these products directly affects user positioning.  We seek the delta between your marketing, the market’s position, the media view, and the users’ own opinion. 

Borrowings (mortgage, credit, debt, overdraft, loans).  Borrowings at a consumer retail banking level carry terms and conditions for an individual or a couple who equally take on the responsibility, meaning that borrowings serve individual needs more than society’s.  Since the intended use of the borrowings is vetted by the provider, and a company’s processes and regulation (compliance) apply, borrowings focus on Peak Individual Purpose.  Borrowings give the person(s) more agency, and one can argue more freedom, but that depends on responsible lending to ensure that levels of debt are not a burden, which sadly is not always the case.  Borrowings to give to someone else still have to be repaid, and when you look at the terms and the processes, borrowing to give away is not an acceptable practice, whereas guarantees are.   

Therefore, borrowings have a different position on the Peak Paradox model.   There is a position where lending does support more basic human survival (payday loans), but this creates tension and conflict for the users of borrowing products between two Peak Purposes.  Debt can also be used as part of a personal guarantee to provide working capital for a business or enterprise.  This means that borrowings are dragged towards Peak Work Purpose, and whether the borrowings are for growth (thriving) or survival creates different tensions again.   Consider: customers are not buying a specific product; they are buying a generic idea.  When these ideas become confused, it creates tensions.  We might like to give them cool marketing names to deceive ourselves about what is being offered, but it is quite evident in the terms. 

Investment (long-term growth and/or income from the investment, capital at risk).  The final bucket considered here. Pre-crypto, climate and ethical investing, investment occupied an area between Peak Individual Purpose and Peak Work Purpose, with variance in risk and return creating an area.  As real-time trading led to no responsibility for the shareholder, the area has shifted towards Peak Individual Purpose.  Crypto, angel, seed and start-up investing has pushed the upper boundary even further towards Peak Individual.  However, social and impact investing (labelled here as ESG for convenience) has created a broader and wider market. Those who seek more ethical ideals that align with their own position on the map mean the investment is also firmly heading into Peak Society Purpose.  Such diversity.   The same questions need to be reflected on: how is the positioning of investment products aligned to the individual’s need and purpose? Is there a gap between brand, marketing and overall positioning?  


What do we learn? 

Financial services are a very cluttered landscape. Just offering slicker, quicker, faster, less-paperwork and sexy digital services avoids the very core of the problem that the Peak Paradox model exposes.  Yes, we can play with terms and marketing. We can create great names, become ever more personalised and seemingly more differentiated in niche services, but fundamentally we avoid the conflicts and tensions this creates. 

Innovation.  A term on the lips of every board, executive and manager in the finance industry.  We have to be more innovative.  We have clearly been more “clever” in how we bundle, describe, consider risk and differentiate our core products. Still, we appear not to have aligned our corporate purpose with our products’ purpose and with the customers’ purpose for financial services.   Perhaps we should look at innovation in this area. 

Positioning. Suppose we attracted a customer with a faster payment method for essential services, positioned towards Peak Human Purpose and Peak Social Purpose. Why would that customer naturally consider other services until you have either developed trust or re-educated them on your brand position?  Fabulous branding for attracting new customers may generate the numbers and look good in the business plan, until you need that same “easy to attract” customer to buy something else. 

This is tricky, as it might mean rewriting marketing and positioning, and will the marketing/branding team understand this, given that it could affect their KPIs and bonus?    Such actions also require time and reflection, always the most precious things in an early-stage growth company, as there are far too many jobs to be done.   

There is a delta between an individual consumer’s perspective of the financial products they are using and the marketing position/branding you offer, and different core products align to different areas, meaning that there is unlikely to be a natural cross-selling opportunity, with one exception. There is an alignment of all the products focussed at Peak Individual Purpose. Maybe that is why High Net Worth (HNW) teams, wealth management and those looking after the ultra-wealthy in banks appear to have a very successful and aligned business. 

Trust. This asks how to explore the alignment between your product’s purpose and the consumer’s purpose, and how this correlates to the “trust” in your brand. If there is a high R², does it lead to a propensity to utilise more than one product from a financial institution? 


We need to add pensions, tax, gifts, inheritance, B2B, business, corporate and many other financial services to complete the picture, and then the role of the regulator and whose purpose they are working to protect! Anyone up for mapping one company's financial products?



Werdmüller on Medium

How to startup like a bro


The complete guide to crushing it

Continue reading on Medium »

Wednesday, 20. January 2021

Werdmüller on Medium

Do No Harm


It’s been the guiding rule for my entire career. But I was applying it wrong.

Continue reading on The Startup »

Monday, 18. January 2021

Identity Woman

Podcast: The Domains of Identity and SSI


I was on the UbiSecure Podcast where I talked about The Domains of Identity and SSI. You can also listen to it on  Apple, Google, Spotify etc.

The post Podcast: The Domains of Identity and SSI appeared first on Identity Woman.

Sunday, 17. January 2021

Aaron Parecki

The Perfect Remote Control ATEM Mini Interview Kit


This tutorial will walk you through setting up an ATEM Mini Pro kit you can ship to a remote location and then control from your studio. You can use this to ship a remote interview kit to someone where all they have to do is plug in a few connections and you'll be able to control everything remotely!

The overall idea is you'll ship out an ATEM Mini Pro (or ISO), a Blackmagic Pocket Cinema Camera 4K (or any other camera, but the Pocket 4K can be controlled by the ATEM!), and a Raspberry Pi. The guest will connect the Raspberry Pi to their home network, connect the camera's HDMI to the ATEM, turn everything on, and you'll immediately be able to control the remote end yourself!

The remote ATEM can then stream to any streaming service directly, or if you have a Streaming Bridge, you can get a high quality video feed of the remote guest brought directly into your studio!

Devices Used in this Tutorial

- ATEM Mini Pro or ATEM Mini Pro ISO
- GL.iNet AR750S Travel Router
- Raspberry Pi 4 (a Pi 3 will also work, but they aren't that much cheaper anyway)
- a MicroSD Card for the Raspberry Pi, I like the 32gb A2 cards
- a USB Ethernet adapter to get an additional ethernet port on the Raspberry Pi
- Blackmagic Pocket Cinema Camera 4K (any camera will do, but the Pocket 4K can be controlled remotely too!)
- Blackmagic Streaming Bridge to receive the remote ATEM video feed in your studio

Set up the Studio Side

We'll start with setting up the studio side of the system. This is where your computer to control the remote ATEM will be, and this end will have the VPN server. If you have a Streaming Bridge, it would be on this end as well in order to receive the streaming video feed from the remote ATEM.

First we're going to set up the GL.iNet router as a Wireguard server.

Plug your computer in to one of the two ethernet ports on the right, or connect to its WiFi hotspot. There's a sticker on the bottom showing the default wifi name and password as well as the IP address of the router. (It's probably 192.168.8.1).

Open that address in your browser and it will prompt you to set up an admin password. Make sure you keep that somewhere safe like a password manager.

Set up Internet Access for the Travel Router

The first thing we need to do is get internet access set up on this router. You'll definitely want to hardwire this in to your studio network rather than use wifi. Plug an ethernet cord from your main router in the studio to the port on the left. Once that's connected, the admin dashboard should pop up a section to configure the wired connection. You can use DHCP to get it on your network. Eventually you'll want to give this device a fixed IP address by going into your main router and setting a static DHCP lease. The specifics of that will depend on your own router so I'll leave that up to you to look up.

In my case the travel router's IP address is 10.10.12.102.

Configure the Wireguard Server

Now we're ready to set up the travel router as a Wireguard server. Wireguard is relatively new VPN software that is a lot faster and easier to use compared to older VPN software like OpenVPN. It's conveniently already built in to the travel router as well, making it an excellent option for this.

Go to the VPN menu on the side and expand it to reveal the Wireguard Server tab.

Click "Initialize Wireguard Server" and you'll be able to set it up. Enable the "Allow Access Local Network" toggle, and change the "Local IP" to 172.16.55.1. (It doesn't really matter what IP range you use for the Wireguard interface, but this address is unlikely to conflict with your existing network.)

Now you can click "Start", and then go into the "Management" tab.

Click the big "Add a New User" button. Give it a name like "RemoteATEM". This will create a config for it which you'll use to set up the remote side.

Click on the icon under "Configurations" and click the "Plain Text" tab.

Copy that text into a text editor (not Word or Google Docs!). We're going to make a few changes to make it look like the below.

[Interface]
Address = 172.16.55.2/32
ListenPort = 42170
PrivateKey = <YOUR PRIVATE KEY>

[Peer]
AllowedIPs = 172.16.55.0/24, 192.168.8.0/24
Endpoint = <YOUR IP ADDRESS>:51820
PersistentKeepalive = 25
PublicKey = <YOUR PUBLIC KEY>

We don't want the ATEM to send the video feed over the VPN, so change the AllowedIPs line to: 172.16.55.0/24, 192.168.8.0/24. If it's set to 0.0.0.0/0 then all the traffic on the remote end will be funneled through your VPN server and your studio's internet connection. That's usually what you want when you're using a VPN for privacy, but we don't want to add latency to sending the video feed if you're streaming from the remote end to YouTube directly. The IP address in the Endpoint = line is the public IP address of your studio network, so make sure you leave that line alone. You can remove the DNS = line since we aren't routing all network traffic through the VPN. Make sure you keep the PrivateKey and PublicKey that your router generated though!
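As a hedged aside (the Raspberry Pi walkthrough below is the authoritative path): on a stock Linux WireGuard client, this edited text would typically be saved as /etc/wireguard/wg0.conf and brought up with wg-quick, for example:

# assumes you saved the edited config as RemoteATEM.conf (any filename works);
# wg and wg-quick come from the wireguard/wireguard-tools package on Debian-based systems
sudo install -m 600 RemoteATEM.conf /etc/wireguard/wg0.conf
sudo wg-quick up wg0
sudo wg show    # should report a recent handshake with the studio end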

This next part is the magic that makes it work. The key here is that we need to let other devices on the WireGuard server end access devices on the LAN side of the WireGuard client. Since the ATEM will be behind the WireGuard client (described in the next section), the ATEM itself wouldn't normally be visible to other things on the WireGuard server side.

Unfortunately this is the most complicated step. You'll need to edit a text file on the router by connecting to it via ssh. Open Terminal on a mac, or PuTTY on Windows.

ssh root@192.168.8.1

The root ssh password is the same admin password you created when you first set up the router. (You won't see your keystrokes as you're typing your password.)

You'll need a text editor to edit the file. If you're familiar with vi, it's already installed. If you want something easier to use, then you can install nano.

opkg update
opkg install nano

Now you can edit the file using nano:

nano /etc/config/wireguard_server

Navigate to the bottom of the file and add this line:

list subnet '192.168.5.0/24'

The file should now look something like the below

config servers
    option local_port '51820'
    option local_ipv6 'fd00:db8:0:abc::1'
    option private_key '<YOUR PRIVATE KEY>'
    option public_key '<YOUR PUBLIC KEY>'
    option access 'ACCEPT'
    option local_ip '172.16.55.1'
    option enable '1'

config peers 'wg_peer_9563'
    option name 'RemoteATEM'
    option client_key '<CLIENT PUBLIC KEY>'
    option private_key '<CLIENT PRIVATE KEY>'
    option client_ip '172.16.55.2/32'
    list subnet '192.168.5.0/24'

To save and exit, press control X, and press Y and enter when prompted. You'll need to reboot this device once this is set up, so click the "reboot" button in the top right corner of the admin panel.
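Once the router comes back up, you can optionally sanity-check the server from the command line. This assumes the wg tool is present on the router's firmware, which it normally is on WireGuard-enabled GL.iNet builds:

ssh root@192.168.8.1 wg show
# you should see the server interface listening on port 51820 with the RemoteATEM peer listed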

Set Up Port Forwarding

The last step on the studio side is to set up port forwarding from your studio router to forward port 51820 to the IP address of the WireGuard server. This is what will let the remote end be able to connect to your studio VPN. How you do this will depend on what router you use in your studio. Most routers will have a web interface to configure them which will let you set up port forwarding rules.

You'll need to know the IP address of your studio router as well as the IP address of the travel router. Create a port forwarding rule to forward port 51820 to your travel router's IP address.

For example, this is what it looks like in my Unifi router to create the forwarding rule.
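If your studio router happens to be a plain Linux box rather than something with a web UI, the same rule is a simple DNAT. This is only a sketch: it assumes eth0 is the WAN interface and that the travel router kept the example address 10.10.12.102 from above, and note that WireGuard runs over UDP:

# forward UDP 51820 arriving on the WAN side to the travel router
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 51820 -j DNAT --to-destination 10.10.12.102:51820
iptables -A FORWARD -p udp -d 10.10.12.102 --dport 51820 -j ACCEPT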

Alright, we're set on this end! You now have a WireGuard server accessible from outside of your network! If you're curious to keep experimenting with WireGuard, you can even set up your mobile phone or laptop as a WireGuard client so that you can connect back to your studio on the go!
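For example, a laptop client config would follow the same shape as the one above. This is only a sketch: the 172.16.55.3 address and the key placeholders depend on the extra user you add in the router's Management tab, which generates the real values for you.

[Interface]
Address = 172.16.55.3/32
PrivateKey = <LAPTOP PRIVATE KEY>

[Peer]
PublicKey = <SERVER PUBLIC KEY>
AllowedIPs = 172.16.55.0/24, 192.168.8.0/24
Endpoint = <YOUR IP ADDRESS>:51820
PersistentKeepalive = 25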

Next, we're ready to set up the remote end for the ATEM Mini Pro.

Set up the Remote Side for the ATEM

The remote kit side will have the ATEM Mini Pro, the camera, and the WireGuard client.

When the WireGuard client powers on, it will connect to your studio network and make the ATEM Mini accessible from your studio network.

I want to preface this section by saying I tried really hard to make this work with another GL.iNet router, but ran into a few blockers before switching to using a Raspberry Pi. As such, this step is a lot more involved and requires quite a bit more command line work than I would like. If you are reading this and happen to know h