Last Update 1:40 AM December 04, 2021 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Saturday, 04. December 2021

Ben Werdmüller

If you haven't yet, for whatever reason: ...


If you haven't yet, for whatever reason: please get vaccinated. It really matters.

Friday, 03. December 2021

Doc Searls Weblog

Remembering Kim Cameron


Got word yesterday that Kim Cameron had passed.

Hit me hard. Kim was a loving and loved friend. He was also a brilliant and influential thinker and technologist.

That’s Kim, above, speaking at the 2018 EIC conference in Germany. His topics were The Laws of Identity on the Blockchain and Informational Self-Determination in a Post Facebook/Cambridge Analytica Era (in the Ownership of Data track).

The laws were seven:

1. User control and consent
2. Minimum disclosure for a constrained use
3. Justifiable parties
4. Directed identity (meaning pairwise, known only to the person and the other party)
5. Pluralism of operators
6. Human integration
7. Consistent experience across contexts

He wrote these in 2004, when he was still early in his tenure as Microsoft’s chief architect for identity (one of several similar titles he held at the company). Perhaps more than anyone at Microsoft—or at any big company—Kim pushed constantly toward openness, inclusivity, compatibility, cooperation, and the need for individual agency and scale. His laws, and other contributions to tech, are still only beginning to have full influence. Kim was way ahead of his time, and it’s a terrible shame that his own is up. He died of cancer on November 30.

But Kim was so much more—and other—than his work. He was a great musician, teacher (in French and English), thinker, epicure, traveler, father, husband, and friend. As a companion, he was always fun, as well as curious, passionate, caring, gracious.

I am reminded of what a friend said of Amos Tversky, another genius of seemingly boundless vitality who died too soon: “Death is unrepresentative of him.”

That’s one reason it’s hard to think of Kim in the past tense, and why I resisted the urge to update Kim’s Wikipedia page earlier today. (Somebody has done that now, I see.)

We all get our closing parentheses. I’ve gone longer without closing mine than Kim did before closing his. That also makes me sad, not that I’m in a hurry. Being old means knowing you’re in the exit line, but okay with others cutting in. I just wish this time it wasn’t Kim.

Britt Blaser says life is like a loaf of bread. It’s one loaf no matter how many slices are in it. Some people get a few slices, others many. For the sake of us all, I wish Kim had more.

Here is an album of photos of Kim, going back to 2005 at Esther Dyson’s PC Forum, where we had the first gathering of what would become the Internet Identity Workshop, the 34th of which is coming up next Spring. As with many other things in the world, it wouldn’t be the same—or here at all—without Kim.


Ben Werdmüller

Fairness Friday: Jackson Women's Health Organization


I’m posting Fairness Fridays: a new community social justice organization each week. I donate to each featured organization. If you feel so inclined, please join me.

This week I’m donating to the Jackson Women’s Health Organization. Based in Jackson, Mississippi, JWHO provides important women’s health services to its community, including abortions. It is the clinic at the center of the current Supreme Court case that threatens to overturn Roe v Wade and rob 65 million women of their right to choose. It is also the only abortion clinic in the state of Mississippi.

It describes its mission as follows:

‌Jackson Women’s Health Organization (JWHO) offers affordable abortion care to women living in Mississippi and/or traveling to the state of Mississippi.

‌Our commitment is to provide confidential health care to women in a safe and professional environment. It is our conviction to respect a woman’s reproductive choices specifically regarding a woman’s right to control whether she wants to become a parent or not.

The clinic provides vital services for its community, and its fight will have a disproportionate effect on the human rights of women across America. There are few more important battles today.

I donated. If you have the means, please join me here.


US rejects calls for regulating or banning ‘killer robots’

"Speaking at a meeting in Geneva focused on finding common ground on the use of such so-called lethal autonomous weapons, a US official balked at the idea of regulating their use through a “legally-binding instrument”." It may seem laughable now, but technology improvements will make this feasible very shortly. Internationally agreed upon protections would be smart.

"Speaking at a meeting in Geneva focused on finding common ground on the use of such so-called lethal autonomous weapons, a US official balked at the idea of regulating their use through a “legally-binding instrument”." It may seem laughable now, but technology improvements will make this feasible very shortly. Internationally agreed upon protections would be smart.

[Link]


Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them

"Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods. [...] “No one has done the work you guys are doing, which is looking at the data,” said Andrew Ferguson, a law professor at American University who is a national expert on predictive policing. “This isn’t a continuation of resear

"Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods. [...] “No one has done the work you guys are doing, which is looking at the data,” said Andrew Ferguson, a law professor at American University who is a national expert on predictive policing. “This isn’t a continuation of research. This is actually the first time anyone has done this, which is striking because people have been paying hundreds of thousands of dollars for this technology for a decade.”"

[Link]


Altmode

Sussex Day 9: Brighton to London


Friday, November 12, 2021

Since it is now 2 days before our return to the United States, today was the day for our pre-trip COVID test. We were a little nervous about that because, of course, it determines whether we return as planned. Expecting a similar experience as for our Day 2 test, we were a bit surprised that this time we would have to do a proctored test where the proctor would watch us take the test via video chat. The next surprise was that you seem to need both a smartphone to run their app and some other device for the chat session. So we got out our iPads, and (third surprise) there was apparently a bug in their application causing it not to work on an iPad. So we got out my Mac laptop and (fourth surprise) couldn’t use my usual browser, Firefox, but could fortunately use Safari. Each test took about half an hour, including a 15-minute wait for the test to develop. Following the wait, a second video chat was set up where they read the test with you and issued your certificate. Very fortunately, both of our tests were negative.

We checked out of the apartment/hotel just before checkout time and stored our bags. Then the question was what to do until Celeste finished classes so we could all take the train to London. The answer was the Sea Life Brighton, apparently the oldest aquarium in the world. While not an extensive collection, many of the exhibits were in a classic style with ornate frames supporting the glass windows. There was a very enjoyable tunnel where you can sit while fish (and turtles!) swim overhead. The aquarium covered a number of regions of the world, with more of an emphasis on fresh-water fish than many others we have seen.

After browsing a bookstore for a while, we collected our bags and headed for the train station. Trains run to Victoria Station in London every half hour, and fortunately that connected well with the train Celeste took from Falmer to meet us.

After the train trip and Tube ride to Paddington Station, we walked the short distance to our hotel, a newly renovated boutique hotel called Inhabit. We chose it largely because it had nice triple rooms, including an actual bed (not sofa bed) for Celeste. No London trip would be complete without a hotel where it’s necessary to lug your bags up a flight of stairs, but fortunately this one only required a single flight. Our room was modern and comfortable.

I had booked a table at the Victoria, a pub in the Paddington area, and we were seated in a pleasant and not noisy dining room upstairs. Dinner was excellent. Upon returning to the hotel, Celeste immediately collapsed for the night on her cozy bed.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Ben Werdmüller

I've had (and am having) a number ...


I've had (and am having) a number of really hard conversations this week. I'm also really conflict-averse; a textbook people-pleaser. It's not a positive trait, and just makes it all exponentially harder. If this is also you, how have you got better at it?


John Philpin : Lifestream

Whoa!


Whoa!



Very neat Craft update in email this morning - including fil


Very neat Craft update in email this morning - including filling out Craft X - wondering what it would take to add Micro Blog to Ghost and Medium in the ‘publish to blogs section’.

The Deets

Thursday, 02. December 2021

John Philpin : Lifestream

The 9th most expensive broadband on the planet. Clearly b


The 9th most expensive broadband on the planet.

Clearly broadband is the exception in the ‘land of the free’.


Break up Twitter? I didn’t think Twitter was big enough t


Break up Twitter?

I didn’t think Twitter was big enough to break up … more dumbass thinking from an idiot.


It’s clear from his post that @dave listened to the Fresh Ai


It’s clear from his post that @dave listened to the Fresh Air interview with McCartney .. so his question in the final paragraph is strange .. I think Paul answered it in that very interview. No?


Ben Werdmüller

Abortion is a human right.


Abortion is a human right.


Simon Willison

100 years of whatever this will be


100 years of whatever this will be

This piece by apenwarr defies summarization but I enjoyed reading it a lot.


Ben Werdmüller

Non-work conversations I’ve been a part of ...


Non-work conversations I’ve been a part of lately have included improving working conditions under one fundamentalist dictatorship and helping a family safely leave another, and I’m feeling very grateful and privileged to be trusted in these spaces.


Altmode

Sussex Day 8: Hove and Skating


Thursday, November 11, 2021

While Celeste was in classes, Kenna and I set out on foot for Hove, Brighton’s “twin” city to the west. We had a rather pleasant walk through a shopping district, but there wasn’t much remarkable to see. In Hove, we turned south and followed the main road along the Channel back to the west. We stopped to look at one of the characteristic crescent-shaped residential developments, and continued toward Brighton. We considered going on the i360 observation tower, but it wasn’t particularly clear and the expense didn’t seem worth it.

Celeste and a friend of hers (another exchange student from Colorado) joined us in the afternoon to go ice skating at the Royal Pavilion Ice Rink. While I am used to hockey skates, it was a bit of an adjustment to the others who are used to the toe picks on figure skates. We all got the hang of it; the ice was beautifully maintained (although with some puddles) and the rink was not particularly crowded for our 3 pm session.

After skating we sat in the attached cafe to chat until it was time for dinner, which we had at an Italian restaurant, Bella Italia, in the Lanes.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Sussex Day 7: Pavilion and Museum


Wednesday, November 10, 2021

Celeste has a busy class schedule the early part of the day, so Kenna and I set out on our own, first for a hearty breakfast at Billie’s Cafe and then to the Royal Pavilion, one of the sightseeing highlights of Brighton. Originally a country estate, it was remodeled by King George IV into an ornate building, with the exterior having an Indian theme and the interior extensively decorated and furnished in Chinese style.

Brighton’s Royal Pavilion has had a varied history, having been of less interest to Queen Victoria (George IV’s successor on the throne), who moved most of the furnishings to London and sold the building to the City of Brighton. Over the years it has been refurnished in the original style and with many of the original furnishings, some of which have been loaned by Queen Elizabeth. The Pavilion was in the process of being decorated for Christmas, which reminded us of a visit we made two years ago to Filoli in California.

After the Pavilion, we went across the garden to the Brighton Museum, which had exhibits ranging from ancient history of the British Isles and ancient Egypt to LGBT styles of the late 20th century and modern furniture.

Having finished her classes, Celeste joined us for lunch at Itsu, one of a chain of Asian-inspired fast food restaurants. We then returned with Celeste to the museum to see a bit more and allow her time to do some research she had planned.

We then made our way behind the Pavilion, where a seasonal ice rink is set up for recreational ice skating. With its location next to the Pavilion it is a particularly scenic place to skate. We are looking forward to doing that tomorrow.

Celeste returned to campus, and Kenna and I, having had a substantial lunch, opted for a light dinner at Ten Green Bottles, a local wine bar.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


John Philpin : Lifestream

I guess ‘block’ relates to ‘square’ more than ‘chain’.


I guess ‘block’ relates to ‘square’ more than ‘chain’.

Wednesday, 01. December 2021

John Philpin : Lifestream

“I stood on a hill and I saw the Old approaching, but it c


“I stood on a hill and I saw the Old approaching, but it came as the New.”
💬 Bertolt Brecht



Ben Werdmüller

When COVID patients get new lungs, should vaccine status matter?

"About one in 10 lung transplants in the United States now go to COVID-19 patients, according to data from the United Network for Organ Sharing, or UNOS. The trend is raising questions about the ethics of allocating a scarce resource to people who have chosen not to be vaccinated against the coronavirus." Healthcare should save everybody's life, regardless of choices. But this

"About one in 10 lung transplants in the United States now go to COVID-19 patients, according to data from the United Network for Organ Sharing, or UNOS. The trend is raising questions about the ethics of allocating a scarce resource to people who have chosen not to be vaccinated against the coronavirus." Healthcare should save everybody's life, regardless of choices. But this is such a frustrating trend.

[Link]


Identity Woman

Joining Secure Justice Advisory Board


I am pleased to share that I have joined the Secure Justice Advisory board. I have known Brian Hofer since he was one of the leaders within Oakland Privacy that successfully resisted the Domain Awareness Center for Oakland. I wrote a guest blog post about a philosophy of activism and theory of change called Engaging […]

The post Joining Secure Justice Advisory Board appeared first on Identity Woman.


Ben Werdmüller

Thinking about the future in every way. ...


Thinking about the future in every way. There's A Lot to Think About.


Reading, watching, playing, using: November, 2021


This is my monthly roundup of the books, articles, and streaming media I found interesting. Here's my list for November, 2021. It’s a little shorter than normal because I spent a portion of the month offline.

Notable Articles Business

Research: People prefer friendliness, trustworthiness in teammates over skill competency. “People who are friendly and trustworthy are more likely to be selected for teams than those who are known for just their skill competency and personal reputation, according to new research from Binghamton University, State University of New York.” File under “no shit, Sherlock”: if you’ve got to work with someone every day, you want them to be kind and trustworthy, regardless of how good they actually are at their job. Ideally, you want both; if you can only have one, the person who’s a better human will and should win out every time.

Remote work will break the US monopoly on global talent. “Tech companies based in San Francisco and Seattle have “innovation hubs” whose primary role is to create a place for talent that hasn’t been able to get a visa to the US. We’ve also started to see this in places like Lagos and Buenos Aires. Nigerian developers can work alongside startups in Berlin and London, while Argentinian developers work as consultants for companies based in the US. We’re going to be seeing a lot more of this now that remote work is more widely accepted by companies worldwide.” This is a really positive change.

Putting Post Growth Theory Into Practice. “The Post Growth Entrepreneurship Incubator helps founders break free from traditional business models and implement sustainable non-extractive practices. […] We promote cross-subsidizing charity with our businesses, and we’re trying to offer an alternative for startup founders who want to bring their activist, artistic, spiritual business ideas to life without selling out in the commercial startup ecosystem. Too much of the startup ecosystem uses the Silicon Valley model of ‘capital, scale, exit.’ Instead we’re promoting: bootstrapping, flat growth, and non-extraction.”

Theranos patient says blood test came back with false positive for HIV. “Erin Tompkins, who got her blood drawn from a Theranos device at a Walgreens in Arizona, said the test misdiagnosed her as having an HIV antibody, sending her into a panic.”

Crypto

The Token Disconnect. “Silicon Valley ran dry on large breakthroughs in software, so we decided to invent the “blockchain”, a simulacrum of innovation that organically fermented from the anti-institutional themes in the Western zeitgeist to spawn an absurdly large asset bubble with absolutely nothing at the center. There is no there there, and crypto morphed into a pure speculative mania which attracted a fanatic quasi-religious movement fueled by gambling addiction and the pseudo-intellectual narrative economics of the scheme. All conversation around crypto is now simply the sound and fury of post-hoc myth making to rationalize away the collective incoherence of the bubble in a near perfect exemplar of the motivated reasoning of economic determinism.” Sharing because it’s an interesting take; I don’t necessarily agree with everything here.

Culture

Appalling Monica Lewinsky Jokes—And the Comedians Who’ve Apologized. “But in the two-plus decades since those jokes were made, some comedians have taken responsibility for their cruel comedy. Ahead, a rundown of some of the hosts and comedy programs that targeted Lewinsky and Tripp—and the parties who have since publicly taken responsibility for their hurtful barbs over the years.”

Belgian gallery uses art after being turned down by artist. “The friendly stranger who clocked the familiar image asked the gallery about it, and a representative allegedly claimed they’d been in touch with Bateman and worked something out. Bateman searched her email and found a permission request from the gallery, dated in March—which she had politely declined and promptly forgotten about. Somehow, what the gallery had taken away from the exchange was that it could just use her work anyway.” I used to share an office with Hallie and have followed her journey. (My current Twitter avatar - a picture of me - was drawn by her.) This gallery’s actions were a very unfair devaluation of her work and her rights as an artist.

Conservative MP Nick Fletcher Blames Crime On Female Doctor Who. Doesn’t he look tired?

Media

The global streaming boom is creating a severe translator shortage. “Training a new generation of translators to meet this supply issue in certain translation hot spots will take time, and most importantly, better compensation, said Lee, whose company Iyuno-SDI operates in over 100 languages and routinely clocks in over 600,000 episodes of translations every year. Lee said that roughly one in 50 applicants are able to pass Iyuno-SDI’s translator qualification exam. “I don’t think we’re happy with even 10% or 15% of who we work with,” he said. “We just have no other options because there’s just not enough professional translators.””

Danny Fenster, U.S. Journalist in Myanmar, Gets 11 Years in Jail. “The sentence seemed to be the latest signal that Myanmar’s military, which seized power in February, would not bow to pressure, including sanctions, from the United States and other countries. The State Department has repeatedly called for Mr. Fenster’s release.” Imprisoned by a despotic regime and failed comprehensively by the US.

How Facebook and Google fund global misinformation. “An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.”

Politics

Secret recordings of NRA officials after Columbine school shooting show strategy. “In addition to mapping out their national strategy, NRA leaders can also be heard describing the organization’s more activist members in surprisingly harsh terms, deriding them as “hillbillies” and “fruitcakes” who might go off script after Columbine and embarrass them.”

It’s not ‘polarization.’ We suffer from Republican radicalization. “The polarization argument too often treats both sides as equally worthy of blame, characterizing the problem as a sort of free-floating affliction (e.g., “lack of trust”). This blurs the distinction between a Democratic Party that is marginally more progressive in policy positions than it was a decade ago, and a Republican Party that routinely lies, courts violence and seeks to define America as a White Christian nation.”

Spotsylvania School Board orders libraries to remove 'sexually explicit' books. Here’s why this is of note: ”“I think we should throw those books in a fire,” Abuismail said, and Twigg said he wants to “see the books before we burn them so we can identify within our community that we are eradicating this bad stuff.”” Holy shit.

Science

Octopuses, crabs and lobsters to be recognised as sentient beings under UK law following LSE report findings. “Octopuses, crabs and lobsters will receive greater welfare protection in UK law following an LSE report which demonstrates that there is strong scientific evidence that these animals have the capacity to experience pain, distress or harm.”

What would health experts do? 28 share their holiday plans amid Covid-19. “To try to gauge where things stand, we asked a number of infectious diseases experts about the risks they are willing to take now, figuring that their answers might give us a sense of whether we’re making our way out of the woods.”

Society

Gresham High students speak out against school resource officers. “Group member Stasia recalled being accused of carrying drugs by a staff member. “I was told that I would end up like Breonna Taylor if I had a substance on me that I shouldn’t have had,” Stasia said, referencing a Black woman killed by police in Louisville, Kentucky.” Police officers and guns don’t belong in schools. Period.

38% of US adults believe government is faking COVID-19 death toll. “The finding is likely unsettling to the surviving loved ones of the nearly 756,000 Americans who have already died of COVID-19. It also squares with previous survey results from KFF showing that personally knowing someone who became severely ill or died of COVID-19 was one of the strongest motivators for convincing unvaccinated people to get vaccinated.”

Experience: I taught two dogs to fly a plane. “I have trained a 190kg boar to pretend to attack an actor, a cat to plunge shoulder-deep into water as if catching a fish and a cockatoo to winch up a bucket, take out a coin and drop it into a piggy bank. But when a TV company asked if I could teach a dog to fly a plane, I faced the toughest challenge of my career.”

Work is no longer the meaning of life for some Americans. “Before the coronavirus pandemic, nearly one quarter of all Americans said that they find meaning and purpose in their lives because of their work and their jobs. Now, that number has declined by more than 9% in a new Pew research study, affirming anecdotal stories about the American population’s increasing disinterest in participating in the labor market.” To be honest: good.

ICU is full of the unvaccinated – my patience with them is wearing thin. “Translating this to the choice not to take the vaccine, however, I find my patience wearing thin. I think this is for a number of reasons. Even if you are not worried about your own risk from Covid, you cannot know the risk of the people into whose faces you may cough; there is a dangerous and selfish element to this that I find hard to stomach.”

The abolitionist history of pumpkin pie and Thanksgiving. “The Northern farmer, just by existing, was a natural-born abolitionist, she argued. Pumpkin pie and Thanksgiving were celebrations of a better, more godly way of agriculture without the institution of slavery.”

Since the Thanksgiving Tale Is a Myth, Celebrate It This Way. “It was the Wampanoag in 1621 who helped the first wave of Puritans arriving on our shores, showing them how to plant crops, forage for wild foods and basically survive. The first official mention of a “Thanksgiving” celebration occurs in 1637, after the colonists brutally massacre an entire Pequot village, then subsequently celebrate their barbaric victory.”

Why overly kind and moral people can rub you up the wrong way. “All this means that altruistic behaviour can make us walk a metaphorical tightrope. We need to balance our generosity perfectly, so that we are seen as cooperative and good, without arousing the suspicion that we are acting solely for the status.”

Hanukkah’s darker origins feel more relevant in time of rising antisemitism, intense interest in identity. ““The old message of 15 or 20 years ago was: It’s all about unity. Now it’s all about identity and difference. The Jewish story is in conflict between sameness and difference. On the one hand, our grandparents fought so hard for us to fit in, to pass, quote-unquote. We want that, but we’re conflicted. Now someone views me as ‘White,’ and it’s like: ‘No, I’m Jewish.’”” Lots to think about here, including with respect to my own identity.

The English turned Barbados into a slave society. Now, after 396 years, we’re free. “Prof Hilary Beckles, a Barbadian historian, the current vice-chancellor of the University of the West Indies and a leading figure in the push by Caribbean islands to secure reparations, sums it up best. “Barbados was the birthplace of British slave society and the most ruthlessly colonised by Britain’s ruling elites,” he writes. “They made their fortunes from sugar produced by an enslaved, ‘disposable’ workforce, and this great wealth secured Britain’s place as an imperial superpower and caused untold suffering.””

Technology

Tracy Chou's life as a tech activist: abuse, and optimism. “As an Asian-American woman who has spent much of her career calling out the gender inequities and racism embedded in Silicon Valley, Chou is all too familiar with this sort of abuse and harassment. Since 2013, when she famously urged tech companies to share data on women in technical roles, the 34-year-old software engineer has been a key figure in the industry’s prolonged reckoning with its culture of exclusion. But whatever progress she’s made has come at great personal cost—especially as her Twitter following has ballooned to more than 100,000 accounts. “In doing this diversity and inclusion activism work,” she says, “I built more of a profile that then exposed me to more harassment.””

Why you should prioritise quality over speed in design systems. “Speed for the sake of speed means nothing. If our design systems don’t ultimately lead to better quality experiences, we’re doing it wrong.” Not just design systems.

U.S. Treasury Is Buying Private App Data to Target People. “Two contracts obtained via a Freedom of Information Act request and shared with The Intercept by Tech Inquiry, a research and advocacy group, show that over the past four months, the Treasury acquired two powerful new data feeds from Babel Street: one for its sanctions enforcement branch, and one for the Internal Revenue Service. Both feeds enable government use of sensitive data collected by private corporations not subject to due process restrictions. Critics were particularly alarmed that the Treasury acquired access to location and other data harvested from smartphone apps; users are often unaware of how widely apps share such information.”

'Dog phone' could help lonely pooches call owners. ““Whatever form that takes, we’ve taken another step towards developing some kind of ‘dog internet’, which gives pets more autonomy and control over their interaction with technology,” she added.”


MyDigitalFootprint

"Hard & Fast" Vs "Late & Slow"


The title might sound like a movie but this article is about unpacking decision making.  

We need leaders to be confident in their decisions so we can hold them accountable. We desire leaders to lead, wanting them to be early. They achieve this by listening to the signals and reacting before it is obvious to the casual observer. However, those in leadership whom we hold accountable do not want to make the “wrong” decisions. A wrong decision can mean liability, loss of reputation, or being perceived as too risky. A long senior leadership career requires navigating a careful path between not taking too much risk by going too “early”, which leads to failure, and not being so late that anyone could have made the decision earlier, which looks incompetent. Easy leadership does not look like leadership at all, as it finds a path of being neither early nor late (the majority).

When we unpack leadership trends over the past 100 years, they include ideas such as improving margin, diversification, reduction, speed to market, finance-led decisions, data-led, customer first, agile, just-in-time, customer centricity, digital first, personalisation, automated decisions, innovation, transformation, ethics, diversity, privacy by design, shareholder primacy, stakeholder management, re-engineering, and outsourcing, to name a few. Over the same period, our ideas about leadership styles have also evolved.

There is an inference or hypothesis that we can test, which is that our approach to risk means we have the leaders that we now deserve. Whether our risk creates the leadership we have, or leadership manages risk to what we want, is a cause-and-effect problem that results from the complex market we have.

The Ladder of Inference below is a concept developed by the late Harvard Professor Chris Argyris to help explain why different people looking at the same set of evidence can draw very different conclusions. The point is that we want leadership with the courage to make decisions that are “hard and fast”, but what we get is “late and slow”. Being data-led, waiting for the data, and following the model all confirm that the decisions we are taking are late and slow. We know there is a gap; it is just hard to know why. Hard and fast occurs when there is a lack of data or evidence, and it rests on judgment rather than confirmation, the very things we value but penalise at the same time.


Right now we see this in how governments have reacted to COVID. With hindsight we can conclude that no country's leadership got it right, and the majority appear to continue to get it wrong, in the belief that voters will not vote for them if they take the hard choices. “Follow the science” and “follow the data” make sure we are late and slow.

Climate change and COP26. There will never be enough data, and waiting for more data confirms our need to manage to a risk model that does not give the environment the same weight as finance.

Peak Paradox

The Peak Paradox framework forces us to address the question “what are we optimising for?” Previous articles have highlighted the issues with decision making at Peak Paradox; at each point we should also consider the leadership style: “Hard & Fast” versus “Late & Slow”.


The Peak Paradox model gives us a position in space. At each point where we are, thinking about “hard & fast” vs “late & slow” introduces a concept of time and direction into the model.



Simon Willison

Weeknotes: Shaving some beautiful yaks


I've been mostly shaving yaks this week - two in particular: the Datasette table refactor and the next release of git-history. I also built and released my first Web Component!

A Web Component for embedding Datasette tables

A longer term goal that I have for Datasette is to figure out a good way of using it to build dashboards, tying together summaries and visualizations of the latest data from a bunch of different sources.

I'm excited about the potential of Web Components to help solve this problem.

My datasette-notebook project is a very early experiment in this direction: it's a Datasette notebook that provides a Markdown wiki (persisted to SQLite) to which I plan to add the ability to embed tables and visualizations in wiki pages - forming a hybrid of a wiki, dashboarding system and Notion/Airtable-style database.

It does almost none of those things right now, which is why I've not really talked about it here.

Web Components offer a standards-based mechanism for creating custom HTML tags. Imagine being able to embed a Datasette table on a page by adding the following to your HTML:

<datasette-table url="https://global-power-plants.datasettes.com/global-power-plants/global-power-plants.json" ></datasette-table>

That's exactly what datasette-table lets you do! Here's a demo of it in action.

This is version 0.1.0 - it works, but I've not even started to flesh it out.

I did learn a bunch of things building it though: it's my first Web Component, my first time using Lit, my first time using Vite and the first JavaScript library I've ever packaged and published to npm.

Here's a detailed TIL on Publishing a Web Component to npm encapsulating everything I've learned from this project so far.

This is also my first piece of yak shaving this week: I built this partly to make progress on datasette-notebook, but also because my big Datasette refactor involves finalizing the design of the JSON API for version 1.0. I realized that I don't actually have a project that makes full use of that API, which has been hindering my attempts to redesign it. Having one or more Web Components that consume the API will be a fantastic way for me to eat my own dog food.

Link: rel="alternate" for Datasette tables

Here's an interesting problem that came up while I was working on the datasette-table component.

As designed right now, you need to figure out the JSON URL for a table and pass that to the component.

This is usually a case of adding .json to the path, while preserving any query string parameters - but there's a nasty edge-case: if the name of your SQLite table itself ends with the string .json (which could happen! Especially since Datasette promises to work with any existing SQLite database) the URL becomes this instead:

/mydb/table.json?_format=json
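For illustration only, here is a rough Python sketch of that derivation; json_url_for is a hypothetical helper written for this note, not something datasette-table ships with:

from urllib.parse import urlsplit, urlunsplit

def json_url_for(table_url):
    # Naive derivation of the JSON URL for a Datasette table page:
    # add ".json" to the path while keeping any query string intact.
    scheme, netloc, path, query, fragment = urlsplit(table_url)
    if path.endswith(".json"):
        # Edge case: the table name itself ends in ".json", so ask for
        # the JSON representation with ?_format=json instead.
        query = f"{query}&_format=json" if query else "_format=json"
    else:
        path = path + ".json"
    return urlunsplit((scheme, netloc, path, query, fragment))

# json_url_for("https://latest.datasette.io/fixtures/facetable?_size=10")
# returns "https://latest.datasette.io/fixtures/facetable.json?_size=10"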

Telling users of my component that they need to first construct the JSON URL for their page isn't the best experience: I'd much rather let people paste in the URL to the HTML version and derive the JSON from that.

This is made more complex by the fact that, thanks to --cors, the Web Component can be embedded on any page. And for datasette-notebook I'd like to provide a feature where any URLs to Datasette instances - no matter where they are hosted - are turned into embedded tables automatically.

To do this, I need an efficient way to tell that an arbitrary URL corresponds to a Datasette table.

My latest idea here is to use a combination of HTTP HEAD requests and a Link: rel="alternate" header - something like this:

~ % curl -I 'https://latest.datasette.io/fixtures/compound_three_primary_keys'
HTTP/1.1 200 OK
date: Sat, 27 Nov 2021 20:09:36 GMT
server: uvicorn
Link: https://latest.datasette.io/fixtures/compound_three_primary_keys.json; rel="alternate"; type="application/datasette+json"

This would allow a (hopefully fast) fetch() call from JavaScript to confirm that a URL is a Datasette table, and get back the JSON that should be fetched by the component in order to render it on the page.
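As a rough sketch of that check (in Python rather than the component's JavaScript, and assuming the Link header format shown in the curl example above, which is still only a prototype):

import urllib.request

def datasette_json_alternate(url):
    # A HEAD request is a cheap way to inspect headers without fetching the body.
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        link = response.headers.get("Link", "")
    # Looking for something like:
    #   .../table.json; rel="alternate"; type="application/datasette+json"
    if 'rel="alternate"' in link and 'type="application/datasette+json"' in link:
        return link.split(";")[0].strip()
    return None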

I have a prototype of this in Datasette issue #1533. I think it's a promising approach!

It's also now part of the ever-growing table refactor. Adding custom headers to page responses is currently far harder than it should be.

sqlite-utils STRICT tables

SQLite 3.37.0 came out at the weekend with a long-awaited feature: STRICT tables, which enforce column types such that you get an error if you try to insert a string into an integer column.

(This has been a long-standing complaint about SQLite by people who love strong typing, and D. Richard Hipp finally shipped the change for them with some salty release notes saying it's "for developers who prefer that kind of thing.")

I started researching how to add support for this to my sqlite-utils Python library. You can follow my thinking in sqlite-utils issue #344 - I'm planning to add a strict=True option to methods that create tables, but for the moment I've shipped new introspection properties for seeing if a table uses strict mode or not.
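As a minimal illustration of what STRICT enforcement looks like, here is a sketch using Python's built-in sqlite3 module (not sqlite-utils), assuming the interpreter is linked against SQLite 3.37.0 or newer:

import sqlite3

print(sqlite3.sqlite_version)  # needs to be 3.37.0 or newer for STRICT

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, value INTEGER) STRICT")
conn.execute("INSERT INTO measurements (value) VALUES (42)")  # accepted
try:
    # A TEXT value that cannot be coerced to INTEGER is rejected by a STRICT table
    conn.execute("INSERT INTO measurements (value) VALUES ('not a number')")
except sqlite3.Error as error:
    print("Rejected:", error)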

git-history update

My other big yak this week has been work on git-history. I'm determined to get it into a stable state such that I can write it up, produce a tutorial and maybe produce a video demonstration as well - but I keep on finding things I want to change about how it works.

The big challenge is how to most effectively represent the history of a bunch of different items over time in a relational database schema.

I started with an item table that presents just the most recent version of each item, and an item_version table with a row for every subsequent version.

That table got pretty big, with vast amounts of duplicated data in it.

So I've been working on an optimization where columns are only included in an item_version row if they have changed since the previous version.

The problem there is what to do about null - does null mean "this column didn't change" or does it mean "this column was set from some other value back to null"?

After a few different attempts I've decided to solve this with a many-to-many table, so for any row in the item_version table you can see which columns were explicitly changed by that version.
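A minimal sketch of that shape is below; the table and column names are illustrative guesses, not the actual schema git-history creates:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- most recent version of each tracked item
CREATE TABLE item (_id INTEGER PRIMARY KEY, name TEXT, status TEXT);

-- one row per observed version; a column is NULL unless it changed
CREATE TABLE item_version (
    _id INTEGER PRIMARY KEY,
    _item INTEGER REFERENCES item(_id),
    name TEXT,
    status TEXT
);

-- many-to-many table recording which columns each version explicitly changed,
-- so NULL-because-unchanged can be told apart from explicitly-set-back-to-NULL
CREATE TABLE columns (_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE version_changed_columns (
    _version INTEGER REFERENCES item_version(_id),
    _column INTEGER REFERENCES columns(_id)
);
""")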

This is all working pretty nicely now, but still needs documentation, and tests, and then a solid write-up and tutorial and demos and a video... hopefully tomorrow!

One of my design decisions for this tool has been to use an underscore prefix for "reserved columns", such that non-reserved columns can be safely used by the arbitrary data that is being tracked by the tool.

Having columns with names like _id and _item has highlighted several bugs with Datasette's handling of these column names, since Datasette itself tries to use things like ?_search= for special query string parameters. I released Datasette 0.59.4 with some relevant fixes.

A beautiful yak

As a consummate yak shaver, I am absolutely delighted by this beautiful yak that showed up on Reddit a few weeks ago. I've not been able to determine the photography credit.

Releases this week

s3-credentials: 0.7 - (7 releases total) - 2021-11-30
A tool for creating credentials for accessing S3 buckets

datasette: 0.59.4 - (102 releases total) - 2021-11-30
An open source multi-tool for exploring and publishing data

datasette-table: 0.1.0 - 2021-11-28
A Web Component for embedding a Datasette table on a page

TIL this week

Pausing traffic and retrying in Caddy
Publishing a Web Component to npm
Reusing an existing Click tool with register_commands
Ignoring a line in both flake8 and mypy

Tuesday, 30. November 2021

John Philpin : Lifestream

🎵🎶🎼 What do you call the first 10 minutes of a song by Yes?


🎵🎶🎼 What do you call the first 10 minutes of a song by Yes?

‘The Introduction’

… as heard on ‘The Rockentours’ with Rick Wakeman


Altmode

Sussex Day 6: Downtime


Tuesday, November 9, 2021

Somewhat at the midpoint of our trip, it was time to take care of a few things like laundry. It’s also time for the thrice-annual Internet Engineering Task Force meeting, which was supposed to be in Madrid, but is being held online (again) due to the pandemic. I co-chaired a session from noon to 2 pm local time today, so I needed to be at the hotel for that. Meanwhile Kenna and Celeste did some exploring around the little shops in the Brighton Lanes.

Our downtime day also gave us an opportunity to do some laundry. One of the attractive features of our “aparthotel” is a compact combination washer/dryer. Our room also came with a couple of detergent pods, which were unfortunately and unexpectedly heavily scented. We will be using our own detergent in the future. The dryer was slow, but it did the job.

IETF virtual venue

I am again thankful for the good internet service here; the meeting went without a hitch (my co-chair is in Melbourne, Australia). Kenna and Celeste brought lunch from Pret a Manger to eat between meeting sessions I needed to attend. Following the second session we went off for dinner at a pizza place we had discovered, Franco Manca. The pizza and surroundings were outstanding; we would definitely return (and Celeste probably will). We then saw Celeste off to her bus back to campus and we returned to our hotel.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Matt Flynn: InfoSec | IAM

Introducing OCI IAM Identity Domains


A little over a year ago, I switched roles at Oracle and joined the Oracle Cloud Infrastructure (OCI) Product Management team working on Identity and Access Management (IAM) services. It's been an incredibly interesting (and challenging) year leading up to our release of OCI IAM identity domains.

We merged an enterprise-class Identity-as-a-Service (IDaaS) solution with our OCI-native IAM service to create a cloud platform IAM service unlike any other. We encountered numerous challenges along the way that would have been much easier if we allowed for customer interruption. But we had a key goal to not cause any interruptions or changes in functionality to our thousands of existing IDaaS customers. It's been immeasurably impressive to watch the development organization attack and conquer those challenges.

Now, with a few clicks from the OCI admin console, customers can create self-contained IDaaS instances to accommodate a variety of IAM use-cases. And this is just the beginning. The new, upgraded OCI IAM service serves as the foundation for what's to come. And I've never been more optimistic about Oracle's future in the IAM space.

Here's a short excerpt from our blog post Introducing OCI IAM Identity Domains:

"Over the past five years, Oracle Identity Cloud Service (IDCS) has grown to support thousands of customers and currently manages hundreds of millions of identities. Current IDCS customers enjoy a broad set of Identity and Access Management (IAM) features for authentication (federated, social, delegated, adaptive, multi-factor authentication (MFA)), access management, manual or automated identity lifecycle and entitlement management, and single sign-on (SSO) (federated, gateways, proxies, password vaulting).

In addition to serving IAM use cases for workforce and consumer access scenarios, IDCS has frequently been leveraged to enhance IAM capabilities for Oracle Cloud Infrastructure (OCI) workloads. The OCI Identity and Access Management (OCI IAM) service, a native OCI service that provides the access control plane for Oracle Cloud resources (networking, compute, storage, analytics, etc.), has provided the IAM framework for OCI via authentication, access policies, and integrations with OCI security approaches such as compartments and tagging. OCI customers have adopted IDCS for its broader authentication options, identity lifecycle management capabilities, and to provide a seamless sign-on experience for end users that extends beyond the Oracle Cloud.

To better address Oracle customers’ IAM requirements and to simplify access management across Oracle Cloud, multi-cloud, Oracle enterprise applications, and third-party applications, Oracle has merged IDCS and OCI IAM into a single, unified cloud service that brings all of IDCS’ advanced identity and access management features natively into the OCI IAM service. To align with Oracle Cloud branding, the unified IAM service will leverage the OCI brand and will be offered as OCI IAM. Each instance of the OCI IAM service will be managed as identity domains in the OCI console."

Learn more about OCI IAM identity domains


John Philpin : Lifestream

To replace my main computer I used to need to double 2 out o


To replace my main computer I used to need to double 2 out of 3 of

‘HARD DRIVE’ ‘RAM’ ‘CLOCK SPEED’

before I even thought ‘replacement’.

The Mac I want has the same RAM and Hard Drive as my current Mac.

I guess I need to develop some new gating logic!

Any ideas?


I just got my first “dear john … I’m a dying widow with a ma


I just got my first “dear john … I’m a dying widow with a massive inheritance” email in YEARS … love to know how it got through the spam filters!

Monday, 29. November 2021

John Philpin : Lifestream

Word


Word


‘Anonymized Data’ Is A Gibberish Term, And Rampant Location


‘Anonymized Data’ Is A Gibberish Term, And Rampant Location Data Sales Is Still A Problem

Exactly - people still spend too much time talking about the data - and not the connections between the data. THAT’s where the power is.


JOHN . PHILPIN . COM is working again ....


My thanks to @manton for resetting my blog that I had broken. I mean really broken.

How - because I was trying to be clever with the footer - and I clearly was not!

Now back up with the Tufte Theme - courtesy of @pimoore running as it should - and the footer kind of organised as I wanted.

Next step - now I understand the very clever nuances that Pete has included in the theme - is to reintroduce some of the emphasis I want.

I am slightly hesitant to rely too much on some of the shortcodes he has added (yet) since those short codes (I presume) will only work on this template?

More diving needed.

john.philpin.com


Altmode

Sussex Day 5: Lewes


Monday, November 8, 2021

We started our day fairly early, getting a quick Starbucks breakfast before getting on the bus to University of Sussex to meet Celeste at 9:30 am. Celeste has an hour-long radio show, “Oops That Had Banjos”, on the campus radio station, University Radio Falmer. She invited us to co-host the show. The studio was exactly as I had imagined, and it was a lot of fun doing the show with her. We each contributed a couple of songs to the playlist, and got to introduce them briefly.

After the show, Celeste had classes so we continued on to Lewes. We hadn’t been able to see much on our short visit Sunday evening. We started out at Lewes Castle & Museum, again getting an idea of the history of the place and then visiting portions of the castle itself. It was a clear day, and the view from the top was excellent. As with many of these sites, the castle went through many changes through the centuries as political conditions changed.

Lewes Barbican Gate and view from the Castle

After climbing around the castle, we were ready for lunch. We checked out a few restaurants in town before settling on the Riverside Cafe, in an attractive area on the River Ouse. After lunch, we walked among a number of small shops before entering a Waterstones bookstore. How we miss spending time in quality bookstores! I expect we’ll be seeking them out more once we return.

We then took the train back to Brighton, since I had a meeting to attend for work. The meeting went well; the internet connection at the hotel is solid and makes it seem like it hardly matters where in the world I am when attending these meetings.

Celeste came down to Brighton to have dinner with us. We decided to go with Latin American food at a local chain called Las Iguanas. The food was quite good although somewhat standard, at least to those of us from California and Colorado.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Phil Windley's Technometria

Digital Memories


Summary: Digital memories are an important component of our digital embodiment. SSI provides a foundation for self-sovereign digital memories to solve the digital-analog memory divide.

In a recent thread on the VRM mailing list, StJohn Deakins of Citizen Me shared a formulation of interactions that I thought was helpful in unpacking the discussions about data, identity, and ownership. His framework concerns analog vs digital memories.

In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and use cash to buy a coke there is no exchange of identity information. If I use a credit card it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

In this situation, the cashier, others in the store, and I all share a common experience. Even so, we each retain our own memories of the interaction. No one participating would claim to "own" the interaction. But we all retain memories which are our own. There is no single "true" record of the interaction. Every participant will have a different perspective (literally).

On the other hand, the store, as an institution, retains a memory of having sold a coke and the credit card transaction. This digital memory of the transaction can easily persist longer than any of the analog memories of the event. Because it is a digital record we trust it and tend to think of it as "true." For some purposes, say proving in a court of law that I was in the store at a particular time, this is certainly true. But for other purposes (e.g. was the cashier friendly or rude?) the digital memory is woefully anemic.

Online, we only have digital memories. And people have very few tools for saving, managing, recalling, and using them. I think digital memories are one of the primary features of digital embodiment—giving people a place to stand in the digital world, their own perspective, memories, and capacity to act. We can't be peers online without having our own digital memories.

StJohn calls this the "analog-digital memory divide." This divide is one source of the power imbalance between people and administrative entities (i.e. anyone who has a record of you in an account). CitizenMe provides tools for people to manage digital memories. People retain their own digital memory of the event. While every participant has a similar digital memory of the event, they can all be different, reflecting different vantage points.

One of the recent trends in application development is microservices, with an attendant denormalization of data. The realization that there doesn't have to be, indeed often can't be, a single source of truth for data has freed application development from the strictures of centralization and led to more easily built and operated distributed applications that are resilient and scale. I think this same idea applies to digital interactions generally. Freeing ourselves from the mindset that digital systems can and should provide a single record that is "true" will lead to more autonomy and richer interactions.

Self-sovereign identity (SSI) provides a foundation for our digital personhood and allows us not only to take charge of our digital memories but also to operationalize all of our digital relationships. Enriching the digital memories of events by allowing everyone their own perspective (i.e. making them self-sovereign) will lead to a digital world that is more like real life.

Related:

Ephemeral Relationships—Many of the relationships we have online don’t have to be long-lived. Ephemeral relationships, offering virtual anonymity, are technically possible without a loss of functionality or convenience. Why don’t they exist? Surveillance is profitable. Fluid Multi-Pseudonymity—Fluid multi-pseudonymity perfectly describes the way we live our lives and the reality that identity systems must realize if we are to live authentically in the digital sphere. Can the Digital Future Be Our Home?—This post features three fantastic books from three great, but quite different, authors on the subject of Big Tech, surveillance capitalism, and what's to be done about it.

Photo Credit: Counter Supermarket Product Shopping Shop Grocery from maxpixel (CC0)

Tags: identity ssi relationships pseudonymity


John Philpin : Lifestream

I’m not a fan of Gary Vaynerchuk’s style of oral communicati

I’m not a fan of Gary Vaynerchuk’s style of oral communication - but I don’t think there’s any doubt that he has a knack to see possibilities, learns and acts. What is an NFT? is a case in point - and it isn’t ‘oral’.

I’m not a fan of Gary Vaynerchuk’s style of oral communication - but I don’t think there’s any doubt that he has a knack to see possibilities, learns and acts.

What is an NFT? is a case in point - and it isn’t ‘oral’.


Yesterday Once More. Ever wonder why recommendation engin

Yesterday Once More. Ever wonder why recommendation engines work so well - yet still fail to find ‘new’ stuff for you? I mean really new stuff.

Yesterday Once More.

Ever wonder why recommendation engines work so well - yet still fail to find ‘new’ stuff for you?

I mean really new stuff.


Think carefully when you rent from Hertz. More Than 100 H

Think carefully when you rent from Hertz. More Than 100 Hertz Customers Are Suing The Company For Falsely Reporting Rented Vehicles As Stolen.

Statues Have Started To Attack People! Funny - Not Seriou

Statues Have Started To Attack People! Funny - Not Serious

reb00ted

Facebook's metaverse pivot is a Hail Mary pass


The more I think about Facebook’s (now Meta’s) pivot to the metaverse, the less it appears that they are doing this voluntarily. I think they have no other choice: their existing business is running out of steam. Consider:

At about 3.5 billion monthly active users of at least one of their products (Facebook, Instagram, Whatsapp etc), they are running out of more humans to sign up.

People say they use Facebook to stay in touch with family and friends. But there is now one ad in my feed for each three or four posts that I actually want to see. Add more ads than this, and users will turn their backs: Facebook doesn’t help them with what they want help with any more, it’s all ads.

While their ARPU is much higher in the US than in Europe, where in turn it is much higher than the rest of the world – hinting that international growth should be possible – their distribution of ARPU is not all that different from the whole ad market’s distribution of ad revenues in different regions. Convincing, say, Africa to spend much more on ads does not sound like a growth story.

And between the regulators in the EU and elsewhere, moves to effectively ban further Instagram-like acquisitions, lawsuits left and right, and Apple’s privacy moves, their room to manoeuvre is getting tighter, not wider.

Their current price/sales ratio of just under 10 is hard to justify for long under these constraints. They must also be telling themselves that relying on an entirely ad-based business model is not a good long-term strategy any more, given the backlash against surveillance capitalism.

So what do you do?

I think you change the fundamentals of your business at the same time you change the conversation, leveraging the technology you own. And you end up with:

Oculus as the replacement for the mobile phone;

Headset and app store sales, for Oculus, as an entirely new business model that’s been proven (by the iPhone) to be highly profitable and is less under attack by regulators and the public; it also supports potentially much higher ARPU than just ads;

Renaming the company to something completely harmless and bland sounding; that will also let you drop the Facebook brand should it become too toxic down the road.

The risks are immense, starting with: how many hours a day do you hold your mobile phone in your hand, in comparison to how many hours a day you are willing to wear a bucket on your head, ahem, a headset? Even fundamental interaction questions, architecture questions and use case questions for the metaverse are still completely up in the air.

Credit to Mark Zuckerberg for pulling off a move as substantial as this for an almost trillion dollar company. I can’t think of any company which has ever done anything similar at this scale. When Intel pivoted from memory to CPUs, back in the 1980’s and at a much smaller scale, at least it was clear that there was going to be significant, growing demand for CPUs. This is not clear at all about headsets beyond niches such as gaming. So they are really jumping into the unknown with both feet.

But I don’t think any more they had a choice.

Sunday, 28. November 2021

Altmode

Sussex Day 4: Hastings

Sunday, November 7, 2021 Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip […]

Sunday, November 7, 2021

Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip along the coast. Unfortunately, it seems like it’s an hour train ride to most sights outside Brighton.

Hastings is an attractive and somewhat touristy town, along the Channel and in a narrow valley surrounded by substantial hills. We walked through the town, stopping for a fish and chips lunch along the way, and admiring the small shops in the Old Town. We took a funicular up one of the two hills and had an excellent view of the surrounding terrain. Unfortunately, the ruins of the castle at Hastings were closed for the season.

Funicular and view from the top

After returning via funicular, we continued through the town to the Hastings Museum, a well curated (and free!) small museum that was thorough in its coverage of the history of the area, from the Iron Age to the present. It also included an extensive collection from a local family that sailed around the world in the 1800s.

Taking the train back, we had a change of trains in Lewes, which Celeste had visited and enjoyed previously. We stopped at the Lewes Arms pub, but unfortunately (since it was Sunday evening) the kitchen had closed so we couldn’t get food. So Celeste returned to campus and got dinner there, while Kenna and I got take-out chicken sandwiches to eat in our hotel.

Our weekly family Zoom conference is on Sunday evening, England time, so we ate our sandwiches while chatting with other family members back home. It’s so much easier to stay in close touch with family while traveling than it was just a few years ago.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Just a Theory

Accelerate Perl Github Workflows with Caching

A quick tip for speeding up Perl builds in GitHub workflows by caching dependencies.

I’ve spent quite a few hours on evenings and weekends recently building out a comprehensive suite of GitHub Actions for Sqitch. They cover a dozen versions of Perl, nearly 70 database versions amongst nine database engines, plus a coverage test and a release workflow. A pull request can expect over 100 actions to run. Each build requires over 100 direct dependencies, plus all their dependencies. Installing them for every build would make any given run untenable.

Happily, GitHub Actions include a caching feature, and thanks to a recent improvement to shogo82148/actions-setup-perl, it’s quite easy to use in a version-independent way. Here’s an example:

name: Test
on: [push, pull_request]
jobs:
  OS:
    strategy:
      matrix:
        os: [ ubuntu, macos, windows ]
        perl: [ 'latest', '5.34', '5.32', '5.30', '5.28' ]
    name: Perl ${{ matrix.perl }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}-latest
    steps:
      - name: Checkout Source
        uses: actions/checkout@v2
      - name: Setup Perl
        id: perl
        uses: shogo82148/actions-setup-perl@v1
        with: { perl-version: "${{ matrix.perl }}" }
      - name: Cache CPAN Modules
        uses: actions/cache@v2
        with:
          path: local
          key: perl-${{ steps.perl.outputs.perl-hash }}
      - name: Install Dependencies
        run: cpm install --verbose --show-build-log-on-failure --no-test --cpanfile cpanfile
      - name: Run Tests
        env: { PERL5LIB: "${{ github.workspace }}/local/lib/perl5" }
        run: prove -lrj4

This workflow tests every permutation of OS and Perl version specified in jobs.OS.strategy.matrix, resulting in 15 jobs. The runs-on value determines the OS, while the steps section defines steps for each permutation. Let’s take each step in turn:

“Checkout Source” checks the project out of GitHub. Pretty much required for any project.

“Setup Perl” sets up the version of Perl using the value from the matrix. Note the id key set to perl, used in the next step.

“Cache CPAN Modules” uses the cache action to cache the directory named local with the key perl-${{ steps.perl.outputs.perl-hash }}. The key lets us keep different versions of the local directory based on a unique key. Here we’ve used the perl-hash output from the perl step defined above. The actions-setup-perl action outputs this value, which contains a hash of the output of perl -V, so we’re tying the cache to a very specific version and build of Perl. This is important since compiled modules are not compatible across major versions of Perl.

“Install Dependencies” uses cpm to quickly install Perl dependencies. By default, it puts them into the local subdirectory of the current directory — just where we configured the cache. On the first run for a given OS and Perl version, it will install all the dependencies. But on subsequent runs it will find the dependencies already present, thanks to the cache, and quickly exit, reporting “All requirements are satisfied.” In this Sqitch job, it takes less than a second.

“Run Tests” runs the tests that require the dependencies. It requires the PERL5LIB environment variable to point to the location of our cached dependencies.

That’s the whole deal. The first run will be the slowest, depending on the number of dependencies, but subsequent runs will be much faster, up to the seven-day caching period. For a complex project like Sqitch, which uses the same OS and Perl version for most of its actions, this results in a tremendous build time savings. CI configurations we’ve used in the past often took an hour or more to run. Today, most builds take only a few minutes to test, with longer times determined not by dependency installation but by container and database latency.

More about… Perl GitHub GitHub Actions GitHub Workflows Caching

John Philpin : Lifestream

Has my Mac really slowed down since Apple Silicon got releas

Has my Mac really slowed down since Apple Silicon got released - or am I looking for an excuse?

Has my Mac really slowed down since Apple Silicon got released - or am I looking for an excuse?

Saturday, 27. November 2021

John Philpin : Lifestream

A Christmas Tree

A Christmas Tree

A Christmas Tree


Altmode

Sussex Day 3: Chichester and Fishbourne

Saturday, November 6, 2021 After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and […]

Saturday, November 6, 2021

After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and then found lunch at a small restaurant on a side road as many of the major restaurants in town were quite crowded (it is a Saturday, after all).



One of the main attractions in the area is the Fishbourne Roman Palace, one village to the west. We set out on foot, through a bit of rain, for a walk of a couple of miles. But when we arrived it was well worth the trip. This is an actual Roman palace, constructed in about 79AD, that had been uncovered starting in the 1960s, along with many coins, implements, and other artifacts. The mosaic floors were large and particularly impressive. As a teenager, I got to visit the ruins in Pompeii; these were of a similar nature. This palace and surrounding settlements were key to the Roman development of infrastructure in England.

Returning from Fishbourne to Chichester, we made a short visit to Chichester Cathedral. Unfortunately, the sun had set and it was difficult to see most of the stained glass. At the time of our visit, there was a large model of the Moon, traveling to several locations in Europe, that was hanging from the ceiling in the middle of the church. It was a striking thing to see, especially as we first entered.

After our train trip back from Chichester, we parted with Celeste who returned to campus. Since it was a Saturday night, restaurants were crowded, but we were able to get dinner at a large chain pub, Wetherspoons. The pub was noisy and table service was minimal. We ordered via their website and they only cleared the previous patrons’ dirty dishes when they delivered our food. The food was acceptable, but nothing to blog about.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


John Philpin : Lifestream

Definitely a problem @help - see email

Definitely a problem @help - see email

Definitely a problem @help - see email

Friday, 26. November 2021

John Philpin : Lifestream

If you’re so smart, why aren’t you rich? Turns out it’s just


If you’re so smart, why aren’t you rich? Turns out it’s just chance.

While wealth distribution follows a power law, the distribution of human skills generally follows a normal distribution that is symmetric about an average value. For example, intelligence, as measured by IQ tests, follows this pattern. Average IQ is 100, but nobody has an IQ of 1,000 or 10,000.

Wealth distribution is a recurring topic - this is a neat summary of the inequality.
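A toy illustration of that contrast (my own sketch, not from the article): sample "skill" from a normal distribution and wealth from a Pareto distribution, then compare the tails.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

iq = rng.normal(100, 15, n)                  # skills: normal distribution, thin tails
wealth = (rng.pareto(1.16, n) + 1) * 10_000  # wealth: Pareto (alpha ~1.16, the classic "80/20")

print("highest IQ in the sample:", round(iq.max()))   # far below 1,000
print("wealth, max / median:", round(wealth.max() / np.median(wealth)))
print("wealth share of the top 1%:",
      round(np.sort(wealth)[-n // 100:].sum() / wealth.sum(), 2))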


Houston - we have a problem. But there is another unseen

Houston - we have a problem. But there is another unseen problem - I am posting this to micro blog - but one of the problems is that my posts don’t reach the timeline - so @manton won’t see it … or will he - after all - I can see this post in my posts - just not on the timeline.

Houston - we have a problem.

But there is another unseen problem - I am posting this to micro blog - but one of the problems is that my posts don’t reach the timeline - so @manton won’t see it … or will he - after all - I can see this post in my posts - just not on the timeline.


Altmode

Sussex Day 2: Guy Fawkes Day

Friday, November 5, 2021 These days, “day 2” after arriving in the UK has a special implication: it is on this day that you must take and report the results of a COVID-19 test. As required, we ordered the tests and had them delivered to our hotel. After breakfast (in our room/apartment, with groceries we […]

Friday, November 5, 2021

These days, “day 2” after arriving in the UK has a special implication: it is on this day that you must take and report the results of a COVID-19 test. As required, we ordered the tests and had them delivered to our hotel. After breakfast (in our room/apartment, with groceries we picked up previously), we very methodically followed the instructions and 15 minutes later happily had negative test results. You send an image of the test stick next to the information page from your passport and a little while later they send a certificate of your test results. Presumably the results were sent to the authorities as well so they don’t come looking for us.

Celeste had classes at various times (including an 8:30 pm meeting with the Theater Department in Colorado) so Kenna and I were on our own today. We set out to explore Brighton, a city that bears resemblance to both Santa Cruz and Berkeley, California. We took a rather long walk, beginning with the shore area. We walked out to the end of Brighton Palace Pier, which includes a sizable game arcade and a small amusement park with rides at the end.

We decided to continue eastward along the shore and walked a long path toward Brighton Marina. Along the way were a variety of activities, including miniature golf, a sauna, an outdoor studio including a yoga class in session, and an electric railway. There was also quite a bit of construction, which makes sense since it’s off-season.

We arrived at the marina not entirely clear on how to approach it by foot: the major building we saw was a large parking garage. So we continued along the Undercliff Trail, the cliffs being primarily chalk. This is how we picture the White Cliffs of Dover must be (although Dover probably has higher cliffs). At the far end of the marina we found a pedestrian entrance, and walked back through the marina to find some lunch. We ate outdoors at Taste Sussex, which was quite good, although our seating area got a little chilly once the sun fell behind a nearby building.

Brighton cliffs and marina

Our return took us through the areas of the marina that didn’t look very pedestrian-friendly, but were actually OK. We took a different route back to the hotel through the Kemptown District. We’re not sure we found the main part of Kemptown but we did walk past the Royal Sussex Hospital.

We had heard about the Brighton Toy and Model Museum, and had a little time so we went looking for it. The maps indicated that it is located adjacent to the train station, so we went to the train station and wandered around for quite a while before discovering that it’s sort of under the station, accessed through a door on the side of an underpass. The museum is physically small, but they have a very extensive collection of classic toys and model trains, primarily from the early 20th century. The staff was helpful and friendly, even offering suggestions for what else to see while in the area.

We had a late lunch, so instead of going out for dinner we opted for wine/beer and a charcuterie plate from the small bar in the hotel. It included a couple of unusual cheeses (a black-colored cheddar, for example) and met our needs well.

Guy Fawkes Day is traditionally celebrated in the UK with bonfires and fireworks displays to commemorate his 1605 attempt to blow up Parliament. Although our room faces the Channel, we heard but had limited visibility to various fireworks being set off on the shore. So we again set out on foot and saw a few, but since the fireworks are unofficial (set off by random people on the beach), they were widely dispersed and unorganized.

It has been a good day for walking, with over 10 miles traveled.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


John Philpin : Lifestream

Things moving slow - so test time - posting this at 10:34am

Things moving slow - so test time - posting this at 10:34am local time. Nothing in the timeline since 10:24am local time.

Things moving slow - so test time - posting this at 10:34am local time.

Nothing in the timeline since 10:24am local time.


Drawing Parallels Between The U.S.A. and The Middle East


Maps that Show the Historical Roots of Current US Political Faultlines.

Interesting piece even of itself. But even more so if you read it and draw parallels with other parts of the world.

My interpretation is that the boundaries on this map are ‘cultural boundaries’, reflecting what people feel and how they relate to each other in the U.S.A.

Now imagine if instead of mapping the U.S.A., you mapped The Middle East, and rather than using state lines, you used the borders of countries.

In all honesty, I have seen maps like this in the past. Things like how the Kurdish culture spreads across at least 4 contiguous countries.

One difference between the States and the Middle East is that moving between states in the U.S.A. is a whole lot simpler than between countries in the Middle East.

I think this starts to expose the Middle East ‘fault lines’ in a very clear and explainable way. That is, the historical imposition of political boundaries over people might make short-term sense and ‘bring order to the world’ - but in the long term, it is ‘cultural power’ that drives the different regions of the world - and will be the final driver/decider.

This ‘culture trumps politics’ model is something that Venkatesh Rao touches on in this article from 2011. It is a long but fascinating read.

Thursday, 25. November 2021

Altmode

Sussex Day 1: University of Sussex

Thursday, November 4, 2021 As is usual for the first day following a big time change, we did not sleep well last night. The room was still too warm and we needed to open windows and get the ceiling fans to run at a reasonable (not hurricane) speed. We got up around 7:30. The plan […]

Thursday, November 4, 2021

As is usual for the first day following a big time change, we did not sleep well last night. The room was still too warm and we needed to open windows and get the ceiling fans to run at a reasonable (not hurricane) speed.

We got up around 7:30. The plan was for us to take the bus to the University of Sussex and meet Celeste at 11:00, following her morning class. We grabbed breakfast at a nearby fast-food restaurant, Leon, which was good but a bit spicier than we expected. We found that we had time to stop at a nearby Marks & Spencer store for groceries; visiting grocery stores in different countries is one of our favorite tourist activities. We purchased some groceries for our kitchen, dropped them off at our room, and caught a 10:30 bus to Falmer, where the University of Sussex is located. Bus fare, as with just about everything else, is paid for by tapping a credit card (or ApplePay in our case) on the reader. But pricing is distance sensitive, so you should tap again when you get off so you don’t pay maximum fare. We didn’t tap off, so we got overcharged a bit.

A short walk took us to Falmer House, where we met Celeste. Falmer House is the Students’ Union building, although here Students’ Union has a more literal meaning than we are accustomed to: it is also an organization that advocates on behalf of the student body, even calling strikes if necessary.

University of Sussex buildings

We walked through campus, visiting several buildings on the way to Celeste’s dormitory at the far north end. University of Sussex was founded in 1961, and all of the buildings were of similar architecture, mostly brick. Celeste’s dormitory room is relatively spacious, with an en-suite restroom and shower. It is part of a 6-room flat that shares a kitchen and common room.

Returning to the main part of campus, we had lunch at the Falmer House deli before all three of us caught the bus back to Brighton. Kenna and Celeste explored a nearby shopping mall while I had a work videoconference. After the meeting I rejoined them and we purchased dinner from a nearby Greek food truck, the first place that we had to actually use cash. We ate that in our hotel room before Celeste returned to the University to get some school work done.

We were able to watch a TV show from our home DVR on my iPad using a VPN to our home network. Thus far we haven’t run into any of the regional restrictions on viewing video content, but we may yet. We managed to stay awake until 10:30 or 11 to try to establish somewhat of a normal sleep schedule.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Simon Willison

PHP 8.1 release notes


PHP 8.1 release notes

PHP is gaining "Fibers" for lightweight cooperative concurrency - very similar to Python asyncio. Interestingly you don't need to use separate syntax like "await fn()" to call them - calls to non-blocking functions are visually indistinguishable from calls to blocking functions. Considering how much additional library complexity has emerged in the Python world from having these two different colours of functions, it's noteworthy that PHP has chosen to go in a different direction here.
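For contrast, here is a minimal Python asyncio sketch (my own illustration, not something from the release notes) of the "two colours of functions" issue: an async function has to be awaited, so the distinction is visible at every call site.

import asyncio

async def fetch_value():
    # Stand-in for a non-blocking I/O call.
    await asyncio.sleep(0.1)
    return 42

def normal_code():
    # Calling fetch_value() here only creates a coroutine object; it does not run.
    # Ordinary (blocking) code has to start an event loop to get the result.
    return asyncio.run(fetch_value())

async def async_code():
    # Async code has to mark the call explicitly with await.
    return await fetch_value()

print(normal_code())              # 42
print(asyncio.run(async_code()))  # 42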

Via Hacker News

Wednesday, 24. November 2021

Mike Jones: self-issued

JWK Thumbprint URI Specification

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This […]

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This enables JWK Thumbprints to be communicated in contexts requiring URIs, including in specific JSON Web Token (JWT) [RFC 7519] claims.
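As a rough sketch of what that involves (my own example, not text from the specification): RFC 7638 hashes a canonical JSON object containing only the key type's required members, and the new draft then expresses that value as a URI. The URN prefix in the last line is an assumption based on the draft's name; consult the specification for the exact form.

import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the required members only, in lexicographic order,
    # serialized with no whitespace, then base64url-encoded without padding.
    required = {
        "RSA": ("e", "kty", "n"),
        "EC": ("crv", "kty", "x", "y"),
        "OKP": ("crv", "kty", "x"),
    }[jwk["kty"]]
    canonical = json.dumps({k: jwk[k] for k in required},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder key material -- substitute a real JWK.
jwk = {"kty": "EC", "crv": "P-256", "x": "<base64url x>", "y": "<base64url y>"}

# Assumed URI shape; the draft defines the authoritative syntax.
print("urn:ietf:params:oauth:jwk-thumbprint:" + jwk_thumbprint(jwk))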

Use cases for this specification were developed in the OpenID Connect Working Group of the OpenID Foundation. Specifically, its use is planned in future versions of the Self-Issued OpenID Provider v2 specification.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-00.html

Altmode

Sussex Day 0: To Brighton

As has become somewhat traditional over the past several years, I am blogging a journal of our recent trip to the United Kingdom, our first overseas trip since the pandemic. Posts are delayed by 3 weeks from the actual events. November 2-3, 2021 It isn’t every day that we travel to a place that the […]

As has become somewhat traditional over the past several years, I am blogging a journal of our recent trip to the United Kingdom, our first overseas trip since the pandemic. Posts are delayed by 3 weeks from the actual events.



November 2-3, 2021

It isn’t every day that we travel to a place that the US State Department has rated as Level 4: Do Not Travel. But that’s what we’re doing today, even if that place is “only” the United Kingdom. Kenna and I are on our way to visit our daughter Celeste in Brighton, UK, where she is taking a semester abroad at the University of Sussex.

Usually we expect travel of this sort to fully occupy our day, but with the nonstop flight to London leaving San Francisco at 5:15 PM, our day got off to a leisurely start. Each of us did our usual Tuesday exercise classes, I did a bit of business work, and we got the house cleaned up a bit for our return. Our ride to the airport picked us up at 2, and we arrived in plenty of time to a virtually empty check-in counter.

One adjustment for me was that, unlike every other vacation I have been on, I did not bring a camera. I’ll be using my phone (and perhaps iPad for a few things); we’ll see how that works out. I’m sure I’ll miss my zoom lens, but I’m not sure that’s essential for this particular trip. Phone cameras are seriously good these days.

Due to some equipment problem, perhaps the entertainment system, our flight’s departure was delayed for about an hour. But once we got going, our flight proceeded smoothly (and with a minimum of turbulence, as well). Both Kenna and I got some sleep on the flight.

Our arrival at Heathrow featured the usual long walk to immigration, baggage claim, and customs. One thing that was, I think, a first for me was that the immigration process was entirely automated. Our passports were read electronically, our pictures were taken, and the gate opened without ever talking to anyone. We had our COVID-19 passenger locator forms ready, but didn’t need to present them; presumably the airline had forwarded the information when they looked at them in San Francisco or they were able to correlate the information we entered against our passport numbers.

Another brisk walk took us to the Heathrow Express station. There we purchased through tickets to Brighton, using the Two Together Railcard we had purchased prior to departure. That gave us a 30% discount for £30 a year, quite a good deal.

Heathrow Express took us to Paddington Station, from which we took the Underground to Victoria Station, which required us to lug our bags up and down a couple of flights of stairs. At Victoria we had to wait a half hour or so for our train, which allowed us to buy some sandwiches for lunch and visit a nearby bank ATM for some cash “just in case” (just about everything seems to run on credit cards these days).

We texted Celeste to let her know what train we were on, and the trip to Brighton took about an hour. We found Celeste immediately, and long-awaited hugs were exchanged before the 10 minute walk to our hotel. We are staying in a fairly new hotel, the Q Square Aparthotel (note that Q stands for Queen, not the Anon-thing). Our accommodation for the next 9 nights has a kitchen and separate bedroom, which makes it easy for us to make some of our own meals (probably breakfasts) and for me to remotely attend some of my usual work meetings. The room is great but has a few quirks: the lighting controls are unusual and the room is rather warm. But it has a balcony, and we were able to open the door for a while to mitigate the heat.

View to English Channel from our room

Kenna, Celeste, and I went for a walk to get acclimated, first to the beach, then to Brighton Palace (a local landmark) and the “Lanes”, a district with narrow, twisty roads reminiscent of the Shambles in York. We grabbed dinner at a local pub before seeing Celeste off to return to the University and returning to the hotel. Light showers had developed while we were eating, and we returned wet but not seriously soaked.


Identity Woman

Quoted in Consumer Reports article on COVID Certificates


How to Prove You’re Vaccinated for COVID-19 You may need to prove your vaccination status for travel or work, or to attend an event. Paper credentials usually work, but a new crop of digital verification apps is adding confusion. Kaliya Young, an expert on digital identity verification working on the COVID Credentials Initiative, is also […]

The post Quoted in Consumer Reports article on COVID Certificates appeared first on Identity Woman.

Tuesday, 23. November 2021

Identity Woman

Is it all change for identity?


Opening Plenary EEMA’s Information Security Solutions Europe Keynote Panel Last week while I was at Phocuswright I also had the pleasure of being on the Keynote Panel at EEMA‘s Information Security Solutions Europe [ISSE] virtual event. We had a great conversation talking about the emerging landscape around eIDAS and the recent announcement that the EU […]

The post Is it all change for identity? appeared first on Identity Woman.


Identity Woman

Cohere: Podcast

I had the pleasure of talking with Bill Johnston who I met many years ago via Forum One and their online community work. It was fun to chat again and to share for the community management audience some of the latest thinking on Self-Sovereign Identity. Kaliya Young is many things: an advocate for open Internet […] The post Cohere: Podcast appeared first on Identity Woman.

I had the pleasure of talking with Bill Johnston who I met many years ago via Forum One and their online community work. It was fun to chat again and to share for the community management audience some of the latest thinking on Self-Sovereign Identity. Kaliya Young is many things: an advocate for open Internet […]

The post Cohere: Podcast appeared first on Identity Woman.

Monday, 22. November 2021

Simon Willison

Quoting Rasmus Lerdorf


htmlspecialchars was a very early function. Back when PHP had less than 100 functions and the function hashing mechanism was strlen(). In order to get a nice hash distribution of function names across the various function name lengths names were picked specifically to make them fit into a specific length bucket. This was circa late 1994 when PHP was a tool just for my own personal use and I wasn't too worried about not being able to remember the few function names.

Rasmus Lerdorf


Introduction to heredocs in Dockerfiles


Introduction to heredocs in Dockerfiles

This is a fantastic upgrade to Dockerfile syntax, enabled by BuildKit and a new frontend for executing the Dockerfile that can be specified with a #syntax= directive. I often like to create a standalone Dockerfile that works without needing other files from a directory, so being able to use <<EOF syntax to populate config files from inline blocks of code is really handy.

Via @mwarkentin


Damien Bod

Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

This article shows how an ASP.NET Core API can be setup to require certificates for authentication. The API is used to implement an Azure B2C API connector service. The API connector client uses a certificate to request profile data from the Azure App Service API implementation, which is validated using the certificate thumbprint. Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims […]

This article shows how an ASP.NET Core API can be setup to require certificates for authentication. The API is used to implement an Azure B2C API connector service. The API connector client uses a certificate to request profile data from the Azure App Service API implementation, which is validated using the certificate thumbprint.

Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims

Blogs in this series

Securing ASP.NET Core Razor Pages, Web APIs with Azure B2C external and Azure AD internal identities
Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider
Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core
Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

Setup Azure App Service

An Azure App Service was created which uses .NET and 64 bit configurations. The Azure App Service is configured to require incoming client certificates and forwards them to the application. With this configuration, any valid client certificate will be accepted, so the certificate still needs to be validated inside the application. You need to check that the correct client certificate is being used.

Implement the API with certificate authentication for deployment

The AddAuthentication call sets the default scheme to certificate authentication. The AddCertificate method adds the required configuration to validate the client certificates used with each request. We use a self-signed certificate for the authentication. If a valid certificate is used, the MyCertificateValidationService is used to validate that it is also the correct certificate.

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddSingleton<MyCertificateValidationService>();

builder.Services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        // https://docs.microsoft.com/en-us/aspnet/core/security/authentication/certauth
        options.AllowedCertificateTypes = CertificateTypes.SelfSigned;
        options.Events = new CertificateAuthenticationEvents
        {
            OnCertificateValidated = context =>
            {
                var validationService = context.HttpContext.RequestServices
                    .GetService<MyCertificateValidationService>();

                if (validationService != null && validationService.ValidateCertificate(context.ClientCertificate))
                {
                    var claims = new[]
                    {
                        new Claim(ClaimTypes.NameIdentifier, context.ClientCertificate.Subject,
                            ClaimValueTypes.String, context.Options.ClaimsIssuer),
                        new Claim(ClaimTypes.Name, context.ClientCertificate.Subject,
                            ClaimValueTypes.String, context.Options.ClaimsIssuer)
                    };

                    context.Principal = new ClaimsPrincipal(new ClaimsIdentity(claims, context.Scheme.Name));
                    context.Success();
                }
                else
                {
                    context.Fail("invalid cert");
                }

                return Task.CompletedTask;
            }
        };
    });

builder.Host.UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration
    .ReadFrom.Configuration(hostingContext.Configuration)
    .Enrich.FromLogContext()
    .MinimumLevel.Debug()
    .WriteTo.Console()
    .WriteTo.File(
        //$@"../certauth.txt",
        $@"D:\home\LogFiles\Application\{Environment.UserDomainName}.txt",
        fileSizeLimitBytes: 1_000_000,
        rollOnFileSizeLimit: true,
        shared: true,
        flushToDiskInterval: TimeSpan.FromSeconds(1)));

The middleware services are set up so that in development no certificate authentication is used and the requests are validated using basic authentication. If the environment is not development, certificate authentication is used and all API calls require authorization.

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

if (!app.Environment.IsDevelopment())
{
    app.UseAuthentication();
    app.UseAuthorization();
    app.MapControllers().RequireAuthorization();
}
else
{
    app.UseAuthorization();
    app.MapControllers();
}

app.Run();

The MyCertificateValidationService validates the certificate. This checks if the certificate used has the correct thumbprint and is the same as the certificate used in the client application, in this case the Azure B2C API connector.

public class MyCertificateValidationService
{
    private readonly ILogger<MyCertificateValidationService> _logger;

    public MyCertificateValidationService(ILogger<MyCertificateValidationService> logger)
    {
        _logger = logger;
    }

    public bool ValidateCertificate(X509Certificate2 clientCertificate)
    {
        return CheckIfThumbprintIsValid(clientCertificate);
    }

    private bool CheckIfThumbprintIsValid(X509Certificate2 clientCertificate)
    {
        var listOfValidThumbprints = new List<string>
        {
            // add thumbprints of your allowed clients
            "15D118271F9AE7855778A2E6A00A575341D3D904"
        };

        if (listOfValidThumbprints.Contains(clientCertificate.Thumbprint))
        {
            _logger.LogInformation($"Custom auth-success for certificate {clientCertificate.FriendlyName} {clientCertificate.Thumbprint}");
            return true;
        }

        _logger.LogWarning($"auth failed for certificate {clientCertificate.FriendlyName} {clientCertificate.Thumbprint}");
        return false;
    }
}

Setup Azure B2C API connector with certification authentication

The Azure B2C API connector is set up to use a certificate. You can create the certificate any way you want. I used the CertificateManager NuGet package to create an RSA certificate with a 3072-bit key, signed with SHA-512. The thumbprint of this certificate needs to be validated in the ASP.NET Core API application.

The Azure B2C API connector is added to the Azure B2C user flow. The user flow requires all the custom claims to be defined, and the values can be set in the API connector service. See the first post in this blog series for details.

Creating an RSA certificate with a 3072-bit key and SHA-512

You can create certificates using .NET Core and the CertificateManager NuGet package, which provides some helper methods for creating the X509 certificates as required.

class Program
{
    static CreateCertificates _cc;

    static void Main(string[] args)
    {
        var builder = new ConfigurationBuilder()
            .AddUserSecrets<Program>();
        var configuration = builder.Build();

        var sp = new ServiceCollection()
            .AddCertificateManager()
            .BuildServiceProvider();

        _cc = sp.GetService<CreateCertificates>();

        var rsaCert = CreateRsaCertificateSha512KeySize2048("localhost", 10);

        string password = configuration["certificateSecret"];
        var iec = sp.GetService<ImportExportCertificate>();

        var rsaCertPfxBytes = iec.ExportSelfSignedCertificatePfx(password, rsaCert);
        File.WriteAllBytes("cert_rsa512.pfx", rsaCertPfxBytes);

        Console.WriteLine("created");
    }

    public static X509Certificate2 CreateRsaCertificateSha512KeySize2048(string dnsName, int validityPeriodInYears)
    {
        var basicConstraints = new BasicConstraints
        {
            CertificateAuthority = false,
            HasPathLengthConstraint = false,
            PathLengthConstraint = 0,
            Critical = false
        };

        var subjectAlternativeName = new SubjectAlternativeName
        {
            DnsName = new List<string> { dnsName, }
        };

        var x509KeyUsageFlags = X509KeyUsageFlags.DigitalSignature;

        // only if certification authentication is used
        var enhancedKeyUsages = new OidCollection
        {
            OidLookup.ClientAuthentication,
            // OidLookup.ServerAuthentication
            // OidLookup.CodeSigning,
            // OidLookup.SecureEmail,
            // OidLookup.TimeStamping
        };

        var certificate = _cc.NewRsaSelfSignedCertificate(
            new DistinguishedName { CommonName = dnsName },
            basicConstraints,
            new ValidityPeriod
            {
                ValidFrom = DateTimeOffset.UtcNow,
                ValidTo = DateTimeOffset.UtcNow.AddYears(validityPeriodInYears)
            },
            subjectAlternativeName,
            enhancedKeyUsages,
            x509KeyUsageFlags,
            new RsaConfiguration
            {
                KeySize = 3072,
                HashAlgorithmName = HashAlgorithmName.SHA512
            });

        return certificate;
    }
}

Running the applications

I set up two user flows for running and testing the applications. One uses ngrok and local development with basic authentication. The second uses certificate authentication and the deployed Azure App Service. I published the API to the App Service and ran the UI application. When the user signs in, the API connector is used to get the extra custom claims from the deployed API, and these are returned.

Links:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-user-flow

https://docs.microsoft.com/en-us/azure/active-directory-b2c/

https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate/

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-customize-properties?pivots=dotnet-6-0

https://github.com/AzureAD/microsoft-identity-web/wiki

Securing Azure Functions using certificate authentication


Simon Willison

Weeknotes: Apache proxies in Docker containers, refactoring Datasette


Updates to six major projects this week, plus finally some concrete progress towards Datasette 1.0.

Fixing Datasette's proxy bugs

Now that Datasette has had its fourth birthday I've decided to really push towards hitting the 1.0 milestone. The key property of that release will be a stable JSON API, stable plugin hooks and a stable, documented context for custom templates. There's quite a lot of mostly unexciting work needed to get there.

As I work through the issues in that milestone I'm encountering some that I filed more than two years ago!

Two of those made it into the Datasette 0.59.3 bug fix release earlier this week.

The majority of the work in that release though related to Datasette's base_url feature, designed to help people who run Datasette behind a proxy.

base_url lets you run Datasette like this:

datasette --setting base_url=/prefix/ fixtures.db

When you do this, Datasette will change its URLs to start with that prefix - so the homepage will live at /prefix/, the database index page at /prefix/fixtures/, tables at /prefix/fixtures/facetable etc.

The reason you would want this is if you are running a larger website, and you intend to proxy traffic to /prefix/ to a separate Datasette instance.

The Datasette documentation includes suggested nginx and Apache configurations for doing exactly that.

This feature has been a magnet for bugs over the years! People keep finding new parts of the Datasette interface that fail to link to the correct pages when run in this mode.

The principal cause of these bugs is that I don't use Datasette in this way myself, so I wasn't testing it nearly as thoroughly as it needed to be.

So the first step in finally solving these issues once and for all was to get my own instance of Datasette up and running behind an Apache proxy.

Since I like to deploy live demos to Cloud Run, I decided to try and run Apache and Datasette in the same container. This took a lot of figuring out. You can follow my progress on this in these two issue threads:

#1521: Docker configuration for exercising Datasette behind Apache mod_proxy
#1522: Deploy a live instance of demos/apache-proxy

The short version: I got it working! My Docker implementation now lives in the demos/apache-proxy directory and the live demo itself is deployed to datasette-apache-proxy-demo.fly.dev/prefix/.

(I ended up deploying it to Fly after running into a bug when deployed to Cloud Run that I couldn't replicate on my own laptop.)

My final implementation uses a Debian base container with Supervisord to manage the two processes.

With a working live environment, I was finally able to track down the root cause of the bugs. My notes on #1519: base_url is omitted in JSON and CSV views document how I found and solved them, and updated the associated test to hopefully avoid them ever coming back in the future.

The big Datasette table refactor

The single most complicated part of the Datasette codebase is the code behind the table view - the page that lets you browse, facet, search, filter and paginate through the contents of a table (this page here).

It's got very thorough tests, but the actual implementation is mostly a 600 line class method.

It was already difficult to work with, but the changes I want to make for Datasette 1.0 have proven too much for it. I need to refactor.

Apart from making that view easier to change and maintain, a major goal I have is for it to support a much more flexible JSON syntax. I want the JSON version to default to just returning minimal information about the table, then allow ?_extra=x parameters to opt into additional information - like facets, suggested facets, full counts, SQL schema information and so on.

This means I want to break up that 600 line method into a bunch of separate methods, each of which can be opted-in-to by the calling code.

The HTML interface should then build on top of the JSON, requesting the extras that it knows it will need and passing the resulting data through to the template. This helps solve the challenge of having a stable template context that I can document in advance of Datasette 1.0

I've been putting this off for over a year now, because it's a lot of work. But no longer! This week I finally started to get stuck in.

I don't know if I'll stick with it, but my initial attempt at this is a little unconventional. Inspired by how pytest fixtures work I'm experimenting with a form of dependency injection, in a new (very alpha) library I've released called asyncinject.

The key idea behind asyncinject is to provide a way for class methods to indicate their dependencies as named parameters, in the same way as pytest fixtures do.

When you call a method, the code can spot which dependencies have not yet been resolved and execute them before executing the method.
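A toy sketch of that idea (my own illustration, not the actual asyncinject API): a method names its dependencies as parameters, and anything unresolved is looked up by name and executed first, concurrently, since everything is an async def.

import asyncio
import inspect

class Resolver:
    async def resolve(self, method):
        # Find parameters whose names match other async methods on this object
        # (single level only -- the real library handles nested dependencies).
        names = [n for n in inspect.signature(method).parameters
                 if n != "self" and hasattr(self, n)]
        # Execute all unresolved dependencies concurrently.
        values = await asyncio.gather(*(getattr(self, n)() for n in names))
        return await method(**dict(zip(names, values)))

class TablePage(Resolver):
    async def rows(self):
        await asyncio.sleep(0.1)      # stand-in for the paginated SELECT
        return ["row1", "row2"]

    async def count(self):
        await asyncio.sleep(0.1)      # stand-in for count(*)
        return 2

    async def page(self, rows, count):
        return {"rows": rows, "count": count}

page = TablePage()
print(asyncio.run(page.resolve(page.page)))   # both dependencies run in ~0.1s total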

Crucially, since they are all async def methods they can be executed in parallel. I'm cautiously excited about this - Datasette has a bunch of opportunities for parallel queries - fetching a single page of table rows, calculating a count(*) for the entire table, executing requested facets and calculating suggested facets are all queries that could potentially run in parallel rather than in serial.

What about the GIL, you might ask? Datasette's database queries are handled by the sqlite3 module, and that module releases the GIL once it gets into SQLite C code. So theoretically I should be able to use more than one core for this all.
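Here is a minimal sketch of that kind of parallelism (my own example, not Datasette code): run two sqlite3 queries in threads and gather them from async code. The table name and database file are assumptions for illustration.

import asyncio
import sqlite3

def run_query(db_path: str, sql: str):
    # One connection per thread; sqlite3 releases the GIL while the
    # query executes inside SQLite's C code, so these really overlap.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def table_page(db_path: str):
    rows, count = await asyncio.gather(
        asyncio.to_thread(run_query, db_path, "select * from facetable limit 100"),
        asyncio.to_thread(run_query, db_path, "select count(*) from facetable"),
    )
    return {"rows": rows, "count": count[0][0]}

# asyncio.run(table_page("fixtures.db"))  # assumes a fixtures.db containing a facetable table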

The asyncinject README has more details, including code examples. This may turn out to be a terrible idea! But it's really fun to explore, and I'll be able to tell for sure if this is a useful, maintainable and performant approach once I have Datasette's table view running on top of it.

git-history and sqlite-utils

I made some big improvements to my git-history tool, which automates the process of turning a JSON (or other) file that has been version-tracked in a GitHub repository (see Git scraping) into a SQLite database that can be used to explore changes to it over time.

The biggest was a major change to the database schema. Previously, the tool used full Git SHA hashes as foreign keys in the largest table.

The problem here is that a SHA hash string is 40 characters long, and if they are being used as a foreign key that's a LOT of extra weight added to the largest table.

sqlite-utils has a table.lookup() method which is designed to make creating "lookup" tables - where a string is stored in a unique column but an integer ID can be used for things like foreign keys - as easy as possible.

That method was previously quite limited, but in sqlite-utils 3.18 and 3.19 - both released this week - I expanded it to cover the more advanced needs of my git-history tool.
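The basic shape of lookup() looks something like this (a sketch based on the sqlite-utils documentation; the table and column names here are made up):

import sqlite_utils

db = sqlite_utils.Database("history.db")

# lookup() ensures a "commits" table exists with a unique index on "hash",
# inserts the row if it is new, and returns the integer primary key either way.
commit_id = db["commits"].lookup({"hash": "0123456789abcdef0123456789abcdef01234567"})

# Store the small integer id in the big table instead of the 40-character string.
db["item_versions"].insert({"_commit": commit_id, "item_id": 1, "price": 3.99})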

The great thing about building stuff on top of your own libraries is that you can discover new features that you need along the way - and then ship them promptly without them blocking your progress!

Some other highlights

s3-credentials 0.6 adds a --dry-run option that you can use to show what the tool would do without making any actual changes to your AWS account. I found myself wanting this while continuing to work on the ability to specify a folder prefix within S3 that the bucket credentials should be limited to.

datasette-publish-vercel 0.12 applies some pull requests from Romain Clement that I had left unreviewed for far too long, and adds the ability to customize the vercel.json file used for the deployment - useful for things like setting up additional custom redirects.

datasette-graphql 2.0 updates that plugin to Graphene 3.0, a major update to that library. I had to break backwards compatibility in very minor ways, hence the 2.0 version number.

csvs-to-sqlite 1.3 is the first release of that tool in just over a year. William Rowell contributed a new feature that allows you to populate "fixed" database columns on your imported records, see PR #81 for details.

TIL this week

Planning parallel downloads with TopologicalSorter
Using cog to update --help in a Markdown README file
Using build-arg variables with Cloud Run deployments
Assigning a custom subdomain to a Fly app

Releases this week

datasette-publish-vercel: 0.12 - (18 releases total) - 2021-11-22
Datasette plugin for publishing data using Vercel

git-history: 0.4 - (6 releases total) - 2021-11-21
Tools for analyzing Git history using SQLite

sqlite-utils: 3.19 - (90 releases total) - 2021-11-21
Python CLI utility and library for manipulating SQLite databases

datasette: 0.59.3 - (101 releases total) - 2021-11-20
An open source multi-tool for exploring and publishing data

datasette-redirect-to-https: 0.1 - 2021-11-20
Datasette plugin that redirects all non-https requests to https

s3-credentials: 0.6 - (6 releases total) - 2021-11-18
A tool for creating credentials for accessing S3 buckets

csvs-to-sqlite: 1.3 - (13 releases total) - 2021-11-18
Convert CSV files into a SQLite database

datasette-graphql: 2.0 - (32 releases total) - 2021-11-17
Datasette plugin providing an automatic GraphQL API for your SQLite databases

asyncinject: 0.2a0 - (2 releases total) - 2021-11-17
Run async workflows using pytest-fixtures-style dependency injection

Hurl

Hurl is "a command line tool that runs HTTP requests defined in a simple plain text format" - written in Rust on top of curl, it lets you run HTTP requests and then execute assertions against the response, defined using JSONPath or XPath for HTML. It can even assert that responses were returned within a specified duration.

Via @humphd

Thursday, 18. November 2021

Simon Willison

Quoting Robin Sloan

Many Web3 boosters see themselves as disruptors, but “tokenize all the things” is nothing if not an obedient continuation of “market-ize all the things”, the campaign started in the 1970s, hugely successful, ongoing. I think the World Wide Web was the real rupture — “Where … is the money?”—which Web 2.0 smoothed over and Web3 now attempts to seal totally.

Robin Sloan


Cookiecutter Data Science

Some really solid thinking in this documentation for the DrivenData cookiecutter template. They emphasize designing data science projects for repeatability, such that just the src/ and data/ folders can be used to recreate all of the other analysis from scratch. I like the suggestion to give each project a dedicated S3 bucket for keeping immutable copies of the original raw data that might be too large for GitHub.

Via Paige Bailey


Ally Medina - Blockchain Advocacy

Initial Policy Offerings

A Reader’s Guide

How should crypto be regulated? And by whom? These are the big questions the industry is grappling with in the wake of the infrastructure bill being signed with the haphazardly expanded definition of a broker dealer for tax reporting provisions. So now the industry is *atwitter* with ideas about where to go from here.

Three large companies have all come out with policy suggestions: FTX, Coinbase, and A16z. While these proposals differ in approach, they all seek to address a few central policy questions. I’ll break down the proposals based on key subject areas:

Consumer Protection

A16z: Suggests a framework for DAOs to provide disclosures.

Coinbase: Sets a goal to “Enhance transparency through appropriate disclosure requirements. Protect against fraud and market manipulation”

FTX: Similarly suggests a framework for “disclosure and transparency standards”

All three of these make ample mention of consumer protections that seem to begin and end at disclosures. Regulators might want something with a little more teeth. FTX provides a more robust outline for combating fraud, suggesting the use of on-chain analytics tools. This is a smart and concrete suggestion of how to improve existing regulation that relies on SARs (suspicious activity reports) filed AFTER suspicious activity.

Exactly how Decentralized?

A16z: Seeks to create a definition and entity status for DAOs, which would ostensibly require a different kind of regulation than more custodial services.

Coinbase: Platforms and services that do not custody or otherwise control the assets of a customer — including miners, stakers and developers — would need to be treated differently.

FTX: Doesn’t mention decentralization.

These are really varied approaches. I’m not criticizing FTX here; they are focusing on consumer protections and combating fraud, which are good things to highlight. However, the core regulatory issue is: can we differentiate between decentralized and centralized products, and does that create a fundamental conflict with existing law? A16z’s approach is novel: a new designation without a new agency.

The Devil You Know vs. the Devil You Don’t

A16z: Suggests the Government Accountability Office “assess the current state of regulatory jurisdiction over cryptocurrency, digital assets, and decentralized technology, and to compare the costs and benefits of harmonizing jurisdiction among agencies against vesting supervision and oversight with a federally chartered self-regulatory organization or one or more nonprofit corporations.”

Coinbase: Argues that this technology needs a new regulatory agency and that all digital assets should be under a single regulatory authority. Also suggests coordination with a self-regulatory organization.

FTX: Doesn’t step into that morass.

Coinbase has the most aggressive position here. I personally am not convinced of the need for a new regulatory agency. We haven’t tried it the old-fashioned way yet, where existing agencies offer clarity about what would bring a digital asset into their jurisdiction and what would exclude it. Creating a new agency is a slow and expensive process. And then that agency would need to justify its existence by aggressively cracking down. It’s a bit like creating a hammer and then inevitably complaining that it sees everything as a nail.

How to achieve regulatory change in the US for crypto:

1. Stop tweeting aggressively at the people who regulate you. Negative points if you are a billionaire complaining about taxes.
2. Spend some time developing relationships with policymakers and working collaboratively with communities you want to support. Lotta talk of unbanked communities - any stats on how they are being served by this tech? (Seriously, please share.)
3. Consider looking at who is already doing the work you want to accelerate and consider working with them/learning from and supporting existing efforts rather than whipping out your proposal and demanding attention. Examples: CoinCenter, Blockchain Association. At the state level: Blockchain Advocacy Coalition of course, Cascadia Blockchain Council, Texas Blockchain Council, etc.

A16: https://int.nyt.com/data/documenttools/2021-09-27-andreessen-horowitz-senate-banking-proposals/ec055eb0ce534033/full.pdf#page=9

Coinbase: https://blog.coinbase.com/digital-asset-policy-proposal-safeguarding-americas-financial-leadership-ce569c27d86c

FTX: https://blog.ftx.com/policy/policy-goals-market-regulation/

Wednesday, 17. November 2021

Phil Windley's Technometria

NFTs, Verifiable Credentials, and Picos

Summary: The hype over NFTs and collectibles is blinding us to their true usefulness as trustworthy persistent data objects. How do they sit in the landscape with verifiable credentials and picos?

Listening to this Reality 2.0 podcast about NFTs with Doc Searls, Katherine Druckman, and their guest Greg Bledsoe got me thinking about NFTs. I first wrote about NFTs in 2018 regarding what was perhaps the first popular NFT: Cryptokitties. I bought a few and played with them, examined the contract code, and was thinking about how they might enable self-sovereignty, or not. I wrote:

[E]ach kitty has some interesting properties:

- Each Kitty is distinguishable from all the rest and has a unique identity and existence
- Each kitty is owned by someone. Specifically, it is controlled by whoever has the private keys associated with the address that the kitty is tied to.

This is a good description of the properties of NFTs in general. Notice that nothing here says that NFTs have to be about art, or collectibles, although that's the primary use case right now that's generating so much hype. Cryptokitties were more interesting than most of the NFT use cases right now because the smart contract allowed them to be remixed to produce new kitties (for a fee).

Suppose I rewrote the quote from my post on Cryptokitties like this:

[E]ach verifiable credential (VC) has some interesting properties:

- Each VC is distinguishable from all the rest and has a unique identity and existence
- Each VC is owned by someone. Specifically, it is controlled by whoever has the private keys associated with the address that the VC was issued to.

Interesting, no? So, if these properties are true for both NFTs and verifiable credentials, what's the difference? The primary difference is that right now, we envision VC issuers to be institutions, like the DMV, your bank, or employer. And institutions are centralized. In contrast, because NFTs are created using a smart contract on the Ethereum blockchain, we think of them as decentralized. But, not so fast. As I noted in my post on Cryptokitties, you can't assume an NFT is decentralized without examining the smart contract.

There is one problem with CryptoKitties as a model of self-sovereignty: the CryptoKitty smart contract has a "pause" function that can be executed by certain addresses. This is probably a protection against bugs—no one wants to be the next theDAO—but it does provide someone besides the owner with a way to shut it all down.

I have no idea who that someone is and can't hold them responsible for their behavior—I'd guess it's someone connected with CryptoKitties. Whoever has control of these special addresses could shut down the entire enterprise. I do not believe, based on the contract structure, that they could take away individual kitties; it's an all-or-nothing proposition. Since they charged money for setting this up, there's likely some contract law that could be used for recourse.

So, without looking at the code for the smart contract, it's hard to say that a particular NFT is decentralized or not. They may be just as centralized as your bank1.

To examine this more closely, let's look at a property title, like a car title, as an example. The DMV could decide to issue car titles as verifiable credentials tomorrow. And the infrastructure to support it is all there: well-supported open source code, companies to provide issuing software and wallets, and specifications and protocols for interoperability2. Nothing has to change politically for that to happen.
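For a rough idea of what that might look like, here is a sketch of a car-title credential shaped after the W3C Verifiable Credentials data model, written as a Python dict; the credential type, DIDs and subject fields are invented for illustration:

# Sketch of what a DMV-issued car-title credential could look like,
# following the W3C Verifiable Credentials data model. The credential
# type, DIDs and subject fields are invented for this example.
car_title_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "VehicleTitleCredential"],
    "issuer": "did:example:dmv",
    "issuanceDate": "2021-11-17T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-of-the-title",
        "vehicleIdentificationNumber": "1HGBH41JXMN109186",
        "make": "Honda",
        "model": "Civic",
    },
    # In a real credential this is a cryptographic proof created by the issuer
    "proof": {"type": "Ed25519Signature2018"},
}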

The DMV could also issue car titles as NFTs. With an NFT, I'd prove I own the car by exercising control over the private key that controls the NFT representing the car title. The state might do this to provide more automation for car transfers. Here too, they'd have to find an infrastructure provider to help them, ensure they had a usable wallet to store the title, and interact with the smart contract. I don't know how interoperable this would be.

One of the key features of NFTs is that they can be transferred between owners or controllers. Verifiable credentials, because of their core use cases, are not designed to be transferred; rather, they are revoked and reissued.

Suppose I want to sell the car. With a verifiable credential, the state would still be there, revoking the VC representing my title to the car and issuing a new VC to the buyer when the title transfers. The record of who owns what is still the database at the DMV. With NFTs we can get rid of that database. So, selling my car now becomes something that might happen in a decentralized way, without the state as an intermediary. Note that they could do this and still retain a regulatory interest in car titling if they control the smart contract.

But, the real value of issuing a car title as an NFT would be if it were done using a smart contract in a way that decentralized car titles. If you imagine a world of self-driving cars that own and sell themselves, then that's interesting. You could also imagine that we want to remove the DMV from the business of titling cars altogether. That's a big political lift, but if you dream of a world with much smaller government, then NFT-based car titles might be a way to do that. But I think it's a ways off. So, we could use NFTs for car titles, but right now there's not much point besides the automation.

You can also imagine a much more decentralized future for verifiable credentials. There's no reason a smart contract couldn't issue and revoke verifiable credentials according to rules embodied in the code. Sphereon has an integration between verifiable credentials and the Digital Asset Modeling Language (DAML), a smart contract language. Again, how decentralized the application is depends on the smart contract, but decentralized, institution-independent verifiable credentials are possible.

A decade ago, Lucas, Ballay, and McManus wrote Trillions: Thriving in the Emerging Information Ecology. One of the ideas they talked about was something they called a persistent data object (PDO). I was intrigued by persistent data objects because of the work we'd been doing at Kynetx on personal clouds. In applying the idea of PDOs, I quickly realized that what we were doing was much more than data because our persistent data objects also encapsulated code, and the name persistent compute objects, or picos, was born.

An NFT is one possible realization of Trillions' PDOs. So are verifiable credentials. Both are persistent containers for data. They are both capable of inspiring confidence that the data they contain has fidelity and, perhaps, a trustworthy provenance. A pico is an agent. Picos can:

- have a wallet that holds and exchanges NFTs and credentials according to the rules encapsulated in the pico.
- be programmed to interact with the smart contract for an NFT to perform the legal operations.
- be programmed to receive, hold, and present verifiable credentials according to the proper DIDComm protocols.3

Relationship between NFTs, verifiable credentials, and picos (click to enlarge)

NFTs are currently in their awkward, Pets.com stage. Way too much hype and many myopic use cases. But I think they'll grow to have valuable uses in creating more decentralized, trustworthy data objects. If you listen to the Reality 2.0 podcast starting at about 56 minutes, Greg, Katherine, and Doc get into some of those. Greg's starting with games—a good place, I think. Supply chain is another promising area. If you need decentralized, automated, trustworthy, persistent data containers, then NFTs fit the bill.

People who live in a country with a strong commitment to the rule of law might ask why decentralizing things like titles and supply chains is a good idea. But that's not everyone's reality. Blockchains and NFTs can inspire confidence in systems that would otherwise be too costly or untrustworthy. Picos are a great way to create distributed systems of entity-oriented compute nodes that are capable of using PDOs.

Notes

1. Note that I'm not saying that Cryptokitties is as centralized as your bank. Just that without looking at the code, you can't tell.
2. Yeah, I know that interop is still a work in progress. But at least it's in progress, not ignored.
3. These are not capabilities that picos presently have, but they do support DIDComm messaging. Want to help add these to picos? Contact me.

Photo Credit: Colorful Shipping Containers from frank mckenna (CC0)

Tags: verifiable+credentials non+fungible+token ssi identity picos


Identity Woman

COVID & Travel Resources for Phocuswright

I’m speaking today at the Phocuswright conference and this post is sharing key resources for folks who are watching/attending who want to get engaged with our work. The Covid Credentials Initiative where I am the Ecosystems Director is the place to start. We have a vibrant global learning community striving to solve challenge of common […]

The post COVID & Travel Resources for Phocuswright appeared first on Identity Woman.

Monday, 15. November 2021

reb00ted

Social Media Architectures and Their Consequences

This is an outcome of a session I ran at last week’s “Logging Off Facebook – What comes next?" unconference. We explored what technical architecture choices have which technical, or non-technical consequences for social media products.

This table was created during the session. It is not complete, and personally I disagree with a few points, but it’s still worthwhile publishing IMHO.

So here you are:

| | Facebook-style ("centralized") | Mastodon-style ("federated") | IndieWeb-style ("distributed/P2P") | Blockchain-style |
| Moderation | Uniform, consistent moderation policy for all users | Locally different moderation policies, but consistent for all users on a node | Every user decides on their own | Posit - algorithmic smart contract that drives consensus |
| Censorship | Easy; global | One node at a time | Full censorship not viable | Full censorship not viable |
| Software upgrades | Fast, uncomplicated for all users | Inconsistent across the network | Inconsistent across the network | Consistent, but large synchronization / management costs |
| Money | Centralized; most accumulated by "Facebook" | Donations (BuyMeACoffee, LiberaPay); patronage (Patreon) | | Paid to/earned by network nodes; value fluctuates due to speculation |
| Authentication | Centralized | Decentralized (e.g. Solid, OpenID, SSI) | | Decentralized (e.g. wallets) |
| Advertising | Decided by "Facebook" | Not usually | Determined by user | |
| Governance | Centralized, unaccountable | Several components: protocol-level, code-level and instance-level | Several components: protocol-level, code-level and instance-level | |
| Search & Discovery | | | | |
| Group formation | | | | |
| Regulation | | | | |
| Ownership | Totalitarian | | Individual | |

Phil Windley's Technometria

Zero Knowledge Proofs

Summary: Zero-knowledge proofs are a powerful cryptographic technique at the heart of self-sovereign identity (SSI). This post should help you understand what they are and how they can be used.

Suppose Peggy needs to prove to Victor that she is in possession of a secret without revealing the secret. Can she do so in a way that convinces Victor that she really does know the secret? This is the question at the heart of one of the most powerful cryptographic processes we can employ in identity systems: zero-knowledge proofs (ZKPs). Suppose for example that Peggy has a digital driver's license and wants to prove to Victor, the bartender, that she's over 21 without just handing over her driver's license or even showing him her birthdate. ZKPs allow Peggy to prove her driver's license says she's at least 21 in a way that convinces Victor without Peggy having to reveal anything else (i.e., there's zero excess knowledge).

This problem was first explored by MIT researchers Shafi Goldwasser, Silvio Micali and Charles Rackoff in the 1980s as a way of combatting information leakage. The goal is to reduce the amount of extra information the verifier, Victor, can learn about the prover, Peggy.

One way to understand how ZKPs work is the story of the Cave of Alibaba, first published by cryptographers Quisquater, Guillou, and Berson1. The following diagram provides an illustration.

Peggy and Victor in Alibaba's Cave (click to enlarge)

The Cave of Alibaba has two passages, labeled A and B, that split off a single passageway connected to the entrance. Peggy possesses a secret code that allows her to unlock a door connecting A and B. Victor wants to buy the code but won't pay until he's sure Peggy knows it. Peggy won't share it with Victor until he pays.

The algorithm for Peggy proving she knows the code proceeds as follows:

1. Victor stands outside the cave while Peggy enters and selects one of the passages. Victor is not allowed to see which path Peggy takes.
2. Victor enters the cave and calls out "A" or "B" at random.
3. Peggy emerges from the correct passageway because she can easily unlock the door regardless of which choice she made when entering.
4. Of course, Peggy could have just gotten lucky and guessed right, so Peggy and Victor repeat the experiment many times.

If Peggy can always come back by whichever passageway Victor selects, then there is an increasing probability that Peggy really knows the code. After 20 tries, there's less than one chance in a million that Peggy is simply guessing which letter Victor will call. This constitutes a probabilistic proof that Peggy knows the secret.
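That one-in-a-million figure is just the chance of guessing every challenge correctly, which is easy to check:

# Probability that a cheating Peggy guesses Victor's challenge
# correctly in every one of n rounds
for n in (10, 20, 30):
    print(n, 0.5 ** n)

# 20 rounds: 0.5 ** 20 = 1/1,048,576, a little under one in a million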

This algorithm not only allows Peggy to convince Victor she knows the code, but it does it in a way that ensures Victor can't convince anyone else Peggy knows the code. Suppose Victor records the entire transaction. The only thing an observer sees is Victor calling out letters and Peggy emerging from the right tunnel. The observer can't be sure Victor and Peggy didn't agree on a sequence of letters in advance to fool observers. Note that this property relies on the algorithm using a good pseudo-random number generator with a high-entropy seed so that Peggy and third-party observers can't predict Victor's choices.

Thus, while Peggy cannot deny to Victor that she knows the secret, she can deny that she knows the secret to other third parties. This ensures that anything she proves to Victor stays between them and Victor cannot leak it—at least in a cryptographic way that proves it came from Peggy. Peggy retains control of both her secret and the fact that she knows it.

When we say "zero knowledge" and talk about Victor learning nothing beyond the proposition in question, that's not perfectly true. In the Cave of Alibaba, Peggy proves in zero knowledge that she knows the secret. But there are many other things that Victor learns about Peggy that ZKPs can do nothing about. For example, Victor knows that Peggy can hear him, speak his language, and walk, and that she is cooperative. He also might learn things about the cave, like approximately how long it takes to unlock the door. Peggy learns similar things about Victor. So, the reality is that the proof is approximately zero knowledge, not perfectly zero knowledge.

ZKP Systems

The example of Alibaba's Cave is a very specific use of ZKPs, what's called a zero-knowledge proof of knowledge. Peggy is proving she knows (or possesses something). More generally, Peggy might want to prove many facts to Victor. These could include propositional phrases or even values. ZKPs can do that as well.

To understand how we can prove propositions in zero knowledge, consider a different example, sometimes called the Socialist Millionaire Problem. Suppose Peggy and Victor want to know if they're being paid a fair wage. Specifically, they want to know whether they are paid the same amount, but don't want to disclose their specific hourly rate to each other or even a trusted third party. In this instance, Peggy isn't proving she knows a secret, rather, she's proving an equality (or inequality) proposition.

For simplicity, assume that Peggy and Victor are being paid one of $10, $20, $30, or $40 per hour. The algorithm works like this:

1. Peggy buys four lock boxes and labels them $10, $20, $30, and $40. She throws away the keys to every box except the one labeled with her wage.
2. Peggy gives all the locked boxes to Victor who privately puts a slip of paper with a "+" into the slot at the top of the box labeled with his salary. He puts a slip with a "-" in all the other boxes.
3. Victor gives the boxes back to Peggy who uses her key in private to open the box with her salary on it. If she finds a "+" then they make the same amount. Otherwise, they make a different amount. She can use this to prove the fact to Victor.

This is called an oblivious transfer and proves the proposition VictorSalary = PeggySalary true or false in zero knowledge (i.e., without revealing any other information).
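A toy Python simulation of the lockbox exchange makes the information flow easy to see; there is no cryptography here, and it assumes both parties follow the protocol honestly:

import secrets

WAGES = (10, 20, 30, 40)


def socialist_millionaire(peggy_wage, victor_wage):
    # Peggy keeps only the key for the box labelled with her wage
    peggy_key = peggy_wage
    # Victor drops "+" into the box for his wage and "-" into the others
    boxes = {wage: ("+" if wage == victor_wage else "-") for wage in WAGES}
    # Peggy can only open the box she kept a key for
    return boxes[peggy_key] == "+"


peggy = secrets.choice(WAGES)
victor = secrets.choice(WAGES)
# The protocol answers "equal or not?" and nothing more
print(peggy, victor, socialist_millionaire(peggy, victor) == (peggy == victor))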

For this to work, Peggy and Victor must trust that the other will be forthcoming and state their real salary. Victor needs to trust that Peggy will throw away the three other keys. Peggy must trust that Victor will put only one slip with a "+" on it in the boxes.

Just like digital certificates need a PKI to establish confidence beyond what would be possible with self-issued certificates alone, ZKPs are more powerful in a system that allows Peggy and Victor to prove facts from things others say about them, not just what they say about themselves. For example, rather than Peggy and Victor self-asserting their salary, suppose they could rely on a signed document from the HR department in making their assertion so that both know that the other is stating their true salary. Verifiable Credentials provide a system for using ZKPs to prove many different facts alone or in concert, in ways that give confidence in the method and trust in the data.

Non-Interactive ZKPs

In the previous examples, Peggy was able to prove things to Victor through a series of interactions. For ZKPs to be practical, interactions between the prover and the verifier should be minimal. Fortunately, a technique called a SNARK allows for non-interactive zero-knowledge proofs.

SNARKs have the following properties (from whence they derive their name):

- Succinct: the sizes of the messages are small compared to the length of the actual proof.
- Non-interactive: other than some setup, the prover sends only one message to the verifier.
- ARguments: this is really an argument that something is correct, not a proof as we understand it mathematically. Specifically, the prover theoretically could prove false statements given enough computational power. So, SNARKs are "computationally sound" rather than "perfectly sound".
- of Knowledge: the prover knows the fact in question.

You'll typically see "zk" (for zero-knowledge) tacked on the front to indicate that during this process, the verifier learns nothing other than the facts being proved.

The mathematics underlying zkSNARKs involves homomorphic computation over high-degree polynomials. But we can understand how zkSNARKs work without knowing the underlying mathematics that ensures that they're sound. If you'd like more details of the mathematics, I recommend Christian Reitwiessner's "zkSNARKs in a Nutshell".

As a simple example, suppose Victor is given a sha256 hash, H, of some value. Peggy wants to prove that she knows a value s such that sha256(s) == H without revealing s to Victor. We can define a function C that captures the relationship:

C(x, w) = ( sha256(w) == x )

So, C(H, s) == true, while other values for w will return false.
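The relation C itself is easy to write down in ordinary code - here with Python's hashlib and a made-up secret; the hard part, which the zkSNARK machinery below supplies, is proving knowledge of w without sending it:

import hashlib


def C(x: str, w: str) -> bool:
    """True when w is a preimage of the hex digest x."""
    return hashlib.sha256(w.encode()).hexdigest() == x


s = "my secret value"                          # Peggy's secret (made up)
H = hashlib.sha256(s.encode()).hexdigest()     # the hash Victor is given

assert C(H, s) is True
assert C(H, "some other value") is False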

Computing a zkSNARK requires three functions G, P, and V. G is the key generator that takes a secret parameter called lambda and the function C and generates two public keys, the proving key pk and the verification key vk. They need only be generated once for a given function C. The parameter lambda must be destroyed after this step since it is not needed again and anyone who has it can generate fake proofs.

The prover function P takes as input the proving key pk, a public input x, and a private (secret) witness w. The result of executing P(pk,x,w) is a proof, prf, that the prover knows a value for w that satisfies C.

The verifier function V computes V(vk, x, prf) which is true if the proof prf is correct and false otherwise.

Returning to Peggy and Victor, Victor chooses a function C representing what he wants Peggy to prove, creates a random number lambda, and runs G to generate the proving and verification keys:

(pk, vk) = G(C, lambda)

Peggy must not learn the value of lambda. Victor shares C, pk, and vk with Peggy.

Peggy wants to prove she knows the value s that satisfies C for x = H. She runs the proving function P using these values as inputs:

prf = P(pk, H, s)

Peggy presents the proof prf to Victor who runs the verification function:

V(vk, H, prf)

If the result is true, then Victor can be assured that Peggy knows the value s.

The function C does not need to be limited to a hash as we did in this example. Within limits of the underlying mathematics, C can be quite complicated and involve any number of values that Victor would like Peggy to prove, all at one time.

Notes

1. Quisquater, Jean-Jacques; Guillou, Louis C.; Berson, Thomas A. (1990). How to Explain Zero-Knowledge Protocols to Your Children (PDF). Advances in Cryptology – CRYPTO '89: Proceedings. Lecture Notes in Computer Science. 435. pp. 628–631. doi:10.1007/0-387-34805-0_60. ISBN 978-0-387-97317-3.

Photo Credit: Under 25? Please be prepared to show proof of age when buying alcohol from Gordon Joly (CC BY-SA 2.0)

Tags: identity cryptography verifiable+credentials ssi


Damien Bod

Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core

This post shows how to implement an ASP.NET Core Razor Page application which authenticates using Azure B2C and uses custom claims implemented using the Azure B2C API connector. The claims provider is implemented using an ASP.NET Core API application and the Azure API connector requests the data from this API. The Azure API connector adds the claims after an Azure B2C sign in flow or whatever settings you configured in the Azure B2C user flow.

Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims

Blogs in this series

- Securing ASP.NET Core Razor Pages, Web APIs with Azure B2C external and Azure AD internal identities
- Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider
- Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core
- Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

Setup the Azure B2C App Registration

An Azure App registration is set up for the ASP.NET Core Razor Page application. A client secret is used to authenticate the client. The redirect URI is added for the app. This is a standard implementation.

Setup the API connector

The API connector is set up to add the extra claims after a sign-in. This defines the API endpoint and the authentication method. Only Basic or certificate authentication is possible for this API service, and neither is ideal for implementing and using this service to add extra claims to the identity. I started ngrok from the command line and used the URL it produced to configure the Azure B2C API connector. Maybe two separate connectors could be set up for a solution: one like this for development and a second one with the Azure App Service host address and certificate authentication.

Azure B2C user attribute

The custom claims are added to the Azure B2C user attributes. The custom claims can be added as required.

Setup the Azure B2C user flow

The Azure B2C user flow is configured to use the API connector. This flow adds the application claims to the token which it receives from the API call used in the API connector.

The custom claims are then added using the application claims blade. This is required if the custom claims are to be returned in the token.

I also added the custom claims to the Azure B2C user flow user attributes.

Azure B2C is now set up to use the custom claims, and the data for these claims will be set using the API connector service.

ASP.NET Core Razor Page

The ASP.NET Core Razor Page uses Microsoft.Identity.Web to authenticate using Azure B2C. This is a standard setup for a B2C user flow.

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

builder.Services.AddAuthorization(options =>
{
    options.FallbackPolicy = options.DefaultPolicy;
});

builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

var app = builder.Build();

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

The main difference between an Azure B2C user flow and an Azure AD authentication is the configuration. The SignUpSignInPolicyId is set to match the configured Azure B2C user flow, and the Instance uses the b2clogin.com domain, unlike the AAD configuration definition.

"AzureAdB2C": { "Instance": "https://b2cdamienbod.b2clogin.com", "ClientId": "ab393e93-e762-4108-a3f5-326cf8e3874b", "Domain": "b2cdamienbod.onmicrosoft.com", "SignUpSignInPolicyId": "B2C_1_ExtraClaims", "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath": "/signout-callback-oidc" //"ClientSecret": "--in-user-settings--" },

The index Razor page returns the claims and displays the values in the UI.

public class IndexModel : PageModel
{
    [BindProperty]
    public IEnumerable<Claim> Claims { get; set; } = Enumerable.Empty<Claim>();

    public void OnGet()
    {
        Claims = User.Claims;
    }
}

This is all the end user application requires, there is no special setup here.

ASP.NET Core API connector implementation

The API implemented for the Azure API connector uses an HTTP POST. Basic authentication is used to validate the request, as well as the client ID, which needs to match the configured App registration. This is weak authentication and should not be used in production, especially since the API provides sensitive PII data. If the request provides the correct credentials and the correct client ID, the data is returned for the email. In this demo, the email is returned in the custom claim. Normally the data would be returned from some data store.

[HttpPost]
public async Task<IActionResult> PostAsync()
{
    // Check HTTP basic authorization
    if (!IsAuthorized(Request))
    {
        _logger.LogWarning("HTTP basic authentication validation failed.");
        return Unauthorized();
    }

    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var requestConnector = JsonSerializer.Deserialize<RequestConnector>(content);

    // If input data is null, show block page
    if (requestConnector == null)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "There was a problem with your request."));
    }

    string clientId = _configuration["AzureAdB2C:ClientId"];
    if (!clientId.Equals(requestConnector.ClientId))
    {
        _logger.LogWarning("HTTP clientId is not authorized.");
        return Unauthorized();
    }

    // If email claim not found, show block page. Email is required and sent by default.
    if (requestConnector.Email == null || requestConnector.Email == "" || requestConnector.Email.Contains("@") == false)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "Email name is mandatory."));
    }

    var result = new ResponseContent
    {
        // use the email to look up the user specific claims
        MyCustomClaim = $"everything awesome {requestConnector.Email}"
    };

    return Ok(result);
}

private bool IsAuthorized(HttpRequest req)
{
    string username = _configuration["BasicAuthUsername"];
    string password = _configuration["BasicAuthPassword"];

    // Check if the HTTP Authorization header exists
    if (!req.Headers.ContainsKey("Authorization"))
    {
        _logger.LogWarning("Missing HTTP basic authentication header.");
        return false;
    }

    // Read the authorization header
    var auth = req.Headers["Authorization"].ToString();

    // Ensure the type of the authorization header is `Basic`
    if (!auth.StartsWith("Basic "))
    {
        _logger.LogWarning("HTTP basic authentication header must start with 'Basic '.");
        return false;
    }

    // Decode the HTTP basic authorization credentials
    var cred = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(auth.Substring(6))).Split(':');

    // Evaluate the credentials and return the result
    return (cred[0] == username && cred[1] == password);
}

The ResponseContent class is used to return the data for the identity. All custom claims must be prefixed with extension_. The data is then added to the profile data.

public class ResponseContent
{
    public const string ApiVersion = "1.0.0";

    public ResponseContent()
    {
        Version = ApiVersion;
        Action = "Continue";
    }

    public ResponseContent(string action, string userMessage)
    {
        Version = ApiVersion;
        Action = action;
        UserMessage = userMessage;
        if (action == "ValidationError")
        {
            Status = "400";
        }
    }

    [JsonPropertyName("version")]
    public string Version { get; }

    [JsonPropertyName("action")]
    public string Action { get; set; }

    [JsonPropertyName("userMessage")]
    public string? UserMessage { get; set; }

    [JsonPropertyName("status")]
    public string? Status { get; set; }

    [JsonPropertyName("extension_MyCustomClaim")]
    public string MyCustomClaim { get; set; } = string.Empty;
}

With this, custom claims can be added to Azure B2C identities. This can be really useful when, for example, implementing verifiable credentials using id_tokens. This is much more complicated to implement compared to other IDPs, but at least it is possible and can be solved. The technical solution to secure the API has room for improvements.

Testing

The applications can be started and the API connector needs to be mapped to a public IP. After starting the apps, start ngrok with a matching configuration for the HTTP address of the API connector API.

ngrok http https://localhost:5002

The URL in the API connector configured on Azure needs to match this ngrok URL. If all is good, the applications will run and the custom claim will be displayed in the UI.

Notes

The profile data in this API is very sensitive and you should use the strongest security protections possible. Using Basic authentication alone for this type of API is not a good idea. It would be great to see managed identities supported or something like this. I used Basic authentication so that I could use ngrok to demo the feature; we need a public endpoint for testing. I would not use this in a production deployment. I would use certificate authentication with an Azure App Service deployment and the certificate created and deployed using Azure Key Vault. Certificate rotation would have to be set up. I am not sure how well API connector infrastructure automation can be implemented; I have not tried this yet. A separate security solution would need to be implemented for local development. This is all a bit messy, as all these extra steps end up in costs or developers taking shortcuts and deploying with less security.

Links:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-user-flow

https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate/

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-customize-properties?pivots=dotnet-6-0

https://github.com/AzureAD/microsoft-identity-web/wiki

https://ngrok.com/

Securing Azure Functions using certificate authentication

Simon Willison

Weeknotes: git-history, created for a Git scraping workshop

My main project this week was a 90 minute workshop I delivered about Git scraping at Coda.Br 2021, a Brazilian data journalism conference, on Friday. This inspired the creation of a brand new tool, git-history, plus smaller improvements to a range of other projects.

git-history

I still need to do a detailed write-up of this one, but on Thursday I released a brand new tool called git-history, which I describe as "tools for analyzing Git history using SQLite".

This tool is the missing link in the Git scraping pattern I described here last October.

Git scraping is the technique of regularly scraping an online source of information and writing the results to a file in a Git repository... which automatically gives you a full revision history of changes made to that data source over time.

The missing piece has always been what to do next: how do you turn a commit history of changes to a JSON or CSV file into a data source that can be used to answer questions about how that file changed over time?

I've written one-off Python scripts for this a few times (here's my CDC vaccinations one, for example), but giving an interactive workshop about the technique finally inspired me to build a tool to help.

The tool has a comprehensive README, but the short version is that you can take a JSON (or CSV) file in a repository that has been tracking changes to some items over time and run the following to load all of the different versions into a SQLite database file for analysis with Datasette:

git-history file incidents.db incidents.json --id IncidentID

This assumes that incidents.json contains a JSON array of incidents (reported fires for example) and that each incident has an IncidentID identifier key. It will then loop through the Git history of that file right from the start, creating an item_versions table that tracks every change made to each of those items - using IncidentID to decide if a row represents a new incident or an update to a previous one.
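Once the database exists you can poke at it directly (or open it in Datasette). This sketch assumes the item_versions table described above exists; the exact schema and column names may differ:

import sqlite_utils

db = sqlite_utils.Database("incidents.db")

# See which tables git-history created in this database
print(db.table_names())

# Count the tracked versions, assuming the item_versions table is present
# (this raises an error if the table does not exist)
print(db["item_versions"].count)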

I have a few more improvements I want to make before I start more widely promoting this, but it's already really useful. I've had a lot of fun running it against example repos from the git-scraping GitHub topic (now at 202 repos and counting).

Workshop: Raspando dados com o GitHub Actions e analisando com Datasette

The workshop I gave at the conference was live-translated into Portuguese, which is really exciting! I'm looking forward to watching the video when it comes out and seeing how well that worked.

The title translates to "Scraping data with GitHub Actions and analyzing with Datasette", and it was the first time I've given a workshop that combines Git scraping and Datasette - hence the development of the new git-history tool to help tie the two together.

I think it went really well. I put together four detailed exercises for the attendees, and then worked through each one live with the goal of attendees working through them at the same time - a method I learned from the Carpentries training course I took last year.

Four exercises turns out to be exactly right for 90 minutes, with reasonable time for an introduction and some extra material and questions at the end.

The worst part of running a workshop is inevitably the part where you try to get everyone set up with a functional development environment on their own machines (see XKCD 1987). This time round I skipped that entirely by encouraging my students to use GitPod, which provides free browser-based cloud development environments running Linux, with a browser-embedded VS Code editor and terminal running on top.

(It's similar to GitHub Codespaces, but Codespaces is not yet available to free customers outside of the beta.)

I demonstrated all of the exercises using GitPod myself during the workshop, and ensured that they could be entirely completed through that environment, with no laptop software needed at all.

This worked so well. Not having to worry about development environments makes workshops massively more productive. I will absolutely be doing this again in the future.

The workshop exercises are available in this Google Doc, and I hope to extract some of them out into official tutorials for various tools later on.

Datasette 0.58.2

Yesterday was Datasette's fourth birthday - the four year anniversary of the initial release announcement! I celebrated by releasing a minor bug-fix, Datasette 0.58.2, the release notes for which are quoted below:

- Column names with a leading underscore now work correctly when used as a facet. (#1506)
- Applying ?_nocol= to a column no longer removes that column from the filtering interface. (#1503)
- Official Datasette Docker container now uses Debian Bullseye as the base image. (#1497)

That first change was inspired by ongoing work on git-history, where I decided to use an _id underscore prefix pattern for columns that were reserved for use by that tool in order to avoid clashing with column names in the provided source data.

sqlite-utils 3.18

Today I released sqlite-utils 3.18 - initially also to provide a feature I wanted for git-history (a way to populate additional columns when creating a row using table.lookup()) but I also closed some bug reports and landed some small pull requests that had come in since 3.17.

s3-credentials 0.5

Earlier in the week I released version 0.5 of s3-credentials - my CLI tool for creating read-only, read-write or write-only AWS credentials for a specific S3 bucket.

The biggest new feature is the ability to create temporary credentials, that expire after a given time limit.

This is achieved using STS.assume_role(), where STS is Security Token Service. I've been wanting to learn this API for quite a while now.

Assume role comes with some limitations: tokens must live between 15 minutes and 12 hours, and you need to first create a role that you can assume. In creating those credentials you can define an additional policy document, which is how I scope down the token I'm creating to only allow a specific level of access to a specific S3 bucket.
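This isn't the s3-credentials implementation, but a minimal boto3 sketch of the same idea - the role ARN and bucket name are placeholders, and the inline policy is what scopes the temporary token down to read-only access on a single bucket:

import json
import boto3

# Placeholders: a role you are allowed to assume, and the target bucket
ROLE_ARN = "arn:aws:iam::123456789012:role/my-s3-role"
BUCKET = "my-example-bucket"

# Inline policy that narrows the temporary credentials to one bucket, read-only
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="s3-read-only-demo",
    Policy=json.dumps(policy),
    DurationSeconds=3600,  # anywhere between 15 minutes and 12 hours
)
print(response["Credentials"]["AccessKeyId"], response["Credentials"]["Expiration"])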

I've learned a huge amount about AWS, IAM and S3 through developing this project. I think I'm finally overcoming my multi-year phobia of anything involving IAM!

Releases this week

- sqlite-utils: 3.18 - (88 releases total) - 2021-11-15. Python CLI utility and library for manipulating SQLite databases
- datasette: 0.59.2 - (100 releases total) - 2021-11-14. An open source multi-tool for exploring and publishing data
- datasette-hello-world: 0.1.1 - (2 releases total) - 2021-11-14. The hello world of Datasette plugins
- git-history: 0.3.1 - (5 releases total) - 2021-11-12. Tools for analyzing Git history using SQLite
- s3-credentials: 0.5 - (5 releases total) - 2021-11-11. A tool for creating credentials for accessing S3 buckets

TIL this week

- Basic Datasette in Kubernetes
- Annotated code for a demo of WebSocket chat in Deno Deploy
- Using Tesseract.js to OCR every image on a page

Saturday, 13. November 2021

Simon Willison

Datasette is four years old today

I marked the occasion with a short Twitter thread about the project so far.

Wednesday, 10. November 2021

Simon Willison

Quoting Stephen Diehl

One could never price a thirty year mortgage in bitcoin because its volatility makes it completely unpredictable and no sensible bank could calculate the risk of covering that debt. A world in which Elon Musk can tweet two emojis and your home depreciates 80% in value is a dystopia.

Stephen Diehl

Tuesday, 09. November 2021

Phil Windley's Technometria

Identity and Consistent User Experience

Summary: Consistent user experience is key enabler of digital embodiment and is critical to our ability to operationalize our digital lives.

The other day Lynne called me up and asked a seemingly innocuous question: "How do I share this video?" Ten years ago, the answer would have been easy: copy the URL and send it to them. Now...not so much. In order to answer that question I first had to determine which app she was using. And, since I wasn't that familiar with it, open it and search through the user interface to find the share button.

One of the features of web browsers that we don't appreciate as much as we should is the consistent user experience that the browser provides. Tabs, address bars, the back button, reloading and other features are largely the same regardless of which browser you use. There's a reason why "don't break the back button!" was a common tip for web designers over the years. People depend on the web's consistent user experience.

Alas, apps have changed all that. Apps freed developers from the strictures of the web. No doubt there's been some excellent uses of this freedom, but what we've lost is consistency in core user experiences. That's unfortunate.

The web, and the internet for that matter, never had a consistent user experience for authentication. At least not one that caught on. Consequently, the user experience is very fragmented. Even so, Kim Cameron's Seven Laws of Identity speaks for consistent user experience in Law 7: Consistent Experience Across Contexts. Kim says:

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

Think about logging into various websites and apps throughout your day. You probably do it way too often. But it's also made much more complex because it's slightly different everywhere. Different locations and modalities, different rules for passwords, different methods for 2FA, and so on. It's maddening.

There's a saying in security: "Don't roll your own crypto." I think we need a corollary in identity: "Don't roll your own interface." But how do we do that? And what should the interface be? One answer is to adopt the user experience people already understand from the physical world: connections and credentials.

Kim Cameron gave us a model back in 2005 when he introduced Information Cards. Information cards are digital analogs of the credentials we all carry around in the physical world. People understand credentials. Information cards worked on a protocol-mediated identity metasystem so that anyone could use them and write software for them.

Information cards didn't make it, but the ideas underlying information cards live on in modern self-sovereign identity (SSI) systems. The user experience in SSI springs from the protocol embodied in the identity metasystem. In an SSI system, people use wallets that manage connections and credentials. They can create relationships with other people, organizations, and things. And they receive credentials from other participants and present those credentials to transfer information about themselves in a trustworthy manner. They don't see keys, passwords, authentication codes, and other artifacts of the ad hoc identity systems in widespread use today. Rather they use familiar artifacts to interact with others in ways that feel familiar because they are similar to how identity works in the physical world.

This idea feels simple and obvious, but I think that conceals its incredible power. Having a wallet I control where I manage digital relationships and credentials gives me a place to stand in the digital world and operationalize my digital life. I think of it as digital embodiment. An SSI wallet gives me an interoperable way to connect and interact with others online as me. I can create both rich, long-lived relationships and service short-lived, ephemeral relationships with whatever degree of trustworthy data is appropriate for the relationship and its context.

Relationships and Interactions in SSI (click to enlarge)

We have plenty of online relationships today, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. Consistent user experience is a key enabler of digital embodiment. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. The SSI wallet is the platform upon which people can stand and become embodied online to operationalize their digital life as full-fledged participants in the digital realm.

Photo Credit: Red leather wallet on white paper from Pikrepo (CC0)

Tags: identity ux wallets ssi

Monday, 08. November 2021

Damien Bod

ASP.NET Core scheduling with Quartz.NET and SignalR monitoring

This article shows how scheduled tasks can be implemented in ASP.NET Core using Quartz.NET and then displays the job info in an ASP.NET Core Razor page using SignalR. A concurrent job and a non concurrent job are implemented using a simple trigger to show the difference in how the jobs are run. Quartz.NET provides lots of scheduling features and has an easy to use API for implementing scheduled jobs.

Code: https://github.com/damienbod/AspNetCoreQuartz

A simple ASP.NET Core Razor Page web application is used to implement the scheduler and the SignalR messaging. The Quartz Nuget package and the Quartz.Extensions.Hosting Nuget package are used to implement the scheduling service. The Microsoft.AspNetCore.SignalR.Client package is used to send messages to all listening web socket clients.

<PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>

<ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.SignalR.Client" Version="6.0.0" />
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="6.0.0" />
    <PackageReference Include="Quartz" Version="3.3.3" />
    <PackageReference Include="Quartz.Extensions.Hosting" Version="3.3.3" />
</ItemGroup>

The .NET 6 templates no longer use a Startup class; all of this logic can now be implemented directly in the Program.cs file, with no static Main method. The ConfigureServices logic can be implemented using a WebApplicationBuilder instance. The AddQuartz method is used to add the scheduling services. Two jobs are added, a concurrent job and a non-concurrent job. Both jobs are triggered every five seconds with a simple trigger that repeats forever. The AddQuartzHostedService method adds the scheduler as a hosted service and AddSignalR adds the SignalR services.

using AspNetCoreQuartz;
using AspNetCoreQuartz.QuartzServices;
using Quartz;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddSignalR();

builder.Services.AddQuartz(q =>
{
    q.UseMicrosoftDependencyInjectionJobFactory();

    var conconcurrentJobKey = new JobKey("ConconcurrentJob");
    q.AddJob<ConconcurrentJob>(opts => opts.WithIdentity(conconcurrentJobKey));
    q.AddTrigger(opts => opts
        .ForJob(conconcurrentJobKey)
        .WithIdentity("ConconcurrentJob-trigger")
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(5)
            .RepeatForever()));

    var nonConconcurrentJobKey = new JobKey("NonConconcurrentJob");
    q.AddJob<NonConconcurrentJob>(opts => opts.WithIdentity(nonConconcurrentJobKey));
    q.AddTrigger(opts => opts
        .ForJob(nonConconcurrentJobKey)
        .WithIdentity("NonConconcurrentJob-trigger")
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(5)
            .RepeatForever()));
});

builder.Services.AddQuartzHostedService(
    q => q.WaitForJobsToComplete = true);

The WebApplication instance is used to add the middleware, much like the Startup Configure method did. The SignalR JobsHub endpoint is added to send the live messages from the running jobs to the UI in the client browser.

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<JobsHub>("/jobshub");
});

app.MapRazorPages();

app.Run();

The ConconcurrentJob class implements the IJob interface and logs messages before and after a time delay. A SignalR client is used to send all the job information to any listening clients. A seven-second sleep was added to simulate a slow-running job. The jobs are triggered every five seconds and, because instances are allowed to run in parallel, a new run still starts every five seconds.

using Microsoft.AspNetCore.SignalR;
using Quartz;

namespace AspNetCoreQuartz.QuartzServices
{
    public class ConconcurrentJob : IJob
    {
        private readonly ILogger<ConconcurrentJob> _logger;
        private static int _counter = 0;
        private readonly IHubContext<JobsHub> _hubContext;

        public ConconcurrentJob(ILogger<ConconcurrentJob> logger,
            IHubContext<JobsHub> hubContext)
        {
            _logger = logger;
            _hubContext = hubContext;
        }

        public async Task Execute(IJobExecutionContext context)
        {
            var count = _counter++;
            var beginMessage = $"Conconcurrent Job BEGIN {count} {DateTime.UtcNow}";
            await _hubContext.Clients.All.SendAsync("ConcurrentJobs", beginMessage);
            _logger.LogInformation(beginMessage);

            Thread.Sleep(7000);

            var endMessage = $"Conconcurrent Job END {count} {DateTime.UtcNow}";
            await _hubContext.Clients.All.SendAsync("ConcurrentJobs", endMessage);
            _logger.LogInformation(endMessage);
        }
    }
}

The NonConconcurrentJob class is almost identical to the previous job, except that the DisallowConcurrentExecution attribute is used to prevent concurrent runs of the job. This means that even though the trigger fires every five seconds, each run must wait until the previous one finishes.

[DisallowConcurrentExecution]
public class NonConconcurrentJob : IJob
{
    private readonly ILogger<NonConconcurrentJob> _logger;
    private static int _counter = 0;
    private readonly IHubContext<JobsHub> _hubContext;

    public NonConconcurrentJob(ILogger<NonConconcurrentJob> logger,
        IHubContext<JobsHub> hubContext)
    {
        _logger = logger;
        _hubContext = hubContext;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        var count = _counter++;
        var beginMessage = $"NonConconcurrentJob Job BEGIN {count} {DateTime.UtcNow}";
        await _hubContext.Clients.All.SendAsync("NonConcurrentJobs", beginMessage);
        _logger.LogInformation(beginMessage);

        Thread.Sleep(7000);

        var endMessage = $"NonConconcurrentJob Job END {count} {DateTime.UtcNow}";
        await _hubContext.Clients.All.SendAsync("NonConcurrentJobs", endMessage);
        _logger.LogInformation(endMessage);
    }
}

The JobsHub class implements the SignalR Hub and defines methods for sending SignalR messages. Two messages are used, one for the concurrent job messages and one for the non-concurrent job messages.

public class JobsHub : Hub
{
    public Task SendConcurrentJobsMessage(string message)
    {
        return Clients.All.SendAsync("ConcurrentJobs", message);
    }

    public Task SendNonConcurrentJobsMessage(string message)
    {
        return Clients.All.SendAsync("NonConcurrentJobs", message);
    }
}

The microsoft-signalr JavaScript package is used to implement the client that listens for messages.

{ "version": "1.0", "defaultProvider": "cdnjs", "libraries": [ { "library": "microsoft-signalr@5.0.11", "destination": "wwwroot/lib/microsoft-signalr/" } ] }

The Index Razor Page view uses the SignalR JavaScript file and displays messages by adding HTML elements.

@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<div class="container">
    <div class="row">
        <div class="col-6">
            <ul id="concurrentJobs"></ul>
        </div>
        <div class="col-6">
            <ul id="nonConcurrentJobs"></ul>
        </div>
    </div>
</div>

<script src="~/lib/microsoft-signalr/signalr.js"></script>

The SignalR client adds the two methods to listen to messages sent from the Quartz jobs.

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/jobshub")
    .configureLogging(signalR.LogLevel.Information)
    .build();

async function start() {
    try {
        await connection.start();
        console.log("SignalR Connected.");
    } catch (err) {
        console.log(err);
        setTimeout(start, 5000);
    }
};

connection.onclose(async () => {
    await start();
});

start();

connection.on("ConcurrentJobs", function (message) {
    var li = document.createElement("li");
    document.getElementById("concurrentJobs").appendChild(li);
    li.textContent = `${message}`;
});

connection.on("NonConcurrentJobs", function (message) {
    var li = document.createElement("li");
    document.getElementById("nonConcurrentJobs").appendChild(li);
    li.textContent = `${message}`;
});

When the application is run and the hosted Quartz service runs the scheduled jobs, the concurrent job starts every five seconds as required, while the non-concurrent job only runs every seven seconds due to the thread sleep. Being able to switch between concurrent and non-concurrent execution with a single attribute definition is a really powerful feature of Quartz.NET.

Quartz.NET provides great documentation and has a really simple API. By using SignalR, it would be really easy to implement a good monitoring UI.

Links:

https://www.quartz-scheduler.net/

https://andrewlock.net/using-quartz-net-with-asp-net-core-and-worker-services/

https://docs.microsoft.com/en-us/aspnet/core/signalr/introduction

Sunday, 07. November 2021

Doc Searls Weblog

On using Wikipedia in schools

In Students are told not to use Wikipedia for research. But it’s a trustworthy source, Rachel Cunneen and Mathieu O’Niel nicely unpack their case for the headline. In a online polylogue in response to that piece, I wrote, “You always have a choice: to help or to hurt.” That’s what my mom told me, a zillion years […]

In Students are told not to use Wikipedia for research. But it’s a trustworthy source, Rachel Cunneen and Mathieu O’Niel nicely unpack their case for the headline. In an online polylogue in response to that piece, I wrote,

“You always have a choice: to help or to hurt.” That’s what my mom told me, a zillion years ago. It applies to everything we do, pretty much.

The purpose of Wikipedia is to help. Almost entirely, it does. It is a work of positive construction without equal or substitute. That some use it to hurt, or to spread false information, does not diminish Wikipedia’s worth as a resource.

The trick for researchers using Wikipedia as a resource is not a difficult one: don’t cite it. Dig down in references, make sure those are good, and move on from there. It’s not complicated.

Since that topic and comment are due to slide down into the Web’s great forgettery (where Google searches do not go), I thought I’d share it here.


Simon Willison

Deno Deploy Beta 3

Deno Deploy Beta 3 I missed Deno Deploy when it first came out back in June: it's a really interesting new hosting environment for scripts written in Deno, Node.js creator Ryan Dahl's re-imagining of Node.js. Deno Deploy runs your code using v8 isolates running in 28 regions worldwide, with a clever BroadcastChannel mechanism (inspired by the browser API of the same name) that allows instances o

Deno Deploy Beta 3

I missed Deno Deploy when it first came out back in June: it's a really interesting new hosting environment for scripts written in Deno, Node.js creator Ryan Dahl's re-imagining of Node.js. Deno Deploy runs your code using v8 isolates running in 28 regions worldwide, with a clever BroadcastChannel mechanism (inspired by the browser API of the same name) that allows instances of the server-side code running in different regions to send each other messages. See the "via" link for my annotated version of a demo by Ondřej Žára that got me excited about what it can do.

Via My TILs

Saturday, 06. November 2021

Simon Willison

AWS IAM definitions in Datasette

AWS IAM definitions in Datasette As part of my ongoing quest to conquer IAM permissions, I built myself a Datasette instance that lets me run queries against all 10,441 permissions across 280 AWS services. It's deployed by a build script running in GitHub Actions which downloads a 8.9MB JSON file from the Salesforce policy_sentry repository - policy_sentry itself creates that JSON file by runnin

AWS IAM definitions in Datasette

As part of my ongoing quest to conquer IAM permissions, I built myself a Datasette instance that lets me run queries against all 10,441 permissions across 280 AWS services. It's deployed by a build script running in GitHub Actions which downloads an 8.9MB JSON file from the Salesforce policy_sentry repository - policy_sentry itself creates that JSON file by running an HTML scraper against the official AWS documentation!
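Purely as an illustration of the kind of query this enables, here is a minimal Python sketch against Datasette's JSON API (the ?sql= and _shape=array parameters are standard Datasette features); the base URL, database name and table/column names below are hypothetical placeholders, not the real instance.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical Datasette instance and schema - substitute the real base URL,
# database name and column names from the deployed instance.
BASE_URL = "https://example-aws-iam-datasette.example.com/iam"

sql = """
select service, count(*) as num_permissions
from permissions
group by service
order by num_permissions desc
limit 10
"""

# Datasette database pages accept ?sql=... and return plain JSON rows
url = BASE_URL + ".json?" + urlencode({"sql": sql, "_shape": "array"})
for row in json.load(urlopen(url)):
    print(row["service"], row["num_permissions"])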

Via @simonw

Friday, 05. November 2021

Simon Willison

A half-hour to learn Rust

A half-hour to learn Rust I haven't tried to write any Rust yet but I occasionally find myself wanting to read it, and I find some of the syntax really difficult to get my head around. This article helped a lot: it provides a quick but thorough introduction to most of Rust's syntax, with clearly explained snippet examples for each one.

A half-hour to learn Rust

I haven't tried to write any Rust yet but I occasionally find myself wanting to read it, and I find some of the syntax really difficult to get my head around. This article helped a lot: it provides a quick but thorough introduction to most of Rust's syntax, with clearly explained snippet examples for each one.


An oral history of Bank Python

An oral history of Bank Python Fascinating description of a very custom Python environment inside a large investment bank - where all of the code lives inside the Python environment itself, everything can be imported into the same process and a directed acyclic graph engine implements Excel-style reactive dependencies. Plenty of extra flavour from people who've worked with this (and related) Pyt

An oral history of Bank Python

Fascinating description of a very custom Python environment inside a large investment bank - where all of the code lives inside the Python environment itself, everything can be imported into the same process and a directed acyclic graph engine implements Excel-style reactive dependencies. Plenty of extra flavour from people who've worked with this (and related) Python systems in the Hacker News comments.

Via Hacker News


Weeknotes: datasette-jupyterlite, s3-credentials and a Python packaging talk

My big project this week was s3-credentials, described yesterday - but I also put together a fun expermiental Datasette plugin bundling JupyterLite and wrote up my PyGotham talk on Python packaging. datasette-jupyterlite JupyterLite is absolutely incredible: it's a full, working distribution of Jupyter that runs entirely in a browser, thanks to a Python interpreter (and various other parts of

My big project this week was s3-credentials, described yesterday - but I also put together a fun experimental Datasette plugin bundling JupyterLite and wrote up my PyGotham talk on Python packaging.

datasette-jupyterlite

JupyterLite is absolutely incredible: it's a full, working distribution of Jupyter that runs entirely in a browser, thanks to a Python interpreter (and various other parts of the scientific Python stack) that has been compiled to WebAssembly by the Pyodide project.

Since it's just static JavaScript (and WASM modules) it's possible to host it anywhere that can run a web server.

Datasette runs a web server...

So, I built datasette-jupyterlite - a Datasette plugin that bundles JupyterLite and serves it up as part of the Datasette instance.

You can try a live demo here:

Here's some Python code that will retrieve data from the associated Datasette instance and pull it into a Pandas DataFrame:

import pandas, pyodide

pandas.read_csv(pyodide.open_url(
    "https://latest-with-plugins.datasette.io/github/stars.csv")
)

(I haven't yet found a way to do this with a relative rather than absolute URL.)

The best part of this is that it works in Datasette Desktop! You can install the plugin using the "Install and manage plugins" menu item to get a version of Jupyter running in Python running in WebAssembly running in V8 running in Chromium running in Electron.

The plugin implementation is just 30 lines of code - it uses the jupyterlite Python package which bundles a .tgz file containing all of the required static assets, then serves files directly out of that tarfile.
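The general trick of serving files straight out of a .tgz archive is easy to sketch with Python's tarfile module. This is only a rough illustration of that idea, not the actual plugin code; the archive path and file name used here are guesses.

import tarfile

# Hypothetical path to a bundled archive of static assets
ARCHIVE_PATH = "jupyterlite-app.tgz"

_tar = tarfile.open(ARCHIVE_PATH, "r:gz")
# Index the regular files in the archive by their normalised names
_members = {
    m.name.removeprefix("./"): m for m in _tar.getmembers() if m.isfile()
}

def get_file(path):
    """Return the bytes of one file stored inside the archive."""
    extracted = _tar.extractfile(_members[path])
    return extracted.read()

# Example: read the first 100 bytes of the bundled index.html (name is a guess)
print(get_file("index.html")[:100])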

Also this week

My other projects from this week are already written about on the blog:

s3-credentials: a tool for creating credentials for S3 buckets introduces a new CLI tool I built that automates the process of creating new, long-lived credentials that provide read-only, read-write or write-only access to just a single specified S3 bucket.

How to build, test and publish an open source Python library is a detailed write-up of the 10 minute workshop I presented at PyGotham this year showing how to create a Python library, bundle it up as a package using setup.py, publish it to PyPI and then set up GitHub Actions to test and publish future releases.

Releases this week

s3-credentials: 0.4 - (4 releases total) - 2021-11-04
A tool for creating credentials for accessing S3 buckets

datasette-notebook: 0.2a0 - (4 releases total) - 2021-11-02
A markdown wiki and dashboarding system for Datasette

datasette-jupyterlite: 0.1a0 - 2021-11-01
JupyterLite as a Datasette plugin

TIL this week

Using VCR and pytest with pytest-recording
Quick and dirty mock testing with mock_calls

Thursday, 04. November 2021

Simon Willison

How to build, test and publish an open source Python library

At PyGotham this year I presented a ten minute workshop on how to package up a new open source Python library and publish it to the Python Package Index. Here is the video and accompanying notes, which should make sense even without watching the talk. The video PyGotham arrange for sign language interpretation for all of their talks, which is really cool. Since those take up a portion of the

At PyGotham this year I presented a ten minute workshop on how to package up a new open source Python library and publish it to the Python Package Index. Here is the video and accompanying notes, which should make sense even without watching the talk.

The video

PyGotham arrange for sign language interpretation for all of their talks, which is really cool. Since those take up a portion of the screen (and YouTube don't yet have a way to apply them as a different layer) I've also made available a copy of the original video without the sign language.

Packaging a single module

I used this code which I've been copying and pasting between my own projects for over a decade.

BaseConverter is a simple class that can convert an integer to a shortened character string and back again, for example:

>>> pid = BaseConverter("bcdfghkmpqrtwxyz")
>>> pid.from_int(1234)
"gxd"
>>> pid.to_int("gxd")
1234
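The talk links to the code rather than reproducing it, but a minimal sketch of a BaseConverter along these lines might look like this (my own illustration, not the exact code from the talk):

class BaseConverter:
    """Convert integers to short strings using a custom alphabet, and back."""

    def __init__(self, alphabet):
        self.alphabet = alphabet
        self.base = len(alphabet)

    def from_int(self, number):
        if number == 0:
            return self.alphabet[0]
        chars = []
        while number:
            number, remainder = divmod(number, self.base)
            chars.append(self.alphabet[remainder])
        return "".join(reversed(chars))

    def to_int(self, string):
        number = 0
        for char in string:
            number = number * self.base + self.alphabet.index(char)
        return number

# Matches the example above:
pid = BaseConverter("bcdfghkmpqrtwxyz")
assert pid.from_int(1234) == "gxd"
assert pid.to_int("gxd") == 1234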

To turn this into a library, first I created a pids/ directory (I've chosen that name because it's available on PyPI).

To turn that code into a package:

mkdir pids && cd pids

Create pids.py in that directory with the contents of this file

Create a new setup.py file in that folder containing the following:

from setuptools import setup

setup(
    name="pids",
    version="0.1",
    description="A tiny Python library for generating public IDs from integers",
    author="Simon Willison",
    url="https://github.com/simonw/...",
    license="Apache License, Version 2.0",
    py_modules=["pids"],
)

Run python3 setup.py sdist to create the packaged source distribution, dist/pids-0.1.tar.gz

(I've since learned that it's better to run python3 -m build here instead, see Why you shouldn't invoke setup.py directly).

This is all it takes: a setup.py file with some metadata, then a single command to turn that into a packaged .tar.gz file.

Testing it in a Jupyter notebook

Having created that file, I demonstrated how it can be tested in a Jupyter notebook.

Jupyter has a %pip magic command which runs pip to install a package into the same environment as the current Jupyter kernel:

%pip install /Users/simon/Dropbox/Presentations/2021/pygotham/pids/dist/pids-0.1.tar.gz

Having done this, I could exercise the library like so:

>>> import pids
>>> pids.pid.from_int(1234)
'gxd'

Uploading the package to PyPI

I used twine (pip install twine) to upload my new package to PyPI.

You need to create a PyPI account before running this command.

By default you need to paste in the PyPI account's username and password:

% twine upload dist/pids-0.1.tar.gz
Uploading distributions to https://upload.pypi.org/legacy/
Enter your username: simonw
Enter your password:
Uploading pids-0.1.tar.gz
100%|██████████████████████████████████████| 4.16k/4.16k [00:00<00:00, 4.56kB/s]
View at:
https://pypi.org/project/pids/0.1/

The release is now live at https://pypi.org/project/pids/0.1/ - and anyone can run pip install pids to install it.

Adding documentation

If you visit the 0.1 release page you'll see the following message:

The author of this package has not provided a project description

To fix this I added a README.md file with some basic documentation, written in Markdown:

# pids

Create short public identifiers based on integer IDs.

## Installation

pip install pids

## Usage

from pids import pid

public_id = pid.from_int(1234)
# public_id is now "gxd"

id = pid.to_int("gxd")
# id is now 1234

Then I modified my setup.py file to look like this:

from setuptools import setup
import os


def get_long_description():
    with open(
        os.path.join(os.path.dirname(__file__), "README.md"),
        encoding="utf8",
    ) as fp:
        return fp.read()


setup(
    name="pids",
    version="0.1.1",
    long_description=get_long_description(),
    long_description_content_type="text/markdown",
    description="A tiny Python library for generating public IDs from integers",
    author="Simon Willison",
    url="https://github.com/simonw/...",
    license="Apache License, Version 2.0",
    py_modules=["pids"],
)

The get_long_description() function reads that README.md file into a Python string.

The following two extra arguments to setup() add that as metadata visible to PyPI:

long_description=get_long_description(),
long_description_content_type="text/markdown",

I also updated the version number to 0.1.1 in preparation for a new release.

Running python3 setup.py sdist created a new file called dist/pids-0.1.1.tar.gz - I then uploaded that file using twine upload dist/pids-0.1.1.tar.gz which created a new release with a visible README at https://pypi.org/project/pids/0.1.1/

Adding some tests

I like using pytest for tests, so I added that as a test dependency by modifying setup.py to add the following line:

extras_require={"test": ["pytest"]},

Next, I created a virtual environment and installed my package and its test dependencies into an "editable" mode like so:

# Create and activate environment
python3 -m venv venv
source venv/bin/activate

# Install editable module, plus pytest
pip install -e '.[test]'

Now I can run the tests!

(venv) pids % pytest
============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/simon/Dropbox/Presentations/2021/pygotham/pids
collected 0 items

============================ no tests ran in 0.01s =============================

There aren't any tests yet. I created a tests/ folder and dropped in a test_pids.py file that looked like this:

import pytest
import pids


def test_from_int():
    assert pids.pid.from_int(1234) == "gxd"


def test_to_int():
    assert pids.pid.to_int("gxd") == 1234

Running pytest in the project directory now runs those tests:

(venv) pids % pytest
============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/simon/Dropbox/Presentations/2021/pygotham/pids
collected 2 items

tests/test_pids.py ..                                                     [100%]

============================== 2 passed in 0.01s ===============================

Creating a GitHub repository

Time to publish the source code on GitHub.

I created a repository using the form at https://github.com/new

Having created the simonw/pids repository, I ran the following commands locally to push my code to it (mostly copy and pasted from the GitHub example):

git init
git add README.md pids.py setup.py tests/test_pids.py
git commit -m "first commit"
git branch -M main
git remote add origin git@github.com:simonw/pids.git
git push -u origin main

Running the tests with GitHub Actions

I copied in a .github/workflows folder from another project with two files, test.yml and publish.yml. The .github/workflows/test.yml file contained this:

name: Test

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8, 3.9]
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    - uses: actions/cache@v2
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: Install dependencies
      run: |
        pip install -e '.[test]'
    - name: Run tests
      run: |
        pytest

The matrix block there causes the job to run four times, on four different versions of Python.

The action steps do the following:

Checkout the current repository
Install the specified Python version
Configure GitHub's action caching mechanism for the ~/.cache/pip directory - this avoids installing the same files from PyPI over the internet the next time the workflow runs
Install the test dependencies
Run the tests

I added and pushed these new files:

git add .github git commit -m "GitHub Actions" git push

The Actions tab in my repository instantly ran the test suite, and when it passed added a green checkmark to my commit.

Publishing a new release using GitHub

The .github/workflows/publish.yml workflow is triggered by new GitHub releases. It tests them and then, if the tests pass, publishes them up to PyPI using twine.

The workflow looks like this:

name: Publish Python Package

on:
  release:
    types: [created]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8, 3.9]
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    - uses: actions/cache@v2
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: Install dependencies
      run: |
        pip install -e '.[test]'
    - name: Run tests
      run: |
        pytest
  deploy:
    runs-on: ubuntu-latest
    needs: [test]
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.9'
    - uses: actions/cache@v2
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-publish-pip-${{ hashFiles('**/setup.py') }}
        restore-keys: |
          ${{ runner.os }}-publish-pip-
    - name: Install dependencies
      run: |
        pip install setuptools wheel twine
    - name: Publish
      env:
        TWINE_USERNAME: __token__
        TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
      run: |
        python setup.py sdist bdist_wheel
        twine upload dist/*

It contains two jobs: the test job runs the tests again - we should never publish a package without first ensuring that the test suite passes - and then the deploy job runs python setup.py sdist bdist_wheel followed by twine upload dist/* to upload the resulting packages.

(My latest version of this uses python3 -m build here instead.)

Before publishing a package with this action, I needed to create a PYPI_TOKEN that the action could use to authenticate with my PyPI account.

I used the https://pypi.org/manage/account/token/ page to create that token:

I copied out the newly created token:

Then I used the "Settings -> Secrets" tab on the GitHub repository to add that as a secret called PYPI_TOKEN:

(I have since revoked the token that I used in the video, since it is visible on screen to anyone watching.)

I used the GitHub web interface to edit setup.py to bump the version number in that file up to 0.1.2, then I navigated to the releases tab in the repository, clicked "Draft new release" and created a release that would create a new 0.1.2 tag as part of the release process.

When I published the release, the publish.yml action started to run. After the tests had passed it pushed the new release to PyPI: https://pypi.org/project/pids/0.1.2/

Bonus: cookiecutter templates

I've published a lot of packages using this process - 143 and counting!

Rather than copy and paste in a setup.py each time, a couple of years ago I switched over to using cookiecutter templates.

I have three templates that I use today:

python-lib for standalone Python libraries
datasette-plugin for Datasette plugins
click-app for CLI applications built using the Click package

Back in August I figured out a way to make these available as GitHub repository templates, which I described in Dynamic content for GitHub repository templates using cookiecutter and GitHub Actions. This means you can create a new GitHub repository that implements the test.yml and publish.yml pattern described in this talk with just a few clicks on the GitHub website.
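As a side note, cookiecutter templates like these can also be driven from Python rather than the CLI. Here is a minimal sketch; the template reference is the python-lib repository mentioned above, and the prompts it asks for are defined by the template itself.

from cookiecutter.main import cookiecutter

# Prompts interactively for the template's variables, then generates a new
# project directory in the current working directory.
cookiecutter("gh:simonw/python-lib")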


Tim Bouma's Blog

The Rise of MetaNations

Photo by Vladislav Klapin on Unsplash We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital). The first instance of which is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungibl
Photo by Vladislav Klapin on Unsplash

We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital), the first instance of which is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungible tokens. We’re not there yet, but many are seeing the trajectory where metanations like Facebook will have metacitizens, who will have metarights to interact and transact in this new space. This is not science fiction; it is becoming a reality, and the fronts are opening up on identity, currency, rights and property that exist within these digital realms but also touch upon the real world.

So what’s the imperative for us as real people and governments? To make sure that these realms are as open and inclusive as possible. Personally, I don’t want a future where certain metacitizens can exert their metarights in an unfair way within the real world; the chosen few getting to the front of the line for everything.

But we can’t just regulate and outlaw — we need to counter in an open fashion. We need open identity, open currency, open payments, and open rights.

Where I am seeing the battle shape up most clearly is in the open payments space, specifically the Lightning Network. I am sure that as part of Facebook’s play, they will introduce their own currency, Diem, that can only be used within their own metaverse according to their own rules. Honestly, I don’t believe we can counter this as governments and regulators; we need to support open approaches such as the Lightning Network. A great backgrounder article by Nik Bhatia, author of Layered Money, is here.

Wednesday, 03. November 2021

Identity Praxis, Inc.

FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management

I just read a wonderful piece from Joseph Duball,1 who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.” “Too many services are about leveraging consumer […] The post FTC’s Shot Across t

I just read a wonderful piece from Joseph Duball,1 who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.”

“Too many services are about leveraging consumer data instead of straightforwardly providing value. For even the savviest users, the price of browsing the internet is being tracked across the web.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission 20212

According to Slaughter, privacy issues within data-driven markets stem from surveillance capitalism, which is fueled by indiscriminate data collection practices. She suggests that the remedy to curtail these practices is to focus on “purpose and use” restrictions and limitations rather than solely relying on the notice & choice framework. In other words, in the future industry may no longer justify data practices with explanations like “they opted in” and “we got consent” or by falling back on the notice & choice framework.

“Collection and use limitations can help protect people’s rights. It should not be necessary to trade one’s data away as a cost of full participation in society and the modern information economy.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission 20213

FTC Has Concerns Other Than Privacy

So that there is no uncertainty or doubt, however, Duball4 reports that, while consumer privacy is a chief concern for the commission, it is not the primary concern to the exclusion of other concerns. The commission is also worried about algorithmic bias and “dark patterns” practices. In other words, it is not just about the data, it is about the methods used to “trick” people into giving it up and how it is processed and applied to business decision-making.

Takeaway

My takeaway is that it is time for organizations to take a serious look at revamping their end-to-end processes and tech-stacks. This is a c-suite leadership all-hands-on-deck moment. It will take years for the larger organizations to turn their flotilla in the right direction, and for the industry at large to sort everything out. However, rest assured, the empowered person–the self-sovereign individual–is nigh and will sort it out for industry soon enough.

There is time, but not much, maybe three, five, or seven years, before the people will be equipped with the knowledge and tools to take back control of their data. It is already happening; just look at open banking in the UK. These new tools, aka personal information management systems, will enable people to granularly exchange data on their terms with their stated purpose of use, not the businesses. They will give them the power to process data in a way that protects them from bias or at least helps them know when it is happening.

Why should businesses care about all this? Well, I predict (as do many, so I’m not too far out on a limb here) that in the not-too-distant future, the empowered, connected individual will walk with their wallet and only do business with those institutions that respect their sovereignty, both physically and digitally. So, to all out there, it is time, and it is time to prepare for the future.

REFERENCES

Duball, Joseph. “On the Horizon: FTC’s Slaughter Maps Data Regulation’s Potential Future.” The Privacy Advisor, November 2021. https://iapp.org/news/a/on-the-horizon-ftcs-slaughter-maps-data-regulations-potential-future/.
Ibid.
Ibid.
Ibid.

The post FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management appeared first on Identity Praxis, Inc..


Vishal Gupta

THREE core fundamental policy gaps that are clogging the courts of India

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have a free reign as justice system is used as a tool to make victims succumb into unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone caput and remains clogged with over 3+ crore cases pending in courts due to following co

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have free rein, as the justice system is used as a tool to make victims succumb to unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone kaput and remains clogged, with over 3 crore cases pending in courts, due to the following core reasons.

Policy gap 1 — Only in India there is zero deterrence to perjury

a. In India, perjury law is taken very lightly. 99% of cases are full of lies, deceit and mockery of justice. Affidavits in India do not serve much purpose. There have been various judgments and cries from all levels of judiciary but in vain.

b. Perjury makes any case 10x more complex and consumes 20x more time to adjudicate. It is akin to allowing a bullock cart in the middle of an express highway. It is nothing but an attack on the justice delivery system. Historically, perjury used to be punished with a death sentence and was considered as serious a crime as murder.

c. This is against the best international practices and does not make economic sense for India.

Policy gap 2 — India has no equivalent laws such as the U.S. Federal Rule 11 enacted in 1983 (as amendment in 1993) to curb the explosion of frivolous litigation.

The entire provision of rule 11 of U.S. federal rules can be summarized in the following manner:

It requires that a district court mandatorily sanction attorneys or parties who submit improper pleadings like pleadings with

Improper purpose
Containing frivolous arguments
Facts or arguments that have no evidentiary support
Omissions and errors, misleading and crafty language
Baseless or unreasonable denials with non-application of mind
Negligence, failure to appear or unreasonable adjournments

All developed countries, like the UK and Australia, also have similar laws to combat frivolous litigation.

Policy gap 3 — Only in India are lawyers barred from contingent fee agreements under Bar Council rules

Indian lawyers are incentivized to make cases go longer, add complexity and never end. Consequently, they super-specialize and become innovators in creating alibis so that the case never ends. Lawyers in India do not investigate or legally vet the case before filing. There is no incentive to apply for tort claims or prosecute perjury, and therefore no deterrence is created. Lawyers are supposed to be gatekeepers who prevent frivolous litigation, but in India it is actually the opposite due to the perverse policy on contingent fees and torts.

Economic modelling suggests that — contingent fee arrangements reduce frivolous suits when compared to hourly fee arrangements. The reasoning is simple: When an attorney’s compensation is based solely on success, as opposed to hours billed, there is great incentive to accept and prosecute only meritorious cases.

At least one empirical analysis concludes that — “hourly fees encourage the filing of low-quality suits and increase the time to settlement (i.e., contingency fees increase legal quality and decrease the time to settlement).”

Negative impact of the 3 policy gaps

1. Conviction rates in India are abysmally low, below 10%, whereas internationally they range between 60% and 100%.[1]

a. Japan — 99.97%
b. China — 98%
c. Russia — 90%
d. UK — 80%
e. US — between 65% and 80%

2. Globally, 80%-99% of cases get settled before trial, but in India it is actually the opposite because:

There is absolutely no fear of the law and perjury is the norm.
Lawyers have no interest in driving a settlement due to the per-appearance fee system.
There is an artificial limit on fee shifting (awarding costs) and torts, and there is no motivation on the part of judges to create deterrence.

3. Contingent fees and torts cannot be enabled until such time as the fraternity of lawyers can be relied upon for ethical and moral conduct.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Easy solution to complex problem

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Policy 1 — Restricting perjury and making it a non-bailable offence will resolve 50% of cases immediately
(i.e. 1.5 crore cases within 3 months)

India must truly embrace “Satyam ev jayate” and make perjury a non-bailable offence. All lawyers and litigants should be given 3 month’s notice to refile their pleadings or settle the cases. They would have to face the mandatory consequences of false evidence or averments later found to be untrue.

There needs to be a basic expectation reset in Indian court litigation — the filings are correct, and the lawyer is responsible for prima facie diligence and candor before the courts. The role of the judiciary is not to distinguish between truth and falsehood but to determine the sequence of events, fix accountability and award penalties. Presenting false evidence or frivolous arguments must be seen as a separate offence in its own right.

At a minimum IPC 195(1)(b)(i) must be removed that states the following –
“No Court shall take cognizance- of any offence punishable under any of the following sections of the IPC (45 of 1860), namely, sections 193 to 196 (both inclusive), 199, 200, 205 to 211 (both inclusive) and 228, when such offence is alleged to have been committed in, or in relation to, any proceeding in any Court”

Because:

This necessarily makes the judges party to the complaint and therefore all judges are reluctant to prosecute perjury.
This estoppel opens up the possibility of corruption in the judiciary and public offices.
The intention of this provision has clearly backfired.
This restriction is unique to India and against international norms.

Policy 2 — Sanctions on lawyers to streamline the perverse incentives that plague the Indian justice system

The courts in the USA are mandated to compulsorily sanction lawyers for various kinds of professional misconduct in litigation (Federal Rule 11 of Civil Procedure).

Remedies and sanctions for lawyer’s misconduct can be categorized into three groups.

Sanctions and remedies for attorney misconduct which are available to public authorities. Such sanctions include professional discipline, criminal liability of lawyers who assist their clients in committing criminal acts, and judicially imposed sanctions such as for contempt of court. Professional discipline is generally the best known sanction for attorney misconduct.

Sanctions which are available to lawyers’ clients. For example, damages for attorney malpractice, forfeiture of an attorney’s fee, and judicial nullification of gifts or business transactions that breach a lawyer’s fiduciary duty to a client.

Remedies that may be available to third parties injured by a lawyer’s conduct on behalf of a client. These include injunctions against representing a client in violation of the lawyer’s duty to a third party, damages for breach of an obligation the attorney assumes to a non-client, and judicial nullification of settlements or jury verdicts obtained by attorney misconduct.

Policy 3 — Contingent Fee and Tort Law making judiciary 3x efficient.
(needs perjury law as discussed above to unlock)

In India, lawyers spend more time making the case complex and lengthy, based on the “dehari system”, while their western counterparts earn far more money by genuinely solving cases and creating real value for the country. The market economics, based on judicial policy on regulating the legal profession and ethics in India, is however geared towards permanently clogging the system. Three of the four stakeholders benefit by making the litigation never end.

There is a dire need for the system to be re-incentivized wherein the legal profession can generate 10x more value for the country in catching and penalizing law abusers. This in turn will also attract and create more talented lawyers because then they will be investing more time in investigating and preparing the cases to win.

Making perjury a non-bailable offence, together with rules for mandatorily sanctioning lawyer misconduct, will unlock contingent fees and tort law in India.

Allowing contingent fees for lawyers has four principal policy justifications:

Firstly, such arrangements

enable the impecunious (having no money) to obtain representation. Such persons cannot afford the costs of litigation unless and until it is successful. Even members of the middle- and upper-socioeconomic classes may find it difficult to pay legal fees in advance of success and collection of judgment. This is particularly so today as litigation has become more complex, often involving suits against multiple parties or multinational entities, and concerning matters requiring expert scientific and economic evidence.

Secondly,

Contingent fee arrangements can help align the interests of lawyer and client, as both will have a direct financial stake in the outcome of the litigation.

Third

By predicating an attorney’s compensation on the success of a suit, the attorney is given incentive to function as gatekeeper, screening cases for both merit and sufficiency of proof, and lodging only those likely to succeed. This provides as an important and genuine signal for litigants to understand the merit of their case.

Fourth

And more generally, all persons of sound mind should be permitted to contract freely, and restrictions on contingent fee arrangements inhibit this freedom.

Three other reasons justify unlocking contingent fees:

Clients, particularly unsophisticated ones, may be unable to determine when an attorney has underperformed or acted irresponsibly; in these instances, an attorney’s reputation would be unaffected, and thus the risk of reputational harm would not adequately protect against malfeasance.
Even when clients are aware of an attorney’s poor performance or irresponsibility, they may lack the means, media, or credibility to effectively harm the attorney’s reputation.
The interests of attorney and client are more closely aligned, ceteris paribus, when fee arrangements are structured so as to minimize perverse incentives.

Why contingent fees reduce caseload:

Fewer cases, better filing

Complainants get genuine advice about their chances of winning. They do not file unless they get the lawyer’s buy-in.

Lawyers screen cases for merit and sufficiency of proof before filing.
Lawyers don’t pick up bad cases, in order to protect their reputation.
Comprehensive evidence gathering happens before the lawyer decides to file a case.

Lawyers simplify the case and only allege charges that can be sustained.

They use simple and concise arguments for the judges.
Lawyers spend more time working hard outside the courts, thus increasing case quality.

Faster case proceedings

Fewer adjournments and hearings

They do better preparation and take fewer adjournments.
Multiple steps get completed in single hearings.
They create urgency for clients to show up at every hearing.
Lawyers do not unnecessarily appeal and stay matters, because they do not get paid per hearing and want quick results.

Case withdrawals

There are fewer takers if a lawyer drops a case when there are surprises from the client which will adversely impact the outcome.
Lawyers persuade complainants to settle when appropriate.

Conclusions

With 3+ crore cases pending and a dysfunctional justice delivery system, there is a mass exodus of Ultra High Net Worth Individuals (UHNIs) from India. There is an absolutely urgent need for the above reforms before the situation turns into a complete banana republic.

It is an absolute embarrassment and national shame to allow Indians to be blatant liars even in courts. Business and survival in India today is a race to the bottom: because there is no bar on falsehood and corrupt values, it is near impossible to survive without being part of the same culture.

It is prayed that even stricter laws than global norms be enacted, such that Indians can be trusted globally to never lie. It is not just about Indian courts; Indians could then be counted as the most truthful people even in foreign lands and command high respect globally.

Restoring justice delivery

Economic benefits of reform in perjury, contingent fee and legal ethics

1. Higher quality of litigation work.

2. Attract better talent to the profession due to higher profitability.

3. Create more jobs in the economy.

4. Offer higher pool of qualified people for judiciary.

5. Drastic reduction in corrupt or criminal activity due to fear of law.

6. Unlocking of trillions of dollars of wealth/resources stuck in litigation.

7. Better investment climate due to more reliability in business transactions.

8. Higher reliability in products and services.

9. Access to justice for all.

10. Restoring faith in judiciary and honour in being Indian.

Further reading

1. Perjury: Important Case Laws Showing How Seriously It is Taken in India! (lawyersclubindia.com)

2. LAW OF PERJURY- (Second Edition) — Indian Bar Association

[1] Comparison of the conviction rates of a few countries of the world | A wide angle view of India (wordpress.com)


Simon Willison

s3-credentials: a tool for creating credentials for S3 buckets

I've built a command-line tool called s3-credentials to solve a problem that's been frustrating me for ages: how to quickly and easily create AWS credentials (an access key and secret key) that have permission to read or write from just a single S3 bucket. The need for bucket credentials for S3 I'm an enormous fan of Amazon S3: I've been using it for fifteen years now (since the launch in 2006

I've built a command-line tool called s3-credentials to solve a problem that's been frustrating me for ages: how to quickly and easily create AWS credentials (an access key and secret key) that have permission to read or write from just a single S3 bucket.

The need for bucket credentials for S3

I'm an enormous fan of Amazon S3: I've been using it for fifteen years now (since the launch in 2006) and it's my all-time favourite cloud service: it's cheap, reliable and basically indestructible.

You need two credentials to make API calls to S3: an AWS_ACCESS_KEY_ID and a AWS_SECRET_ACCESS_KEY.
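As a minimal sketch of how those two values get used, this is roughly what a boto3 client call looks like with explicit credentials; normally they would come from environment variables or a credentials file rather than being passed inline like this.

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # AWS_ACCESS_KEY_ID
    aws_secret_access_key="...",   # AWS_SECRET_ACCESS_KEY
)

# List the buckets visible to these credentials
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])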

Since I often end up adding these credentials to projects hosted in different environments, I'm not at all keen on using my root-level credentials here: usually a project works against just one dedicated S3 bucket, so ideally I would like to create dedicated credentials that are limited to just that bucket.

Creating those credentials is surprisingly difficult!

Dogsheep Photos

The last time I solved this problem was for my Dogsheep Photos project. I built a tool that uploads all of my photos from Apple Photos to my own dedicated S3 bucket, and extracts the photo metadata into a SQLite database. This means I can do some really cool tricks using SQL to analyze my photos, as described in Using SQL to find my best photo of a pelican according to Apple Photos.

The photos are stored in a S3 private bucket, with a custom proxy in front of them that I can use to grant access to specific photographs via a signed URL.
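The signed-URL part of that is straightforward with boto3. Here is a rough sketch using the bucket name from this project and a made-up object key; the real proxy is its own project and does more than this.

import boto3

s3 = boto3.client("s3")

# Generate a time-limited signed URL for a single object in the private bucket.
# "some-photo.jpeg" is an illustrative placeholder key.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "dogsheep-photos-simon", "Key": "some-photo.jpeg"},
    ExpiresIn=600,  # valid for ten minutes
)
print(url)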

For the proxy, I decided to create dedicated credentials that were allowed to make read-only requests to my private S3 bucket.

I made detailed notes along the way as I figured out how to do that. It was really hard! There's one step where you literally have to hand-edit a JSON policy document that looks like this (replace dogsheep-photos-simon with your own bucket name) and paste that into the AWS web console:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::dogsheep-photos-simon/*" ] } ] }

I set myself an ambition to try and fix this at some point in the future (that was in April 2020).

Today I found myself wanting new bucket credentials, so I could play with Litestream. I decided to solve this problem once and for all.

I've also been meaning to really get my head around Amazon's IAM permission model for years, and this felt like a great excuse to figure it out through writing code.

The process in full

Here are the steps you need to take in order to get long-lasting credentials for accessing a specific S3 bucket.

Create an S3 bucket.
Create a new, dedicated user. You need a user and not a role because long-lasting AWS credentials cannot be created for roles - and we want credentials we can use in a project without constantly needing to update them.
Assign an "inline policy" to that user granting them read-only or read-write access to the specific S3 bucket - this is the JSON format shown above.
Create AWS credentials for that user.

There are plenty of other ways you can achieve this: you can add permissions to a group and assign that user to a group, or you can create a named "managed policy" and attach that to the user. But using an inline policy seems to be the simplest of the available options.

Using the boto3 Python client library for AWS this sequence converts to the following API calls:

import boto3
import json

s3 = boto3.client("s3")
iam = boto3.client("iam")

username = "my-new-user"
bucket_name = "my-new-bucket"
policy_name = "user-can-access-bucket"
policy_document = {
    "... that big JSON document ...": ""
}

# Create the bucket
s3.create_bucket(Bucket=bucket_name)

# Create the user
iam.create_user(UserName=username)

# Assign the policy to the user
iam.put_user_policy(
    PolicyDocument=json.dumps(policy_document),
    PolicyName=policy_name,
    UserName=username,
)

# Retrieve and print the credentials
response = iam.create_access_key(
    UserName=username,
)
print(response["AccessKey"])

Turning it into a CLI tool

I never want to have to figure out how to do this again, so I decided to build a tool around it.

s3-credentials is a Python CLI utility built on top of Click using my click-app cookiecutter template.

It's available through PyPI, so you can install it using:

% pip install s3-credentials

The main command is s3-credentials create, which runs through the above sequence of steps.

To create read-only credentials for my existing static.niche-museums.com bucket I can run the following:

% s3-credentials create static.niche-museums.com --read-only
Created user: s3.read-only.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
Attached policy s3.read-only.static.niche-museums.com to user s3.read-only.static.niche-museums.com
Created access key for user: s3.read-only.static.niche-museums.com
{
    "UserName": "s3.read-only.static.niche-museums.com",
    "AccessKeyId": "AKIAWXFXAIOZJ26NEGBN",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2021-11-03 03:21:12+00:00"
}

The command shows each step as it executes, and at the end it outputs the newly created access key and secret key.

It defaults to creating a user with a username that reflects what it will be able to do: s3.read-only.static.niche-museums.com. You can pass --username something to specify a custom username instead.

If you omit the --read-only flag it will create a user with read and write access to the bucket. There's also a --write-only flag which creates a user that can write to but not read from the bucket - useful for use-cases like logging or backup scripts.

The README has full documentation on the various other options, plus details of the other s3-credentials utility commands list-users, list-buckets, list-user-policies and whoami.

Learned along the way

This really was a fantastic project for deepening my understanding of S3, IAM and how it all fits together. A few extra points I picked up:

AWS users can be created with something called a permissions boundary. This is an advanced security feature which lets a user be restricted to a set of maximum permissions - for example, only allowed to interact with S3, not any other AWS service.

Permissions boundaries do not themselves grant permissions - a user will not be able to do anything until extra policies are added to their account. It instead acts as defense in depth, setting an upper limit to what a user can do no matter what other policies are applied to them.

There's one big catch: the value you set for a permissions boundary is a very weakly documented ARN string - the boto3 documentation simply calls it "The ARN of the policy that is used to set the permissions boundary for the user". I used GitHub code search to dig up some examples, and found arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess and arn:aws:iam::aws:policy/AmazonS3FullAccess to be the ones most relevant to my project. This random file appears to contain more.
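For what it's worth, setting a boundary at user-creation time looks roughly like this with boto3; the username here is a made-up example, and the ARN is the S3 read-only one mentioned above.

import boto3

iam = boto3.client("iam")

# The boundary caps what the user could ever do - it grants nothing by itself,
# so an inline or managed policy still has to be attached afterwards.
iam.create_user(
    UserName="s3.read-only.example-bucket",
    PermissionsBoundary="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)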

Those JSON policy documents really are the dark secret magic that holds AWS together. Finding trustworthy examples of read-only, read-write and write-only policies for specific S3 buckets was not at all easy. I made detailed notes in this comment thread - the policies I went with are baked into the policies.py file in the s3-credentials repository. If you know your way around IAM I would love to hear your feedback on the policies I ended up using!

Writing automated tests for code that makes extensive use of boto3 - such that those tests don't make any real HTTP requests to the API - is a bit fiddly. I explored a few options for this - potential candidates included the botocore.stub.Stubber class and the VCR.py library for saving and replaying HTTP traffic (see this TIL). I ended up going with Python's Mock class, via pytest-mock - here's another TIL on the pattern I used for that. (Update: Jeff Triplett pointed me to moto which looks like a really great solution for this.)
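The general shape of that pytest-mock pattern is easy to sketch. The helper function below is a toy stand-in for the real code under test, not part of s3-credentials itself; the test simply shows how patching boto3.client keeps everything away from the network (requires pytest and pytest-mock for the mocker fixture).

import boto3


def create_user_and_key(username):
    """Toy helper standing in for the real code under test."""
    iam = boto3.client("iam")
    iam.create_user(UserName=username)
    return iam.create_access_key(UserName=username)["AccessKey"]


def test_create_user_never_hits_aws(mocker):
    # Patch boto3.client so no real HTTP requests are made
    fake_client = mocker.patch("boto3.client").return_value
    fake_client.create_access_key.return_value = {
        "AccessKey": {"AccessKeyId": "AKIA-EXAMPLE"}
    }

    key = create_user_and_key("my-new-user")

    fake_client.create_user.assert_called_once_with(UserName="my-new-user")
    assert key["AccessKeyId"] == "AKIA-EXAMPLE"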

Feedback from AWS experts wanted

The tool I've built solves my specific problem pretty well. I'm nervous about it though: I am by no means an IAM expert, and I'm somewhat paranoid that I may have made a dumb mistake and baked it into the tooling.

As such, the README currently carries a warning that you should review what the tool is doing carefully before trusting it against your own AWS account!

If you are an AWS expert, you can help: I have an open issue requesting expert feedback, and I'd love to hear from people with deep experience who can either validate that my approach is sound or help explain what I'm doing wrong and how the process can be fixed.

Tuesday, 02. November 2021

MyDigitalFootprint

Optimising for “performance” is directional #cop26

In the week where the worlds “leaders” meet to discuss and agree on the future of our climate at  #COP26, I remain sceptical about agreements.  At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out.   Here is a review of all the past COP meetings and outcomes. It is hard to find any resources to compare previous agreements

In the week where the world’s “leaders” meet to discuss and agree on the future of our climate at #COP26, I remain sceptical about agreements. At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out. Here is a review of all the past COP meetings and outcomes. It is hard to find any resources that compare previous agreements with achievements. Below is from the UN.



The reason I remain doubtful and sceptical is that the decision of 1.5 degrees is framed. We are optimising for a goal. In this case, we do not want to increase our global temperature beyond 1.5 degrees. Have you ever tried to heat water and stop the heating process so that a target temperature was reached? Critically, you only have one go. Try it. Fill a pan with ice, set a target (say 38.4 degrees), use a thermometer, and switch off the heat when you think the final temperature will land on your target. Did you manage to get within 1.5 degrees of your target?

The Peak Paradox framework forces us to think that we cannot optimise for one outcome, one goal, one target or for a one-dimensional framing. To do so would be to ignore other optimisations, visions, ideas or beliefs. When we optimise for one thing, something else will not have an optimal outcome.

In business, we are asked to articulate a single purpose, one mission, the single justification that sets out a reason to exist. The more we as a board or senior leadership team optimise for “performance”, the more we become directional. Performance itself, along with the thinking that drives the best allocation of resources, means we are framed to optimise for an outcome. In social sciences and economics, this is called path dependency. The unintended consequences of our previous decisions that drive efficiency and effectiveness might not impact us directly, but they do affect other parts of an interdependent system which, through several unconnected actions, feed back into our future decisions and outcomes. Complex systems thinking highlights such causes and effects of positive and negative feedback loops.

For example: dishwasher tablets make me more productive but make the water system less efficient by reducing the effectiveness of the ecosystem. However, I am framed by performance and my own efficiency; therefore, I have to optimise my time, and the dishwasher is a perfect solution. Indeed, we are marketed the idea that the dishwasher saves water and energy compared to other washing-up techniques. The single narrow view of optimisation is straightforward and easy to understand. The views we hold on everything from battery cars to mobile phones are framed as being about becoming more productive. Performance as a metric matters more than anything else. Why? Because the story says performance creates economic activity, which creates growth, which means fewer people are in poverty.

“Performance” is a one-dimensional optimisation where economic activity based on financial outcomes wins.  You and I are the agents of any increase in performance, and the losses in the equation of equilibrium are somewhere else in the system. We are framed and educated to doubt the long, complex link between the use of anti-bacterial wipes and someone else’s skin condition. Performance as a dimension for measurement creates an optimal outcome for one and a sub-optimal outcome for someone else. 

Performance as a measure creates an optimal outcome for one and a sub-optimal outcome for someone else. 

If it were just me, the cause-and-effect relationship would be hard to see, but when more of humanity optimises for performance, it is the scale at which we all lose that suddenly comes into effect. Perhaps it is time to question the simple linear ideas of one purpose, one measure, one mission, and try to optimise for different things simultaneously; however, that means simple political messages, tabloid headlines, and social-media-driven advertising will fail. Are leaders ready to lead, or do they enjoy too much power to do the right thing?


Monday, 01. November 2021

Phil Windley's Technometria

Picos at the Edge

Summary: The future of computing is moving from the cloud to the edge. How can we create a decentralized, general-purpose computing mesh? Picos provide a model for exploration. Rainbow's End is one of my favorite books. A work of fiction, Rainbow's End imagines life in a near future world where augmented reality and pervasive IoT technology are the fabric within which people live th

Summary: The future of computing is moving from the cloud to the edge. How can we create a decentralized, general-purpose computing mesh? Picos provide a model for exploration.

Rainbow's End is one of my favorite books. A work of fiction, Rainbow's End imagines life in a near future world where augmented reality and pervasive IoT technology are the fabric within which people live their lives. This book, from 2006, is perhaps where I first began to understand the nature and importance of computing at the edge. A world where computing is ambient and immersive can't rely only on computers in the cloud.

We have significant edge computing now in the form of powerful mobile devices, but that computing is not shared without roundtrips to centralized cloud computing. One of the key components of 5G technology is compute and storage at the edge—on the cell towers themselves—that distributes computing and reduces latency. Akamai, CloudFront, and others have provided these services for years, but still in a data center somewhere. 5G moves it right to the pole in your backyard.

But the vision I've had since reading Rainbow's End is not just distributed, but decentralized, edge computing. Imagine your persistent compute jobs in interoperable containers moving around a mesh of compute engines that live on phones, laptops, servers, or anywhere else where spare cycles exist.

IPFS does this for storage, decentralizing file storage by putting files in shared spaces at the edge. With IPFS, people act as user-operators to host and receive content in a peer-to-peer manner. If a file gets more popular, IPFS attempts to store it in more places and closer to the need.

You can play with this first hand at NoFilter.org, which brands itself as "the world's first unstoppable, uncensorable, undeplatformable, decentralized freedom of speech app." There's no server storing files, just a set of Javascript files that run in your browser. Identity is provided via Metamask which uses an Ethereum address as your identifier. I created some posts on NoFilter to explore how it works. If you look at the URL for that link, you'll see this:

https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84

The portion after the # is the Ethereum address I used at NoFilter. If we look at a single post, you'll see a URL like this:

https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84/QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt

Note that there's an additional identifier following the slash after my Ethereum address. This is the IPFS hash of the content of that post and is available on IPFS directly. What's stored on IPFS is the JSON of the post that the Javascript renders in the browser.

{ "author": "0xdbca72ed00c24d50661641bf42ad4be003a30b84", "title": "The IPFS Address", "timestamp": "2021-10-25T22:46:46-0-6:720", "body": "<p>If I go here:</p><p><a href=\"https://ipfs.io/ipfs/ QmT57jkkR2sh2i4uLRAZuWu6TatEDQdKN8HnwaZGaXJTrr\";>..." }

As far as I can tell, this is completely decentralized. The identity is just an Ethereum address that anyone can create using Metamask, a Javascript application that runs in the browser. The files are stored on IPFS, decentralized on storage providers around the net. They are rendered using Javascript that runs in the browser. So long as you have access to the Javascript files from somewhere you can write and read articles without reliance on any central server.
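As a quick sanity check, that post JSON can be pulled straight from a public IPFS gateway. Here's a small sketch using the hash from the URL above - the gateway choice, its availability, and the exact keys in the returned document are assumptions on my part:

import requests

cid = "QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt"
# Any public gateway should work; ipfs.io is used here for illustration
post = requests.get(f"https://ipfs.io/ipfs/{cid}", timeout=30).json()
print(post["author"], post["title"])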

Decentralized Computing

My vision for picos is that they can operate on a decentralized mesh of pico engines in a similar decentralized fashion. Picos are already encapsulations of computation with isolated state and programs that control their operation. There are two primary problems with the current pico engine that have to be addressed to make picos independent of the underlying engine:

Picos are addressed by URL, so the pico engine's host name or IP address becomes part of the pico's address.
Picos have a persistence layer that is currently provided by the engine the pico is hosted on.

The first problem is solvable using DIDs and DIDComm. We've made progress in this area. You can create and use DIDs in a pico. But they are not, yet, the primary means of addressing and communicating with the pico.

The second problem could be addressed with IPFS. We've not done any work in this area yet. So I'm not aware of the pitfalls or problems, but it looks doable.

With these two architectural issues out of the way, implementing a way for picos to move easily between engines would be straightforward. We have import and export functionality already. I'm envisioning something that picos could control themselves, on demand, programmatically. Ultimately, I want the pico to choose where it's hosted based on whatever factors the owner or programmer deems most important. That could be hosting cost, latency, availability, capacity, or other factors. A decentralized directory for discovering engines that advertise certain features or factors, and a means to pay them, would have to be built—possibly as a smart contract.
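Purely as an illustration of that last idea, engine selection could reduce to scoring advertised engines against owner-chosen weights. Everything below - field names, weights, URLs - is hypothetical, not part of the pico engine:

# Hypothetical sketch: rank candidate engines by owner-weighted factors
WEIGHTS = {"cost": -1.0, "latency_ms": -0.01, "availability": 2.0}

def choose_engine(engines, weights=WEIGHTS):
    # Higher score wins; negative weights penalize cost and latency
    return max(engines, key=lambda e: sum(w * e[k] for k, w in weights.items()))

engines = [
    {"url": "https://engine-a.example", "cost": 0.50, "latency_ms": 40, "availability": 0.999},
    {"url": "https://engine-b.example", "cost": 0.10, "latency_ms": 200, "availability": 0.950},
]
print(choose_engine(engines)["url"])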

A trickier problem is protecting picos from malevolent engines. This is the hardest problem, as far as I can tell. Initially, collections of trusted engines, possibly using staking, could be used.

There are plenty of fun, interesting problems if you'd like to help.

Use Picos

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you need help, contact me and we'll get you added to the Picolabs Slack. We'd love to help you use picos for your next distributed application.

If you're interested in the pico engine itself, it is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing to the engine are in the repository's README.

Bonus Material
Rainbows End by Vernor Vinge

The information revolution of the past thirty years blossoms into a web of conspiracies that could destroy Western civilisation. At the centre of the action is Robert Gu, a former Alzheimer's victim who has regained his mental and physical health through radical new therapies, and his family. His son and daughter-in-law are both in the military - but not a military we would recognise - while his middle school-age granddaughter is involved in perhaps the most dangerous game of all, with people and forces more powerful than she or her parents can imagine.

The End of Cloud Computing by Peter Levine

Photo Credit: SiO2 Fracture: Chemomechanics with a Machine Learning Hybrid QM/MM Scheme from Argonne National Laboratory (CC BY-NC-SA 2.0)

Tags: picos cloud edge mesh actors


Doc Searls Weblog

Going west

Long ago a person dear to me disappeared for what would become eight years. When this happened I was given comfort and perspective by his maternal grandfather, a professor of history whose study concentrated on the American South after the Civil War. “You know what the most common record of young men was, after the […]

Long ago a person dear to me disappeared for what would become eight years. When this happened I was given comfort and perspective by his maternal grandfather, a professor of history whose study concentrated on the American South after the Civil War.

“You know what the most common record of young men was, after the Civil War?” he asked.

“You mean census records?”

“Yes, and church records, family histories, all that.”

“I don’t know.”

“Two words: Went west.”

He went on to explain that, except for the natives here in the U.S., nearly all of our ancestors had gone west. Literally or metaphorically, voluntarily or not, they went west.

More importantly, most were not going back. Many, perhaps most, were hardly heard from again in the places they left. The break from the past in countless places was sadly complete for those left behind. All that remained were those two words.

Went west.

This fact, he said, is at the heart of American rootlessness.

“We are the least rooted civilization on Earth,” he said. “This is why we have weaker family values than any other country.”

This is also why he thought political talk about “family values” was especially ironic. We may have those values, but they tend not to keep us from going west anyway.

This comes to mind because I am haunted by Harry Chapin‘s song, “Cat’s in the Cradle.” It’s hard not to be moved by it.

Friday, 29. October 2021

Simon Willison

DuckDB-Wasm: Efficient Analytical SQL in the Browser

DuckDB-Wasm: Efficient Analytical SQL in the Browser First SQLite, now DuckDB: options for running database engines in the browser using WebAssembly keep on growing. DuckDB means browsers now have a fast, intuitive mechanism for querying Parquet files too. This also supports the same HTTP Range header trick as the SQLite demo from a while back, meaning it can query large databases loaded over HT

DuckDB-Wasm: Efficient Analytical SQL in the Browser

First SQLite, now DuckDB: options for running database engines in the browser using WebAssembly keep on growing. DuckDB means browsers now have a fast, intuitive mechanism for querying Parquet files too. This also supports the same HTTP Range header trick as the SQLite demo from a while back, meaning it can query large databases loaded over HTTP without downloading the whole file.

Via @hfmuehleisen
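The same trick is easy to try outside the browser with the DuckDB Python package rather than the WASM build - a minimal sketch, where the Parquet URL is a placeholder and the httpfs extension does the range-request work:

import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
# DuckDB fetches only the byte ranges it needs from the remote Parquet file
count = con.execute(
    "SELECT count(*) FROM read_parquet('https://example.com/data.parquet')"
).fetchone()
print(count)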

Thursday, 28. October 2021

Phil Windley's Technometria

Token-Based Identity

Summary: Token-based identity systems move us from talking about who, to thinking about what, so that people can operationalize their digital lives. Token-based identity systems support complex online interactions that are flexible, ad hoc, and cross-domain. I've spent some time thinking about this article from PeterVan on Programmable Money and Identity. Peter references a white pap

Summary: Token-based identity systems move us from talking about who, to thinking about what, so that people can operationalize their digital lives. Token-based identity systems support complex online interactions that are flexible, ad hoc, and cross-domain.

I've spent some time thinking about this article from PeterVan on Programmable Money and Identity. Peter references a white paper on central bank digital currencies and one on identity composability by Andrew Hong to lead into a discussion of account- and token-based1 identity. In his article, Peter says:

For Account-based identity, you need to be sure of the identity of the account holder (the User ID / Password of your Facebook-account, your company-network, etc.). For Token-based identity (Certified claim about your age for example) you need a certified claim about an attribute of that identity.

In other words, while account-based identity focuses on linking a person in possession of authentication factors to a trove of information, token-based identity is focused on claims about the subject's attributes. More succinctly: account-based identity focuses on who you are whereas token-based identity is focused on what you are.

One of my favorite scenarios for exploring this is meeting a friend for lunch. You arrive at the restaurant on time and she’s nowhere to be found. You go to the hostess to inquire about the reservation. She tells you that your reservation is correct, and your friend is already there. She escorts you to the table where you greet your friend. You are seated and the hostess leaves you with a menu. Within a few moments, the waitress arrives to take your order. You ask a few questions about different dishes. You both settle on your order and the waitress leaves to communicate with the kitchen. You happily settle in to chat with your friend, while your food is being prepared. Later you might get a refill on a drink, order dessert, and eventually pay.

While you, your friend, the host, and waitstaff recognized, remembered, and interacted with people, places, and things countless times during this scenario, at no time were you required to be identified as a particular person. Even paying with a credit card doesn't require that. Credit cards are a token-based identity system that says something about you rather than who you are. And while you do have an account with your bank, the brilliance of the credit card is that you no longer have to have accounts with every place you want credit. You simply present a token that gives the merchant confidence that they will be paid. Here are a few of the "whats" in this scenario:

My friend
The person sitting at table 3
Over 21
Guest who ordered the medium-rare steak
Someone who needs a refill
Excellent tipper
Person who owes $179.35
Person in possession of a MasterCard

You don't need an account at the restaurant for any of this to work. But you do need relationships. Some, like the relationship with your friend and MasterCard, are long-lived and identified. Most are ephemeral and pseudonymous. While the server at the restaurant certainly "identifies" patrons, they usually forget them as soon as the transaction is complete. And the identification is usually pseudonymous (e.g. "the couple at table three" rather than "Phillip and Lynne Windley").

In the digital realm, we suffer from the problem of not being in proximity to those we're interacting with. As a result, we need a technical means to establish a relationship. Traditionally, we've done that with accounts and identifying, using authentication factors, who is connecting. As a result, all online relationships tend to be long-lived and identified in important ways—even when they don't need to be. This has been a boon to surveillance capitalism.

In contrast, SSI establishes peer-to-peer relationships using peer DIDs (autonomic identifiers) that can be forgotten or remembered as needed. These relationships allow secure communication for issuing and presenting credentials that say something about the subject (what) without necessarily identifying the subject (who). This token-based identity system more faithfully mirrors the way identity works in the physical world.

Account- and token-based identity are not mutually exclusive. In fact, token-based identity often has its roots in an account somewhere, as we discovered about MasterCard. But the key is that you're leveraging that account to avoid being in an administrative relationship in other places. To see that, consider the interactions that happen after an automobile accident.

Account and token interactions after an automobile accident

In this scenario, two drivers, Alice and Bob, have had an accident. The highway patrol has come to the scene to make an accident report. Both Alice and Bob have a number of credentials (tokens) in their digital wallets that they control and will be important in creating the report:

Proof of insurance issued by their respective insurance companies
Vehicle title issued by the state, founded on a vehicle original document from the vehicle's manufacturer
Vehicle registration issued by the Department of Motor Vehicles (DMV)
Driver's license issued by the Department of Public Safety (DPS) in Alice's case and the DMV in Bob's

In addition, the patrol officer has a badge from the Highway Patrol.

Each of these credentials is the fruit of an account of some kind (i.e. the person was identified as part of the process). But the fact that Alice, Bob, and the patrol officer have tokens of one sort or another that stem from those accounts allows them to act autonomously from those administrative systems to participate in a complex, ad hoc, cross-domain workflow that will play out over the course of days or weeks.

Account-based and token-based identity system co-exist in any sufficiently complex ecosystem. Self-sovereign identity (SSI) doesn't replace administrative identity systems, it gives us another tool that enables better privacy, more flexible interactions, and increased autonomy. In the automobile scenario, for example, Alice and Bob will have an ephemeral relationship that lasts a few weeks. They'll likely never see the patrol officer after the initial encounter. Alice and Bob would make and sign statements that everyone would like to have confidence in. The police officer would create an accident report. All of this is so complex and unique that it is unlikely to ever happen within a single administrative identity system or on some kind of platform.

Token-based identity allows people to operationalize their digital lives by supporting online interactions that are multi-source, fluid, multi-pseudonymous, and decentralized. Ensuring that the token-based identity system is also self-sovereign ensures that people can act autonomously, without being inside someone else's administrative identity system, as they go about their online lives. I think of it as digital embodiment—giving people a way to be peers with other actors in online interactions.

Notes

1. I'm using "token" in the general sense here. I'm not referring to either cryptocurrency or hardware authentication devices specifically.

Photo Credit: Tickets from Clint Hilbert (Pixabay)

Tags: identity ssi tokens surveillance+capitalism relationships


Hans Zandbelt

mod_auth_openidc vs. legacy Web Access Management

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.


Simon Willison

aws-lambda-adapter

aws-lambda-adapter AWS Lambda added support for Docker containers last year, but with a very weird shape: you can run anything on Lambda that fits in a Docker container, but unlike Google Cloud Run your application doesn't get to speak HTTP: it needs to run code that listens for proprietary AWS lambda events instead. The obvious way to fix this is to run some kind of custom proxy inside the cont

aws-lambda-adapter

AWS Lambda added support for Docker containers last year, but with a very weird shape: you can run anything on Lambda that fits in a Docker container, but unlike Google Cloud Run your application doesn't get to speak HTTP: it needs to run code that listens for proprietary AWS lambda events instead. The obvious way to fix this is to run some kind of custom proxy inside the container which turns AWS runtime events into HTTP calls to a regular web application. Serverlessish and re:Web are two open source projects that implemented this, and now AWS have their own implementation of that pattern, written in Rust.
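The pattern itself is simple to sketch: a small handler that replays the Lambda event against the web app listening inside the same container. This is an illustrative Python stand-in for the Rust adapter, assuming an API Gateway v2-style event payload and an app on a hypothetical local port:

import urllib.request

def handler(event, context):
    # Translate the proprietary Lambda event into a plain HTTP request
    # against the web application listening on localhost (port is hypothetical)
    method = event.get("requestContext", {}).get("http", {}).get("method", "GET")
    path = event.get("rawPath", "/")
    body = (event.get("body") or "").encode()
    req = urllib.request.Request(
        f"http://127.0.0.1:8080{path}", data=body or None, method=method
    )
    with urllib.request.urlopen(req) as resp:
        # Return the app's response in the shape Lambda expects
        return {"statusCode": resp.status, "body": resp.read().decode()}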


Weeknotes: Learning Kubernetes, learning Web Components

I've been mainly climbing the learning curve for Kubernetes and Web Components this week. I also released Datasette 0.59.1 with Python 3.10 compatibility and an updated Docker image. Datasette 0.59.1 A few weeks ago I wrote about finding and reporting an asyncio bug in Python 3.10 that I discovered while trying to get Datasette to work on the latest release of Python. Łukasz Langa offered a

I've been mainly climbing the learning curve for Kubernetes and Web Components this week. I also released Datasette 0.59.1 with Python 3.10 compatibility and an updated Docker image.

Datasette 0.59.1

A few weeks ago I wrote about finding and reporting an asyncio bug in Python 3.10 that I discovered while trying to get Datasette to work on the latest release of Python.

Łukasz Langa offered a workaround which I submitted as a PR to the Janus library that Datasette depends on.

Andrew Svetlov landed and shipped that fix, which unblocked me from releasing Datasette 0.59.1 that works with Python 3.10.

The last step of the Datasette release process, after the package has been released to PyPI, is to build a new Docker image and publish it to Docker Hub. Here's the GitHub Actions workflow that does that.

It turns out this stopped working when I released Datasette 0.59! I was getting this cryptic error message halfway through the image build process:

/usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory

I opened an issue for myself and started investigating.

The culprit was this section of the Datasette Dockerfile:

# software-properties-common provides add-apt-repository
# which we need in order to install a more recent release
# of libsqlite3-mod-spatialite from the sid distribution
RUN apt-get update && \
    apt-get -y --no-install-recommends install software-properties-common && \
    add-apt-repository "deb http://httpredir.debian.org/debian sid main" && \
    apt-get update && \
    apt-get -t sid install -y --no-install-recommends libsqlite3-mod-spatialite && \
    apt-get remove -y software-properties-common && \

This was a hack I introduced seven months ago in order to upgrade the bundled SpatiaLite to version 5.0.

SpatiaLite 5.0 wasn't yet available in Debian stable back then, so I used the above convoluted hack to install it from Debian unstable ("Sid") instead.

When the latest stable version of Debian, Debian Bullseye, came out on October 9th my hack stopped working! I guess that's what I get for messing around with unstable software.

Thankfully, Bullseye now bundles SpatiaLite 5, so the hack I was using is no longer necessary. I upgraded the Datasette base image from python:3.9.2-slim-buster to 3.9.7-slim-bullseye, installed SpatiaLite the non-hacky way and fixed the issue.

Doing so also dropped the size of the compressed Datasette image from 94.37MB to 78.94MB, which is nice.

Learning Kubernetes

Datasette has been designed to run in containers from the very start. I have dozens of instances running on Google Cloud Run, and I've done a bunch of work with Docker as well, including trying out mechanisms to programmatically launch new Datasette containers via the Docker API.

I've dragged my heels on really getting into Kubernetes due to the infamously tough learning curve, but I think it's time to dig in, figure out how to use it and work out what new abilities it can provide me.

I've spun up a small Kubernetes cluster on Digital Ocean, mainly because I trust their UI to help me not spend hundreds of dollars by mistake. Getting the initial cluster running was very pleasant.

Now I'm figuring out how to do things with it.

DigitalOcean's Operations-ready DigitalOcean Kubernetes (DOKS) for Developers course (which started as a webinar) starts OK and then gets quite complicated quite fast.

I got Paul Bouwer's hello-kubernetes demo app working - it introduced me to Helm, but that operates at a higher level than I'm comfortable with - learning my way around kubectl and Kubernetes YAML is enough of a mental load already without adding an extra abstraction on top.

I'm reading Kubernetes: Up and Running which is promising so far.

My current goal is to figure out how to run a Datasette instance in a Kubernetes container with an attached persistent volume, so it can handle SQLite writes as well as reads. It looks like StatefulSets will be key to getting that to work. (Update: apparently not! Graham Dumpleton and Frank Wiles assure me that I can do this with just a regular Deployment.)

I'll be sure to write this up as a TIL once I get it working.

Learning Web Components

Datasette's visualization plugins - in particular datasette-vega - are long overdue for some upgrades.

I've been trying to find a good pattern for writing plugins that avoids too much (ideally any) build tool complexity, and that takes advantage of modern JavaScript - in particular JavaScript modules, which Datasette has supported since Datasette 0.54.

As such, I'm deeply intrigued by Web Components - which had a big moment this week when it was revealed that Adobe had used them extensively for Photoshop on the web.

One of my goals for Datasette visualization plugins is for them to be usable on other external pages - since Datasette can expose JSON data over CORS, being able to drop a visualization into an HTML page would be really neat (especially for newsroom purposes).

Imagine being able to import a JavaScript module and add something like this to get a map of all of the power plants in Portugal:

<datasette-cluster-map data="https://global-power-plants.datasettes.com/global-power-plants/global-power-plants.json?country_long=Portugal"> </datasette-cluster-map>

I'm hoping to be able to build components using regular, unadorned modern JavaScript, without the complexity of a build step.

As such, I've been exploring Skypack (TIL) and Snowpack which help bridge the gap between build-tooling-dependent npm packages and the modern world of native browser ES modules.

I was also impressed this week by Tonic, a framework for building components without a build step that weighs in at just 350 lines of code and makes extremely clever use of tagged template literals and async generators.

This morning I saw this clever example of a Single File Web Component by Kristofer Joseph - I ended up creating my own annotated version of his code which I shared in this TIL.

Next step: I need to write some web components of my own!

Releases this week

datasette: 0.59.1 - (99 releases total) - 2021-10-24
An open source multi-tool for exploring and publishing data

datasette-hello-world: 0.1 - 2021-10-21
The hello world of Datasette plugins

TIL this week

Removing a git commit and force pushing to remove it from history
Understanding Kristofer Joseph's Single File Web Component

Wednesday, 27. October 2021

Identity Praxis, Inc.

A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about […] The post A call for New PD&I Exchange Models, Th

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS

I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about personal data and identity (PD&I). We had an enlightening discussion (see video of the interview: 46:16 min). The conversation took us down unexpected paths and brought several insights and recommendations to light.

In our interview, we discussed the role of personal data and identity and how enterprises use it to know and serve their customers and protect the enterprises’ interests. To my delight, we uncovered three ideas that could help us all better protect PD&I and improve the market’s efficiency.

Idea One: Build out and refine “The Trust Chain”, or “chain of trust,” a PD&I industry value chain framework envisioned by Virginie.
Idea Two: Refine PD&I industry practices, optimize all of the data that mobile operators are holding on to, and ensure that appropriate technical, legal, and ethical exchange mechanisms are in place to ensure responsible use of PD&I.
Idea Three: Standardize a connected individual identity scoring scheme, i.e., a scheme for identity and transaction verification, often centralized around mobile data. This scheme is analogous to credit scoring for lending and fraud detection for credit card purchases. It would help enterprises simultaneously better serve their customers, protect PD&I, mitigate fraud, and improve their regulatory compliance efforts.

According to Virginie, a commercial imperative for an enterprise is knowing their customer–verifying the customer’s identity prior to and during engagements. Knowing the customer helps enterprises not only better serve the customer, but also manage costs, reduce waste, mitigate fraud, and stay on the right side of the law and regulations. Virginie remarked that her customers often say, “I want to know who is my end user. Who am I talking to? Am I speaking to the right person in front of me?” This is hard enough in the physical realm, and in the digital realm it is even more difficult. The ideas discussed in this interview can help enterprises answer these questions.

Consumer Identity and the Enterprise

The mobile phone has become a cornerstone for digital identity management and commerce. In fact, Cameron D’Ambrosi, Managing Director of Liminal, has gone as far as to suggest mobile has an irreplaceable role in the digital identity ecosystem.1 Mobile can help enterprises be certain whom they are dealing with, and with this certainty help them, with confidence, successfully connect, communicate, and engage people in nearly any transactions.

To successfully leverage mobile as a tool for customer identity management, which is an enabler of what is known as “know your customer” or KYC, enterprises work with organizations like GMS to integrate mobile identity verification into their commercial workflow. In our interview, Virginie notes that GMS is a global messaging aggregator, the “man in the middle.” It provides messaging and related services powered by personal data and identity to enterprises and mobile operators, including KYC services.

Benefits gained from knowing your customer

There is a wide range of use cases for why an enterprise may want to use services provided by players like GMS. They can:

Improve customer experience: Knowing the customer and the context of a transaction can help improve the customer experience.
Maintain data hygiene: Ensuring data in a CRM or customer system of record is accurate can improve marketing, save money, reduce fraud, and more.
Effectively manage data: Reducing duplicate records, tagging data, and more can reduce costs, create efficiency, and generate new business opportunities (side note: poor data management costs enterprises billions annually).2
Ensure regulatory compliance: Industry and government best practices, legislation, and regulation are not just nice to have; they are a business requirement. Staying compliant can mitigate risk, build trust, and help organizations differentiate themselves in the market.
Mitigate cybercrime: Cybercrime is costing industry trillions of dollars a year (Morgan (2020) predicts the tally could be as much as $10.5 trillion annually by 2025).3 These losses can be reduced with an effective strategy.

The connected individual identity scoring scheme

When a consumer signs up for or buys a product or service, an enterprise may prompt them to provide a mobile number and other personal data as part of the maintenance of their profile and to support the transaction. An enterprise working with GMS can, in real time, ping GMS's network to verify whether the consumer-provided mobile number is real, i.e., operational. Moreover, they can ask GMS to predict, with varying levels of accuracy, whether a mobile number and the PD&I being used in a transaction are associated with a real person. They can also ask if the presumed person conducting the transaction can be trusted or if they might be a fraudster looking to cheat the business. This is a decision based on relevant personal information provided by the individual prior to or during the transaction, as well as data drawn from other sources.

This type of real-time identity and trust verification is made possible by a process Virginie refers to as “scoring.” I refer to it as “the connected individual identity scoring scheme.” Scoring is an intricate and complex choreography of data management and analysis, executed by GMS in milliseconds. This dance consists of pulling together and analyzing a myriad of personal data, deterministic and probabilistic identifiers, and mobile phone signals. The actors in this dance include GMS, the enterprise, the consumer, and GMS’s strategic network of mobile network operators and PD&I aggregator partners.

When asked by an enterprise to produce a score, GMS, in real-time, combines and analyzes enterprise-provided data (e.g., customer name, addresses, phone number, presumed location, etc.), mobile operator signal data (e.g., the actual location of a phone, SIM card, and number forwarding status), and PD&I aggregator supplied data. From this information, it produces a score. This score is used to determine the likelihood a transaction being initiated by “someone” is legitimate and can be trusted, or not. A perfect score of 1 would suggest that, with one hundred percent certainty, the person is who they say they are and can be trusted, and a score of zero would suggest they are most certainly a cybercriminal.

In our interview, Virginie notes, “nothing is perfect, we need to admit that,” thus suggesting that one should never expect a perfect score. The more certain a business wants to be, i.e. the higher score they require to confirm a transaction, the more the business should expect the possibility of increased transactional costs, time, and friction in the user experience. Keeping this in mind, businesses should develop a risk tolerance matrix, based on the context of a transaction, to determine if they want to accept the current transaction or not. For example, for lower risk or lower cost transactions (e.g., an online pizza order) the business might have a lower assurance tolerance and will accept a lower score. For higher-risk or higher-cost transactions (e.g., a bank wire transfer), they might need a higher assurance tolerance and accept only higher scores.
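To make the idea concrete, a risk tolerance matrix can be as simple as a score threshold per transaction type. The thresholds and transaction names below are invented for illustration only:

# Hypothetical thresholds: higher-risk transactions demand a higher identity score
RISK_TOLERANCE = {
    "pizza_order": 0.3,     # low-risk: accept lower assurance
    "wire_transfer": 0.9,   # high-risk: require near-certainty
}

def accept_transaction(transaction_type: str, identity_score: float) -> bool:
    # Accept only if the score clears the threshold for this transaction type
    return identity_score >= RISK_TOLERANCE[transaction_type]

print(accept_transaction("pizza_order", 0.55))    # True
print(accept_transaction("wire_transfer", 0.55))  # False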

Example: Detecting fraud in a banking experience

Virginie used a bank transaction as an example. She explained that a bank could check if a customer’s mobile phone is near the expected location of a transaction. If it was not, this might suggest there is a possibility of fraud occurring, which would negatively impact the score.

Mobile scoring happens every day, but not always by this name–others refer to it as mobile signaling or mobile device intelligence. However, Virginie alluded to a challenge. There is no industry standard for scoring, which may lead to inconsistencies in execution and bias across the industry. She suggested that more industry collaboration is needed to prevent this.

The Trust Chain

During our conversation, Virginie proposed a novel idea which frames what we in the industry could do to optimize the PD&I value and use it responsibly. Virginie said we need to build a chain of trust amongst the PD&I actors, “The Trust Chain”.

I have taken poetic license, based on our conversation, and have illustrated The Trust Chain in the figure below. The figure depicts connected individuals* at the center, resting on a bed of industry players linked to enterprises. A yellow band circles them all to illustrate the flow of personal data and identity throughout the chain.

Defining the connected individual and being phygital: It is so easy in business to get distracted by our labels. It is important to remember the terms we use to refer to the people we serve—prospect, consumer, patient, shopper, investor, user, etc.—are contrived and can distract. These terms are all referring to the same thing: a human, an individual, and more importantly, a contextual state or action at some point along the customer journey, i.e., sometimes I am a shopper considering a product, other times I am a consumer using the product. The shopper and the consumer are not always the same person. Understanding this is important to ensure effective engagement in the connected age. In the context of today’s world and this discussion, the individual is connected. They are connected with phones, tablets, smartwatches, cars, and more. These connections have made us “phygital” beings, merging the digital and physical self. Each and every one of these connections is producing data.

According to Virginie, the key to making the industry more effective and efficient is to tap into more and more of the connected individual data held and managed by mobile network operators. This is because, in her own words, “they know everything.” To tap into this data, Virginie said a number of technical, legal, and ethical complexities must be overcome. In addition, an improved model for data exchange amongst the primary actors of the industry—mobile network operators, enterprises, messaging aggregators (like GMS), and PD&I aggregators—needs to be established. In other words, “The Trust Chain” needs to be refined and built. The presumption behind all of this is that the current models of data exchange can be found wanting.

What we need to do next

In summary, the conclusion I draw from my interview with Virginie is that we should come together to tackle:

The technical, legal, and ethical complexities to enable more effective access to the treasure trove of data held by the mobile network operators
The standardization of a connected individual scoring scheme
The development and integrity of “The Trust Chain”

My takeaway from our discussion is simple: I agree with her ideas. These efforts and more are needed. The use of personal data and identity throughout the industry is accelerating at an exponential rate. To ensure all parties can safely engage, transact, and thrive, it is critical that industry leaders develop a sustainable and responsible marketplace.

I encourage you to watch the full interview here.

1. Becker, “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem.”↩︎
2. “Dark Data – Are You at Risk?”↩︎
3. Morgan, “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.”↩︎

REFERENCES

Becker, Michael. “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem: Liminal’s Cameron D’Ambrosi Speaks to MEF – Blog.” MEF, October 2021. https://mobileecosystemforum.com/2021/10/07/mobiles-irreplaceable-role-in-the-digital-identity-ecosystem-liminals-cameron-dambrosi-speaks-to-mef/.
“Dark Data – Are You at Risk?” Veritas, July 2019. https://www.veritas.com/form/whitepaper/dark-data-risk.
Morgan, Steve. “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.” Cybercrime Magazine, November 2020. https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/.

The post A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS appeared first on Identity Praxis, Inc..

Monday, 25. October 2021

Simon Willison

Quoting Ryan Broderick

But this much is clear: Facebook knew all along. Their own employees were desperately trying to get anyone inside the company to listen as their products radicalized their own friends and family members. And as they were breaking the world, they had an army of spokespeople publicly and privately gaslighting and intimidating reporters and researchers who were trying to ring the alarm bell. They kn

But this much is clear: Facebook knew all along. Their own employees were desperately trying to get anyone inside the company to listen as their products radicalized their own friends and family members. And as they were breaking the world, they had an army of spokespeople publicly and privately gaslighting and intimidating reporters and researchers who were trying to ring the alarm bell. They knew all along and they simply did not give a shit.

Ryan Broderick


Damien Bod

Create and issue verifiable credentials in ASP.NET Core using Azure AD

This article shows how Azure AD verifiable credentials can be issued and used in an ASP.NET Core application. An ASP.NET Core Razor page application is used to implement the credential issuer. To issue credentials, the application must manage the credential subject data as well as require authenticated users who would like to add verifiable credentials […]

This article shows how Azure AD verifiable credentials can be issued and used in an ASP.NET Core application. An ASP.NET Core Razor page application is used to implement the credential issuer. To issue credentials, the application must manage the credential subject data as well as require authenticated users who would like to add verifiable credentials to their digital wallet. The Microsoft Authenticator mobile application is used as the digital wallet.

Code: https://github.com/swiss-ssi-group/AzureADVerifiableCredentialsAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI
Challenges to Self Sovereign Identity

Setup

Two ASP.NET Core applications are implemented to issue and verify the verifiable credentials. The credential issuer must administrate and authenticate its identities to issue verifiable credentials. A verifiable credential issuer should never issue credentials to unauthenticated subjects of the credential. As the verifier normally only authorizes the credential, it is important to know that the credentials were at least issued correctly. As a verifier, we do not know who, or mostly what, sends the verifiable credentials, but at least we know that the credentials are valid if we trust the issuer. It is possible to use private holder binding for a holder of a wallet, which would increase the trust between the verifier and the issued credentials.

The credential issuer in this demo issues credentials for driving licenses using Azure AD verifiable credentials. The ASP.NET Core application uses Microsoft.Identity.Web to authenticate all identities. In a real application, the application would be authenticated as well, requiring 2FA for all users. Azure AD supports this well. The administrators would also require admin rights, which could be implemented using Azure security groups or Azure roles which are added to the application as claims after the OIDC authentication flow.

Any authenticated identity can request credentials (A driving license in this demo) for themselves and no one else. The administrators can create data which is used as the subject, but not issue credentials for others.

Azure AD verifiable credential setup

Azure AD verifiable credentials is setup using the Azure Docs for the Rest API and the Azure verifiable credential ASP.NET Core sample application.

Following the documentation, a display file and a rules file were uploaded for the verifiable credentials created for this issuer. In this demo, two credential subjects are defined to hold the data when issuing or verifying the credentials.

{ "default": { "locale": "en-US", "card": { "title": "National Driving License VC", "issuedBy": "Damienbod", "backgroundColor": "#003333", "textColor": "#ffffff", "logo": { "uri": "https://raw.githubusercontent.com/swiss-ssi-group/TrinsicAspNetCore/main/src/NationalDrivingLicense/wwwroot/ndl_car_01.png", "description": "National Driving License Logo" }, "description": "Use your verified credential to prove to anyone that you can drive." }, "consent": { "title": "Do you want to get your Verified Credential?", "instructions": "Sign in with your account to get your card." }, "claims": { "vc.credentialSubject.name": { "type": "String", "label": "Name" }, "vc.credentialSubject.details": { "type": "String", "label": "Details" } } } }

The rules file defines the attestations for the credentials. Two standard claims are used to hold the data, the given_name and the family_name. These claims are mapped to our name and details subject claims and hold all the data. Adding custom claims to Azure AD or Azure B2C is not so easy, so I decided that for the demo it would be easier to use standard claims, which work without custom configurations. The data sent from the issuer to the holder of the claims can be set in the application. It should be possible to add credential subject properties without requiring standard AD id_token claims, but I was not able to set this up in the current preview version.

{ "attestations": { "idTokens": [ { "id": "https://self-issued.me", "mapping": { "name": { "claim": "$.given_name" }, "details": { "claim": "$.family_name" } }, "configuration": "https://self-issued.me", "client_id": "", "redirect_uri": "" } ] }, "validityInterval": 2592001, "vc": { "type": [ "MyDrivingLicense" ] } }

The rest of the Azure AD credentials are setup exactly like the documentation.

Administration of the Driving licenses

The verifiable credential issuer application is a Razor page application which uses Entity Framework Core to access a Microsoft SQL Azure database. The administrator of the credentials can assign driving licenses to any user. The DrivingLicenseDbContext class is used to define the DbSet for driver licenses.

public class DrivingLicenseDbContext : DbContext
{
    public DbSet<DriverLicense> DriverLicenses { get; set; }

    public DrivingLicenseDbContext(DbContextOptions<DrivingLicenseDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<DriverLicense>().HasKey(m => m.Id);
        base.OnModelCreating(builder);
    }
}

A DriverLicense entity contains the information we use to create verifiable credentials.

public class DriverLicense
{
    [Key]
    public Guid Id { get; set; }
    public string UserName { get; set; } = string.Empty;
    public DateTimeOffset IssuedAt { get; set; }
    public string Name { get; set; } = string.Empty;
    public string FirstName { get; set; } = string.Empty;
    public DateTimeOffset DateOfBirth { get; set; }
    public string Issuedby { get; set; } = string.Empty;
    public bool Valid { get; set; }
    public string DriverLicenseCredentials { get; set; } = string.Empty;
    public string LicenseType { get; set; } = string.Empty;
}

Issuing credentials to authenticated identities

When issuing verifiable credentials using Azure AD Rest API, an IssuanceRequestPayload payload is used to request the credentials which are to be issued to the digital wallet. Verifiable credentials are issued to a digital wallet. The credentials are issued for the holder of the wallet. The payload classes are the same for all API implementations apart from the CredentialsClaims class which contains the subject claims which match the rules file of your definition.

public class IssuanceRequestPayload { [JsonPropertyName("includeQRCode")] public bool IncludeQRCode { get; set; } [JsonPropertyName("callback")] public Callback Callback { get; set; } = new Callback(); [JsonPropertyName("authority")] public string Authority { get; set; } = string.Empty; [JsonPropertyName("registration")] public Registration Registration { get; set; } = new Registration(); [JsonPropertyName("issuance")] public Issuance Issuance { get; set; } = new Issuance(); } public class Callback { [JsonPropertyName("url")] public string Url { get; set; } = string.Empty; [JsonPropertyName("state")] public string State { get; set; } = string.Empty; [JsonPropertyName("headers")] public Headers Headers { get; set; } = new Headers(); } public class Headers { [JsonPropertyName("api-key")] public string ApiKey { get; set; } = string.Empty; } public class Registration { [JsonPropertyName("clientName")] public string ClientName { get; set; } = string.Empty; } public class Issuance { [JsonPropertyName("type")] public string CredentialsType { get; set; } = string.Empty; [JsonPropertyName("manifest")] public string Manifest { get; set; } = string.Empty; [JsonPropertyName("pin")] public Pin Pin { get; set; } = new Pin(); [JsonPropertyName("claims")] public CredentialsClaims Claims { get; set; } = new CredentialsClaims(); } public class Pin { [JsonPropertyName("value")] public string Value { get; set; } = string.Empty; [JsonPropertyName("length")] public int Length { get; set; } = 4; } /// Application specific claims used in the payload of the issue request. /// When using the id_token for the subject claims, the IDP needs to add the values to the id_token! /// The claims can be mapped to anything then. public class CredentialsClaims { /// <summary> /// attribute names need to match a claim from the id_token /// </summary> [JsonPropertyName("given_name")] public string Name { get; set; } = string.Empty; [JsonPropertyName("family_name")] public string Details { get; set; } = string.Empty; }

The GetIssuanceRequestPayloadAsync method sets the data for each identity that requested the credentials. Only a signed-in user can request the credentials, and only for themselves. The context.User.Identity is used and the data is selected from the database for the signed-in user. It is important that credentials are only issued to authenticated users. Users and the application must be authenticated correctly, using 2FA and so on. By default, the credentials are only authorized on the verifier, which is probably not enough for most security flows.

public async Task<IssuanceRequestPayload> GetIssuanceRequestPayloadAsync(HttpRequest request, HttpContext context) { var payload = new IssuanceRequestPayload(); var length = 4; var pinMaxValue = (int)Math.Pow(10, length) - 1; var randomNumber = RandomNumberGenerator.GetInt32(1, pinMaxValue); var newpin = string.Format("{0:D" + length.ToString() + "}", randomNumber); payload.Issuance.Pin.Length = 4; payload.Issuance.Pin.Value = newpin; payload.Issuance.CredentialsType = "MyDrivingLicense"; payload.Issuance.Manifest = _credentialSettings.CredentialManifest; var host = GetRequestHostName(request); payload.Callback.State = Guid.NewGuid().ToString(); payload.Callback.Url = $"{host}:/api/issuer/issuanceCallback"; payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey; payload.Registration.ClientName = "Verifiable Credential NDL Sample"; payload.Authority = _credentialSettings.IssuerAuthority; var driverLicense = await _driverLicenseService.GetDriverLicense(context.User.Identity.Name); payload.Issuance.Claims.Name = $"{driverLicense.FirstName} {driverLicense.Name} {driverLicense.UserName}"; payload.Issuance.Claims.Details = $"Type: {driverLicense.LicenseType} IssuedAt: {driverLicense.IssuedAt:yyyy-MM-dd}"; return payload; }

The IssuanceRequestAsync method gets the payload data and requests credentials from the Azure AD verifiable credentials REST API and returns this value, which can be scanned as a QR code in the Razor page. The request returns fast. Depending on how the flow continues, a web hook in the application will update the status in a cache. This cache is persisted and polled from the UI. This could be improved by using SignalR.

[HttpGet("/api/issuer/issuance-request")] public async Task<ActionResult> IssuanceRequestAsync() { try { var payload = await _issuerService.GetIssuanceRequestPayloadAsync(Request, HttpContext); try { var (Token, Error, ErrorDescription) = await _issuerService.GetAccessToken(); if (string.IsNullOrEmpty(Token)) { _log.LogError($"failed to acquire accesstoken: {Error} : {ErrorDescription}"); return BadRequest(new { error = Error, error_description = ErrorDescription }); } var defaultRequestHeaders = _httpClient.DefaultRequestHeaders; defaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", Token); HttpResponseMessage res = await _httpClient.PostAsJsonAsync( _credentialSettings.ApiEndpoint, payload); var response = await res.Content.ReadFromJsonAsync<IssuanceResponse>(); if(response == null) { return BadRequest(new { error = "400", error_description = "no response from VC API"}); } if (res.StatusCode == HttpStatusCode.Created) { _log.LogTrace("succesfully called Request API"); if (payload.Issuance.Pin.Value != null) { response.Pin = payload.Issuance.Pin.Value; } response.Id = payload.Callback.State; var cacheData = new CacheData { Status = IssuanceConst.NotScanned, Message = "Request ready, please scan with Authenticator", Expiry = response.Expiry.ToString() }; _cache.Set(payload.Callback.State, JsonSerializer.Serialize(cacheData)); return Ok(response); } else { _log.LogError("Unsuccesfully called Request API"); return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + response }); } } catch (Exception ex) { return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + ex.Message }); } } catch (Exception ex) { return BadRequest(new { error = "400", error_description = ex.Message }); } }

The IssuanceResponse is returned to the UI.

public class IssuanceResponse
{
    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;

    [JsonPropertyName("url")]
    public string Url { get; set; } = string.Empty;

    [JsonPropertyName("expiry")]
    public int Expiry { get; set; }

    [JsonPropertyName("pin")]
    public string Pin { get; set; } = string.Empty;

    [JsonPropertyName("id")]
    public string Id { get; set; } = string.Empty;
}

The IssuanceCallback is used as a web hook for the Azure AD verifiable credentials. When developing or deploying, this web hook needs to have a public IP. I use ngrok to test this. Because the issuer authenticates the identities using an Azure App registration, every time the ngrok URL changes, the redirect URL needs to be updated. Each callback request updates the cache. This API also needs to allow anonymous requests if the rest of the application is authenticated using OIDC. The AllowAnonymous attribute is required if you use an authenticated ASP.NET Core application.

[AllowAnonymous]
[HttpPost("/api/issuer/issuanceCallback")]
public async Task<ActionResult> IssuanceCallback()
{
    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var issuanceResponse = JsonSerializer.Deserialize<IssuanceCallbackResponse>(content);

    try
    {
        // There are 2 different callbacks: one when the QR code is scanned (or the deep link has been followed).
        // Scanning the QR code makes Authenticator download the specific request from the server,
        // and the request will be deleted from the server immediately.
        // That's why it is so important to capture this callback and relay it to the UI, so the UI can hide
        // the QR code to prevent the user from scanning it twice (which would result in an error since the request is already deleted).
        if (issuanceResponse.Code == IssuanceConst.RequestRetrieved)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.RequestRetrieved,
                Message = "QR Code is scanned. Waiting for issuance...",
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        if (issuanceResponse.Code == IssuanceConst.IssuanceSuccessful)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.IssuanceSuccessful,
                Message = "Credential successfully issued",
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        if (issuanceResponse.Code == IssuanceConst.IssuanceError)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.IssuanceError,
                Payload = issuanceResponse.Error?.Code,
                // At the moment there isn't a specific error for incorrect entry of a pin code,
                // so assume this error happens when the user entered the incorrect pin code and ask them to try again.
                Message = issuanceResponse.Error?.Message
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}

The IssuanceCallbackResponse class is used to deserialize the callback payload; its status is relayed to the UI through the cache.

public class IssuanceCallbackResponse
{
    [JsonPropertyName("code")]
    public string Code { get; set; } = string.Empty;
    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;
    [JsonPropertyName("state")]
    public string State { get; set; } = string.Empty;
    [JsonPropertyName("error")]
    public CallbackError? Error { get; set; }
}

The IssuanceResponse method is polled from a Javascript client in the Razor page UI. It returns the current status from the cache so that the UI can be updated.

[HttpGet("/api/issuer/issuance-response")]
public ActionResult IssuanceResponse()
{
    try
    {
        // The id is the state value initially created when the issuance request was requested from the request API.
        // The in-memory database uses this as the key to get and store the state of the process so the UI can be updated.
        string state = this.Request.Query["id"];
        if (string.IsNullOrEmpty(state))
        {
            return BadRequest(new { error = "400", error_description = "Missing argument 'id'" });
        }

        CacheData value = null;
        if (_cache.TryGetValue(state, out string buf))
        {
            value = JsonSerializer.Deserialize<CacheData>(buf);

            Debug.WriteLine("check if there was a response yet: " + value);
            return new ContentResult { ContentType = "application/json", Content = JsonSerializer.Serialize(value) };
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}

The DriverLicenseCredentialsModel class is used for the credential issuing for the sign-in user. The HTML part of the Razor page contains the Javascript client code which was implemented using the code from the Microsoft Azure sample.

public class DriverLicenseCredentialsModel : PageModel
{
    private readonly DriverLicenseService _driverLicenseService;

    public string DriverLicenseMessage { get; set; } = "Loading credentials";
    public bool HasDriverLicense { get; set; } = false;
    public DriverLicense DriverLicense { get; set; }

    public DriverLicenseCredentialsModel(DriverLicenseService driverLicenseService)
    {
        _driverLicenseService = driverLicenseService;
    }

    public async Task OnGetAsync()
    {
        DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name);

        if (DriverLicense != null)
        {
            DriverLicenseMessage = "Add your driver license credentials to your wallet";
            HasDriverLicense = true;
        }
        else
        {
            DriverLicenseMessage = "You have no valid driver license";
        }
    }
}

Testing and running the applications

Ngrok is used to provide a public endpoint for the Azure AD verifiable credentials callback. When the application is started, you need to create a driving license. This is done in the administration Razor page. Once a driving license exists, the View driver license Razor page can be used to issue a verifiable credential to the logged in user. A QR Code is displayed which can be scanned to begin the issuance flow.

Using the Microsoft Authenticator, you can scan the QR Code and add the verifiable credentials to your digital wallet. The credentials can now be used with any verifier which supports the Microsoft Authenticator wallet. The verifier ASP.NET Core application can be used to verify and use the issued verifiable credential from the wallet.

Links:

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet

https://www.microsoft.com/de-ch/security/business/identity-access-management/decentralized-identity-blockchain

https://didproject.azurewebsites.net/docs/issuer-setup.html

https://didproject.azurewebsites.net/docs/credential-design.html

https://github.com/Azure-Samples/active-directory-verifiable-credentials

https://identity.foundation/

https://www.w3.org/TR/vc-data-model/

https://daniel-krzyczkowski.github.io/Azure-AD-Verifiable-Credentials-Intro/

https://dotnetthoughts.net/using-node-services-in-aspnet-core/

https://identity.foundation/ion/explorer

https://www.npmjs.com/package/ngrok

https://github.com/microsoft/VerifiableCredentials-Verification-SDK-Typescript



Kyle Den Hartog

My Take on the Misframing of the Authentication Problem

First off, the user experience of authenticating on the web has to be joyful first and foremost. Secondly, I think it's important that we recognize that the security of any authentication system is probabilistic, not deterministic.

Prelude: First off, if you haven’t already read The Quest to Replace the Password, stop reading this and give that a read first. To paraphrase my computer security professor: if you haven’t read this paper before you design an authentication system, you’re probably just reinventing something already created or missing a piece of the puzzle. So go ahead and read that paper first before you continue on. In fact, I re-read it before I started writing this because it does an excellent job of framing the problem.

Over the past few years, I’ve spent a fair amount of time thinking about what the next generation of authentication and authorization systems will look like from a variety of different perspectives. I started out looking at the problem from a user’s perspective, originally just looking to get rid of passwords. Then I looked at it as an attacker while working as an intern penetration tester, which gave me a unique view into the attacker mindset. Unfortunately, I didn’t find myself enjoying the red team side of security too much (combined with having a 2-year non-compete as an intern - that was a joke in hindsight), so by happenstance I found my way into the standards community where the next generation of authentication (AuthN) and authorization (AuthZ) systems are being built. Combine this with the view of a software engineer attempting to build against standards that are being written by some world-class experts with centuries of combined experience. Throughout this experience, I’ve gotten to view the state of the art while also keeping the naivety that comes with being a fairly new engineer relative to some of the experts I get to work with. During this time I’ve become a bit more opinionated on what makes a good authentication system, and this time around I’m going to jot down my current thoughts on what makes something useful.

However, one aspect I think that paper lacks is that it frames the problem of authentication on the web as one where the next goal is to move to something other than passwords; we just haven’t found that better thing yet. While I think this is probably true for some aspects of low-security systems, I think that fundamentally passwords, or more generally “things I know”, are here to stay. The problem is that we haven’t done a good enough job of understanding the requirements the user needs to use the system intuitively, or of making sure that the system sufficiently gets out of the user's way. As the paper does point out, this is likely due to our specialization bias, which doesn’t allow us to take into consideration the holistic viewpoints necessary to approach the problem. What I’m proposing, I think, is a way we can jump this hurdle through the use of hard data. Read on and let me know if you think this can solve the issue or if I’m just full of my own implicit biases.

So what are the important insights that I’ve been thinking about lately? First off, the user experience of authenticating on the web has to be joyful, first and foremost. If the user doesn’t have a better experience than they do with passwords, which is a very low bar to beat when considering all the passwords they have to remember, then it’s simply unacceptable and won’t achieve the uptake needed to overtake passwords.

Secondly, I think it’s important that we recognize that the security of any authentication system is probabilistic, not deterministic. Reframing the problem around each additional security check we do under the classifications of “what I know”, “what I am”, and “what I have” allows us to better understand the problem we’re actually trying to solve with security. To put this idea in perspective, think about the problem this way: what’s the probability that a security system will be broken for any particular user during a particular period? For example, a user who chooses to reuse a password set to “#1password” for every website is a lot less likely to stay secure (my intuition - happy to be proven wrong) than a user who can memorize a different password like “p2D16U$nClNjqLseKTtnjw” for every website. However, there’s a significant tradeoff in the user’s experience, which is why the case where a user reuses an easy-to-remember password is a lot more likely to occur when studying users than the latter case, even though we know it’s less secure.
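As a deliberately crude illustration of that probabilistic framing, here is a toy model in Python; the per-site breach probability is an invented number rather than measured data, so treat it as a sketch of the reasoning, not a real estimate.

# Toy model (assumed numbers): each site independently leaks its password
# database with probability p in a given year. A password reused across
# n sites is exposed if any one of those sites leaks.
def p_exposed(p_per_site, n_sites):
    return 1 - (1 - p_per_site) ** n_sites

print(round(p_exposed(0.02, 1), 2))   # unique password, one site: 0.02
print(round(p_exposed(0.02, 50), 2))  # same password reused on 50 sites: 0.64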

So what gives? This all sounds obvious to a semi-well-thought-out engineer, right? The difference is we simply don’t know the probability of a security system failing in an ideal scenario over a pre-determined period. To put this in perspective: can anyone point me to an academic research paper, or even some user research, that tells me the probability that a user’s password will be discovered by an attacker in the next year? What about the probability that the user shares their password with a trusted person because the system wasn’t deployed with a delegation system? Or how much the probability of compromise rises as the user reuses their password across many websites? Simply put, I think we’ve been asking the wrong question here, and until we have hard data on this we can’t make rigorous choices on the acceptable UX/security tradeoffs that are so hard to decide today.

This isn’t relevant for just passwords either; it extends to many different forms of authentication that fall under the other two authentication classes as well. For example, what’s the probability that a user’s account will be breached when relying on the OpenID Connect protocol rather than a password? Furthermore, what’s the likelihood that the user prefers the OpenID Connect system rather than a password for each website, and is that likelihood worth the increase or decrease in probabilistic security under one or many attack vectors?

The best part of this framing is that it changes how we look at security on the web from the user’s perspective, but that’s not the only part that has to be considered, as is rightly pointed out in the paper. There’s a very important third factor that has to be considered as well: deployability, which I like to reframe as “developer experience” or DevX.

By evaluating the constraints of a system in this way, we reframe the problem into a measurable outcome that becomes far more tractable to compare across the variety of constraints that need to be considered, including the developer deploying or maintaining the system, the user who’s using it, and the resistance to common threats (don’t worry about unrealistic threat models for now - mature them over time) which the user expects the designers of the system to protect them from.

Once we’ve got that data let’s sit down and re-evaluate what are the most important principles of designing the system. I’ll make a few predictions to wrap this up as well.

First prediction: I think once we have this data we’ll see a few things which will be obvious in hindsight. A system that doesn’t prioritize UX over DevX over probabilistic security resilience will be dead in the water, since it goes against the “user should enjoy the authentication experience” principle. Additionally, DevX has to come before security because without a good DevX the system is less likely to be implemented at all, let alone properly.

Second prediction: I’d venture to guess that we’ll learn a few things about the way we frame security on the web, with the clear winner being that we should be designing for MFA systems by default. “What I have” factors need to be the basis of the majority of experiences for the user, with “what I know” factors used as an escalated approach, and “what I am” factors only needed in the highest assurance use cases or when more red flags have been raised (e.g. new IP address, new device, etc), and enforced on-device rather than handled by a remote server.

Final prediction: Recovery is going to be the hardest part of the system to figure out, with multi-device flows being only slightly easier to solve. I wouldn’t be surprised if the solution to recovery was actually not to recover, and instead to make it super easy to “burn and recreate” an account on the web (as John Jordan has advocated in the decentralized identity community), because that’s how hard recovery actually is to get right.

So that’s what I’ve got for now. I’m sure I’m missing something here and I’m sure I’m wrong in a few other cases. Share your comments on this down below or feel free to send me an email and tell me I’m wrong. I do appreciate the thoughtfulness that others put into pointing these things out so let me know what you think and let’s discuss it further. Thanks for reading!

Saturday, 23. October 2021

Simon Willison

Tonic

Tonic Really interesting library for building Web Components: it's tiny (just 350 lines of code), works directly in browsers without any compile or build step and makes very creative use of modern JavaScript features such as async generators. Via Alex Russell

Tonic

Really interesting library for building Web Components: it's tiny (just 350 lines of code), works directly in browsers without any compile or build step and makes very creative use of modern JavaScript features such as async generators.

Via Alex Russell

Thursday, 21. October 2021

Simon Willison

New HTTP standards for caching on the modern web

New HTTP standards for caching on the modern web Cache-Status is a new HTTP header (RFC from August 2021) designed to provide better debugging information about which caches were involved in serving a request - "Cache-Status: Nginx; hit, Cloudflare; fwd=stale; fwd-status=304; collapsed; ttl=300" for example indicates that Nginx served a cache hit, then Cloudflare had a stale cached version so it

New HTTP standards for caching on the modern web

Cache-Status is a new HTTP header (RFC from August 2021) designed to provide better debugging information about which caches were involved in serving a request - "Cache-Status: Nginx; hit, Cloudflare; fwd=stale; fwd-status=304; collapsed; ttl=300" for example indicates that Nginx served a cache hit, then Cloudflare had a stale cached version so it revalidated from Nginx, got a 304 not modified, collapsed multiple requests (dogpile prevention) and plans to serve the new cached value for the next five minutes. Also described is Targeted Cache-Control, which allows different CDNs to respond to different headers and is already supported by Cloudflare and Akamai (Cloudflare-CDN-Cache-Control: and Akamai-Cache-Control:).
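To make the shape of the new header concrete, here is a minimal Python sketch; it is not a full RFC 8941 structured-field parser and assumes well-formed input, but it shows how the per-cache entries and their parameters nest.

# Naive parse: caches are separated by "," (outermost cache last),
# parameters within a cache entry by ";". Bare keys like "hit" are flags.
def parse_cache_status(value):
    entries = []
    for member in value.split(","):
        cache, *params = [p.strip() for p in member.split(";")]
        parsed = {}
        for p in params:
            key, _, val = p.partition("=")
            parsed[key] = val if val else True
        entries.append((cache, parsed))
    return entries

header = "Nginx; hit, Cloudflare; fwd=stale; fwd-status=304; collapsed; ttl=300"
print(parse_cache_status(header))
# [('Nginx', {'hit': True}),
#  ('Cloudflare', {'fwd': 'stale', 'fwd-status': '304', 'collapsed': True, 'ttl': '300'})]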

Via Hacker News


Mike Jones: self-issued

OpenID and FIDO Presentation at October 2021 FIDO Plenary

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything! The presentation […]

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything!

The presentation was:

OpenID and FIDO (PowerPoint) (PDF)

MyDigitalFootprint

When does democracy break?

We spend most of our waking hours being held to account between the guardrails of risk-informed and responsible decision making.  Undoubtedly, we often climb over the guardrails and make ill-informed, irresponsible and irrational decisions, but that is human agency. It is also true that we would not innovate, create or discover if we could not explore the other side of our safety guardrails.
We spend most of our waking hours being held to account between the guardrails of risk-informed and responsible decision making.  Undoubtedly, we often climb over the guardrails and make ill-informed, irresponsible and irrational decisions, but that is human agency. It is also true that we would not innovate, create or discover if we could not explore the other side of our safety guardrails. 

Today’s perception of responsible decision making is different from the one our grandparents held. Our grandchildren will look back at our “risk-informed” decisions with the advantage of hindsight and question why our risk management frameworks were so short-term-focused. However, we need to recognise that our guardrails are established by current political, economic and societal framing.

“What are we optimising for?”

I often ask the question, “what are we optimising for?” The reason I ask this question is to draw out different viewpoints in a leadership team. The viewpoints that drive individual optimisation are framed by experience, the ability to understand time-frames, and incentives.

Peak Paradox is a non-confrontational framework to explore our different perceptions of what we are optimising for. We need different and diverse views to ensure that our guardrails don’t become so narrow they look like rail tracks, and we repeat the same mistakes because that is what the process determines. Equally, we must agree boundaries of divergence together, which means we can optimise as a team for something that we believe in and that has a shared purpose for us and our stakeholders. Finding this dynamic area of alignment is made easier with the Peak Paradox framework.

However, our model of risk-informed responsible decision making is based on the idea that the majority decides, essentially democracy.  If the “majority” is a supermajority, we need 76%; for a simple majority, we need 51%, and for minority protections less than 10%.  What we get depends on how someone has previously set up the checks, balances, and control system. And then there is the idea of monarchy or the rich and powerful making the decisions for everyone else, the 0.001%.  

However, our guardrails for democratic decision making break down when choices, decisions and judgements have to be made that people do not like. How do we enable better decisions when hard decisions mean you will lose the support of the majority?

We do not all agree about vaccines (even before covid), eating meat, climate change, our government or authority. Vaccines in the current global pandemic period are one such tension of divergence. We see this every day in the news feeds; there is an equal and opposite view for every side. Humanity at large lives at Peak Paradox, but we don’t equally value everyone’s views. Why do we struggle with the idea that individual liberty of choice can be taken away in the interests of everyone?

Climate change is another. Should the government act in the longer term and protect the future, or act in the short term, preserving liberty and ensuring re-election? Protestors with a cause may fight authority and are portrayed as mavericks and disruptors, but history remembers many as pioneers, martyrs and great leaders. Those who protested in the 1960s and 1970s against nuclear energy may look back today and think that they should have been fighting fossil fuels. Such pioneers are optimising for something outside of the normal guardrails. Whilst they appear to be living outside of the accepted guardrails, they can see the guardrails that should be adopted.

Our guardrails work well when we can agree on the consequences of risk-informed and responsible decision making; however, in an age when information is untrusted and who is responsible is questioned, we find that we all have different guardrails. Living at Peak Paradox means we have to accept that we will never agree on what a responsible decision is, and we are likely to see that our iteration of democracy, which maintains power and control, is going to end in a revolution - if only we could agree on what to optimise for.



@_Nat Zone

[Seminar on November 12] The latest trends in authentication, API authorization and consent processes in financial and payment services

– From Open Banking to GAIN – Coming up on November… The post [Seminar on November 12] The latest trends in authentication, API authorization and consent processes in financial and payment services first appeared on @_Nat Zone.

– From Open Banking to GAIN –

On the coming November 12, I will be giving a (paid) seminar entitled "The latest trends and future vision of authentication, API authorization and consent processes in financial and payment services – from Open Banking to GAIN –" (planned - that is, if enough attendees sign up). It is positioned as a follow-up to the seminar I gave in 2019. It may be hard to tell from the agenda below, but this time I particularly want to put the spotlight on "OIDC for Identity Assurance", which is important for realizing decentralized identity, and on GAIN (Global Assured Identity Network), the global framework announced in Germany in September under which mainly financial institutions take on the role of attribute providers (Identity Information Providers).

This time the seminar will also be held in a hybrid format, in person as well as remote. I will be on site, so if you attend in person I look forward to seeing you there. You can register via this link:

https://seminar-info.jp/entry/seminars/view/1/5474

Agenda

1. The "authentication" problem for financial services and digital identity
(1) Authentication challenges and the problem of fraudulent account use and fraudulent transfers - with examples
(2) What is digital identity, the core strategy of GAFA?
(3) Identity management frameworks
(4) Q&A

2. Overview of OpenID Connect, the standard for digital identity
(1) The evolution of login models
(2) Overview of OpenID Connect
(3) The three attribute-sharing models of OpenID Connect
(4) Q&A

3. Open Banking and OpenID extension specifications
(1) What is Open Banking? What is FAPI?
(2) The global spread of FAPI
(3) CIBA, Grant Management, OIDC4IDA
(4) Q&A

4. Towards realizing digital identity
(1) Benefits for financial institutions
(2) Where to start
(3) Points to note for development and implementation
(4) Q&A

5. The future of digital identity
(1) Respect for privacy and digital identity
(2) Selective attribute disclosure and OpenID Connect as a decentralized identity infrastructure
(3) The GAIN trust framework
(4) Q&A

6. Q&A
(1) On the seminar as a whole
(2) Q&A session about the book "Digital Identity"

The post [Seminar on November 12] The latest trends in authentication, API authorization and consent processes in financial and payment services first appeared on @_Nat Zone.

Wednesday, 20. October 2021

Werdmüller on Medium

Reconfiguring

A short story Continue reading on Medium »

MyDigitalFootprint

Climate impact #COP26

Are the consequences of a ½-baked decision that we created (the mess we are in) squared? This article examines how the #climate outcomes we get may be related to the evidence requirements we set. The audience for this viewpoint is those who are thinking about the long term consequences of our current decisions and the evidence we use to support those decisions. We are hoping to bring a sense of
Are the consequences of a ½-baked decision that we created (the mess we are in) squared?

This article examines how the #climate outcomes we get may be related to the evidence requirements we set. The audience for this viewpoint is those who are thinking about the long term consequences of our current decisions and the evidence we use to support those decisions. We are hoping to bring a sense of clarity to our community on why we feel frustrated and lost. You should read this because it will make you think, and it will raise questions we need to debate over coffee as we search to become better versions of ourselves.

@yaelrozencwajg @yangbo @tonyfish

The running order is: Which camp are you in on the positioning of the crisis: known and accepted, still questioning, or denial. What are the early approaches to solutions? What are policymakers doing, and what is their perspective? The action is to accept the invitation to debate at the end.


Part 1. Sustainability set up

The world appears more opinionated and divided about everything. Climate change: real or not. Vaccination for COVID19: conspiracy and control, or in the public best interest. Space travel for billionaires, or feeding those in need. Universal basic income policy, vs ignoring those aspects of society we find uncomfortable. Equality creates a fairer society, or leave us alone. So many votes are for self-interest: “it is fairer to me.” Transparency will hold those in power to account, or it will only make it worse. Open networks might create new business models on the web, but will they be sustainable? Sustainability is a false claim, or it is our only option. Like books and publications before it, complexity is now the tool that ensures power remains with the few.

We need to unpack the conflictual and tension-filled gaps in our beliefs, opinions and judgment, because we depend on evidence to change our views. How evidence is presented, and the systematic squeezing out of curiosity, frame us and our current views.

Evidence, in this context, is actually a problem, as we have a very divided idea of what evidence is. For some, evidence is a social media post from an influencer with 10 million followers (how can everyone else be wrong). For others, a headline on the front of a tabloid newspaper is truth (it is printed). For others, a statistical peer-reviewed publication in a leading journal that is cited 100 times is evidence. In terms of evidence for decision making, there is a gap between the evidence requirements for research and the evidence requirements for business decisions. To be clear, it is not that either is better; it is how we frame evidence that matters. The danger is being framed to believe something because there is a mismatch in the evidence requirements for a decision. A single 100% confident influencer claim made without statistical proof, say about “fertility”, versus a statistical trial with probabilities, highlights both that a claim is only a claim and that many will not understand what evidence is.

Why is this important? Because the evidence we see in journals, TV, media, books and publications has different criteria and credentials to the evidence that informs business decisions. Where is the environmental action being decided? In the boardrooms! Is this gap in evidence leading to a sustainability gap?


Part 2. Analysis from Kahan 

Here is the rub, it turns out that how scientific evidence is presented matters as its very presentation creates division. 

Dan Kahan, a Yale behavioural economist, has spent the last decade studying whether the use of reason aggravates or reduces “partisan” beliefs. His research papers are here. His research shows that aggravation and alienation easily win, irrespective of whether we are more liberal or conservative. The more we use our faculties for scientific thought, the more likely we are to take a strong position that aligns with our (original) political group (or thought).

A way through this could be to copy “solution journalism”, which reports on ways people and governments meaningfully respond to difficult problems, and not on what the data says the problem is. Rather than use our best insights, analysis and thinking to reach a version of the “truth”, we use data to find ways to agree with others’ opinions in our communities. We need to help everyone become curious. Tony Fish has created the Peak Paradox framework as an approach to remaining curious by identifying where we are aligned and where there is a delta in views, without conflict.

When we use data and science in our arguments and explain the problem, the individuals will selectively credit and discredit information in patterns that reflect their commitment to certain values. They (we) (I) assimilate what they (we)(I) want.

Kahan, in 2014, asked over 1,500 respondents whether they agreed or disagreed with the following statement: “There is solid evidence of recent global warming due mostly to human activity such as burning fossil fuels.” They collected information on individuals' political beliefs and rated their science intelligence. The analysis found that those with the least science intelligence actually have less partisan positions than those with the most. A Conservative with strong science intelligence will use their skills to find evidence against human-caused global warming, while a Liberal will find evidence for it (cognitive bias).

In the chart above, the y-axis represents the probability of a person agreeing that human activity caused climate change. The x-axis represents the percentile a person scored on the scientific knowledge test. The width of the bars shows the confidence interval for that probability.


Part 3. Are our policies being formed by the evidence we like or evidence we have?

Governments, activists, and the media have gotten better at holding corporations accountable for the societal repercussions of their actions. A plethora of groups score businesses based on their ESG performance, and despite often dubious methodology, these rankings are gathering a lot of attention. As a result, ESG has emerged as an unavoidable concern for corporate executives worldwide. ESG should sit with boards in the “purpose” or “are we doing the right thing” camp, but instead it has ended up in the compliance camp: do the minimum, tick the box.

For decades businesses have addressed sustainability as an end-of-pipe problem or afterthought. Rather than fundamentally altering their models to recognise that sustainability and wellbeing are critical to long term success, boards have typically delegated social issues to corporate social responsibility, compliance policies, or charitable foundations and associations, and thus they publish their findings (which are not evidence) in annual reports. The issue becomes that neither investors nor stakeholders read these sustainability reports. Actually, they shouldn’t either.

Although investors' thinking on sustainability has evolved substantially over the past few decades, sustainability and efficiency leaders have used strategies to pressure corporations to advance a wide range of social concerns, such as the SDGs, across industries and supply chains, regardless of the financial considerations. As ESG assessments, sustainability reports and guidelines have become more rigorous, this accountability raises essential biases. This “pressure” has resulted in many types of actions and raised many concerns, including around operational efficiencies that are supposed to reduce the use of energy and natural resources at the expense of profitability.

Whilst it is still unclear, based on evidence, whether most investors utilise ESG factors in their investment selection process, it is clear that we do side with what we want to hear and not with the science. Dan Kahan, in Part 2 above, was right.


Part 4. Null Hypothesis H(0): A lack of headspace for most people to think about the complexity of these issues due to meeting performance targets means leadership has to make time. And if it does not, we become the problem.

At what point do people care about something bigger than themselves? This means you as a person have the headspace to move from survival towards thriving. (PeakParadox.com)

If ⅓ of the world don’t know where the next meal today comes from, they will not have the headspace to worry about sustainability.

If the next additional ⅓ of the world don’t know where the food will come from for tomorrow, they will not have the headspace to worry about sustainability.

If the next ⅙ of the world will run out of food and money in 4 weeks - worrying about sustainability is not their most significant concern.

Less than ⅙ of the world can survive and think beyond four weeks - is that enough to make a difference, and are these people in roles that count?  

Between 0.01% and 0.001% of the world (8m and 80m people) should be able to consider global complexity on the basis that they will never have money or food issues (over $1m in assets), but are they acting together, and is their voice enough to make a difference?

Is leadership's first priority to ensure that the first ⅚ have enough to survive, and to worry for them - but are they able to manage this conflict? Which group has the headspace to cope with recycling? What is amazing and worthy of note is that the majority of those who care about the environment, sustainability and recycling have created headspace irrespective of their situation. The argument above was designed to frame your thinking; the reality is we don't create headspace because we are too busy.

Part 5.  The imposter syndrome: followers are not followers

Politics (leadership), Business (leadership), Quango/ NGO (leadership), individuals (leadership), influencers (leadership) - all have different agendas, and demands for different outcomes as incentives drive in different directions.   We lack sustainable leadership that drives in one direction. 

For a leader and opinion former, the most troubling finding should be that individuals with more “scientific intelligence” are the quickest to align themselves on subjects they don’t know anything about. In one experiment, Kahan analysed how people’s opinions on an unfamiliar subject are affected when they are given some basic scientific information, along with details about what people in their self-identified political group tend to believe about that subject. It turned out that those with the strongest scientific reasoning skills were most likely to use the information to develop partisan opinions.

Critically, Kahan’s research shows that people who score well on a measure called “scientific curiosity” actually show less partisanship, and it is this aspect we need to use.

Do we need to move away from “truth”, “facts”, “data”, and “right decisions” if we want a board and senior team who can become aligned? We need to present ideas, concepts and how others are finding solutions, and make our teams more curious. Being curious appears to be the best way to bring us together, however counterintuitive that is. But to do that, we have to give up on filling time, productivity, efficiency, effectiveness and keeping people busy, and give more people time to escape survival and work together for the greater good.

There is a systematic squeezing out of curiosity in our current system. Are we to blame, through schooling, education and search engines? Have we lost the ability to be curious when the facts and truth presented to us are ones that just align with our natural bias rather than ones that challenge us? Do we spend sufficient time with others' views to be able to improve our own? Have individualism and personalisation created and reinforced the opinion that our own views are correct? Does the advertising model depend on this divide?

Part 6. Conclusion, rationality and irrationality 

There is a clear message to those in leadership: stop using evidence to create division, push people away or shore up your own camp. How do we take (all) evidence in and use it to ask questions which mean we come together for a common purpose?

Politics becomes irrational as we focus more on the individual and less on society and community. Politicians and policies need to be voted in, which means they have to mislead and misrepresent to populations who are acting in their own interests: therefore, we find evidence for decisions that favour short term gain based on individual preferences and not the long term community - this is obvious but has to be said. The same is happening in many corporations.

Anger is often seen as a rational emotion, but that is because we focus on the evidence we want to justify the action. When you feel under-represented, threatened, or in harm's way, the evidence you want will fit like a glove. Understanding how the evidence frames us is what brings value to the process.


Part 7. Call to action: The Road to Sustainability webinar series

We believe we have to communicate better, talk openly, listen to more, debate to appreciate, be curious and find a route to collaborate. The best way is to do something small.  Sign up for the sessions below and bring your evidence, but be prepared to take away different evidence so we can make better decisions together. 

The Road to Sustainability is a content and tool platform and initiative launched in October 2020 that started as a weekly email newsletter providing approaches and strategies to plan sustainability and innovation. We are launching the third edition of our webinar series, following up on the two successful previous editions of “From chaos to recovery: gateway to sustainability”.

This new series will take place every Monday through 5 meetings from October 4th to November 8th, with a possible extension of the plan.

The following schedule is based on our approach "Roadmap and product management - the new framework for sustainability conversations". The sessions have an informative purpose and constitute sets of criteria to help organisations in their operations towards sustainability:

Please register here: https://event.theroadtosustainability.com.



Identity Praxis, Inc.

Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker

I thoroughly enjoyed my interview with Nishant Garg with WhatMarketWants. Here’s the abstract for the interview: “What’s your company’s approach to Mobile Marketing in a Privacy-First World? Mobile Marketing Expert Michael Becker talks about what’s mobile marketing, the potential of mobile marketing, mobile responsive content, mobile-optimized website, marketing in a privacy-centric world, the fut

I thoroughly enjoyed my interview with Nishant Garg with WhatMarketWants.

Here’s the abstract for the interview:

“What’s your company’s approach to Mobile Marketing in a Privacy-First World? Mobile Marketing Expert Michael Becker talks about what’s mobile marketing, the potential of mobile marketing, mobile responsive content, mobile-optimized website, marketing in a privacy-centric world, the future of marketing, and much more” (Garg, 2021).

WhatMarketWants Interview with Michael Becker

REFERENCES Garg, N. (2021). Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker. Retrieved October 22, 2021, from https://www.youtube.com/watch?v=IjIIPsI5Uoc

The post Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker appeared first on Identity Praxis, Inc..

Tuesday, 19. October 2021

Simon Willison

Why you shouldn't invoke setup.py directly

Why you shouldn't invoke setup.py directly Paul Ganssle explains why you shouldn't use "python setup.py command" any more. I've mostly switched to pip and pytest and twine but I was still using "python setup.py sdist" - apparently the new replacement recipe for that is "python -m build". Via @pganssle

Why you shouldn't invoke setup.py directly

Paul Ganssle explains why you shouldn't use "python setup.py command" any more. I've mostly switched to pip and pytest and twine but I was still using "python setup.py sdist" - apparently the new replacement recipe for that is "python -m build".

Via @pganssle


Datasette 0.59: The annotated release notes

Datasette 0.59 is out, with a miscellaneous grab-bag of improvements. Here are the annotated release notes. Column metadata Columns can now have associated metadata descriptions in metadata.json, see Column descriptions. (#942) I've been wanting this for ages. Tables consist of columns, and column names very rarely give you enough information to truly understand the associated data. You

Datasette 0.59 is out, with a miscellaneous grab-bag of improvements. Here are the annotated release notes.

Column metadata

Columns can now have associated metadata descriptions in metadata.json, see Column descriptions. (#942)

I've been wanting this for ages. Tables consist of columns, and column names very rarely give you enough information to truly understand the associated data. You can now drop extra column definitions into your metadata.json like so:

{
    "databases": {
        "la-times": {
            "tables": {
                "cdph-county-cases-deaths": {
                    "columns": {
                        "county": "The name of the county where the agency is based.",
                        "fips": "The FIPS code given to the county by the federal government. Can be used to merge with other data sources.",
                        "date": "The date when the data were retrieved in ISO 8601 format.",
                        "confirmed_cases": "The cumulative number of coronavirus cases that were confirmed as of that time. This is sometimes called the episode date by other sources.",
                        "reported_cases": "The cumulative number of coronavirus cases that were reported as of that time. This reflects when cases were first announced by the state.",
                        "probable_cases": "The cumulative number of probable coronavirus cases that were confirmed as of that time. This reflects the results of antigen tests, a rapid testing technique different from the standard test.",
                        "reported_and_probable_cases": "The cumulative number of reported and probable coronavirus cases as of that time.",
                        "reported_deaths": "The cumulative number of deaths reported at that time."
                    }
                }
            }
        }
    }
}

The LA Times publish a meticulous data dictionary for the 46 CSV and GeoJSON files that they maintain tracking the pandemic in their california-coronavirus-data GitHub repository.

To demonstrate the new column metadata feature, I wrote a script that converts their Markdown data dictionary into Datasette's metadata format and publishes it along with imported CSV data from their repository.

You can explore the result at covid-19.datasettes.com/la-times - here's their cdcr-prison-totals table tracking Covid cases in prisons operated by the California Department of Corrections and Rehabilitation.

register_commands() plugin hook

New register_commands() plugin hook allows plugins to register additional Datasette CLI commands, e.g. datasette mycommand file.db. (#1449)

I originally built this because I thought I would need it for Datasette Desktop, then found it wasn't necessary for that project after all.

I held off on implementing this for quite a while on the basis that plugins which needed their own CLI interface could implement one entirely separately as a Click-based CLI app - dogsheep-beta implements this pattern, offering both a dogsheep-beta index command and registering itself as a Datasette plugin.

The problem with plugins implementing their own separate CLI commands is that users then need to understand where they have been installed. Datasette is designed to work well with virtual environments, but if a plugin is installed into a virtual environment I can't guarantee that any CLI tools it includes will execute when the user types their name.

Now that plugins can register their own datasette subcommands, this problem has a solution: provided users can run datasette they'll also be able to run CLI commands provided by plugins, without needing to understand how to modify their path.

I expect this to be particularly useful for Datasette installed through Homebrew, which invisibly sets up its own virtual environment into which plugins can be installed using datasette install plugin-name.
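For reference, a plugin using the hook looks roughly like this; the verify command and its behaviour are made up for illustration, but the register_commands(cli) shape matches the documented hook.

import click
from datasette import hookimpl

@hookimpl
def register_commands(cli):
    # "cli" is Datasette's Click command group, so anything added here
    # becomes available as "datasette <command>".
    @cli.command()
    @click.argument("files", nargs=-1)
    def verify(files):
        "Hypothetical example: check that each file can be opened."
        for path in files:
            click.echo(f"would verify {path}")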

Count unique facet values with ?_facet_size=max

Adding ?_facet_size=max to a table page now shows the number of unique values in each facet. (#1423)

When I'm using facets to explore data, I'm often interested in how many values are available in the facet - particularly if I'm faceting by a column such as country or state.

I added a ... link to show the maximum number of facets in Datasette 0.57. Clicking that link adds ?_facet_size=max to the URL, which now also adds a numeric count of the number of distinct facet values.

Here's an example using that Californian prison data.

Upgrading httpx

Upgraded dependency httpx 0.20 - the undocumented allow_redirects= parameter to datasette.client is now follow_redirects=, and defaults to False where it previously defaulted to True. (#1488)

A while ago Tom Christie requested feedback on how httpx should handle redirects.

The requests library that inspired it automatically follows 301 and 302 redirects unless you explicitly tell it not to.

I've been caught out by this many times in the past - it's not default behaviour that I want from my HTTP client library - so I chimed in as favouring a change in behaviour. I also suggested that follow_redirects=True would be a better term for it than allow_redirects=True.

Tom made that change for httpx 1.0, and then back-ported it for version 0.20 - after all, pre-1.0 you're allowed to make breaking changes like this.

... and Datasette broke, hard! Datasette embeds httpx pretty deeply inside itself, and the breaking change caused all kinds of errors and test failures.

This was the final push I needed to get 0.59 released.

(I just found out this also broke the Homebrew tests for Datasette, as those relied on datasette --get '/:memory:.json?sql=select+3*5' automatically following the redirect to /_memory.json?sql=select+3*5 instead.)
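For reference, the calling convention after the upgrade looks like this (assuming httpx 0.20 or later):

import httpx

# Redirects are no longer followed by default, and the keyword is
# follow_redirects rather than allow_redirects.
response = httpx.get("http://example.com/", follow_redirects=True)
print(response.status_code, response.url)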

Code that figures out which named parameters a SQL query takes in order to display form fields for them is no longer confused by strings that contain colon characters. (#1421)

One of my favourite obscure features of Datasette is the way it can take a SQL query such as the following:

select * from [cdcr-prison-totals] where "zipcode" = :zip

And extract out that :zip parameter and turn it into an HTML form field, as seen here:

This used to use a regular expression, which meant that it could be confused by additional colons - the following SQL query for example:

select * from content where created_time = '07:00' and author = :author

I thought that solving this properly would require embedding a full SQLite-compatible SQL parser in Datasette.

Then I realized SQLite's explain output included exactly the data I needed, for example:

addr opcode p1 p2 p3 p4 p5 comment
0 Init 0 10 0 0
1 OpenRead 0 42 0 2 0
2 Rewind 0 9 0 0
3 Column 0 1 1 0
4 Ne 2 8 1 BINARY-8 82
5 Rowid 0 3 0 0
6 Column 0 1 4 0
7 ResultRow 3 2 0 0
8 Next 0 3 0 1
9 Halt 0 0 0 0
10 Transaction 0 0 35 0 1
11 Variable 1 2 0 :name 0
12 Goto 0 1 0 0

The trick then is to run an explain, then find any rows with an opcode of Variable and read the p4 register to find out the name of those variables.

This is risky, since SQLite makes no promises about the stability of the explain output - but it's such a useful trick that I'm now contemplating building an automated test suite around it such that if a future SQLite release breaks things I will at least know about it promptly.
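Here is a stripped-down sketch of the trick using just the sqlite3 module. It is not Datasette's actual implementation (which also falls back to a regular expression if the explain fails), but it shows the Variable/p4 idea; candidate parameters are bound to None so the statement can be prepared.

import re
import sqlite3

def named_parameters(conn, sql):
    # Bind every ":word" candidate to None so "explain" can prepare the
    # statement, then read the real parameter names from the Variable opcodes.
    candidates = {name: None for name in re.findall(r":(\w+)", sql)}
    conn.row_factory = sqlite3.Row
    rows = conn.execute("explain " + sql, candidates).fetchall()
    return [row["p4"].lstrip(":") for row in rows if row["opcode"] == "Variable"]

conn = sqlite3.connect(":memory:")
conn.execute("create table content (author text, created_time text)")
sql = "select * from content where created_time = '07:00' and author = :author"
print(named_parameters(conn, sql))  # ['author'] - the '07:00' no longer confuses it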

Everything else

The --cors option now causes Datasette to return the Access-Control-Allow-Headers: Authorization header, in addition to Access-Control-Allow-Origin: *. (#1467)

This was a feature request from users of the datasette-auth-tokens plugin.

Renamed --help-config option to --help-settings. (#1431)

Part of my continuing goal to free up the term "config" to mean plugin configuration (which is currently mixed up with Datasette's metadata concept) rather than meaning the options that can be passed to the Datasette CLI tool (now called settings).

datasette.databases property is now a documented API. (#1443)

I've got into the habit of documenting any Datasette internals that I use in a plugin. In this case I needed it for datasette-block-robots.

The base.html template now wraps everything other than the <footer> in a <div class="not-footer"> element, to help with advanced CSS customization. (#1446)

I made this change so that Datasette Desktop could more easily implement a sticky footer that stuck to the bottom of the application window no matter how short or tall it was.

The render_cell() plugin hook can now return an awaitable function. This means the hook can execute SQL queries. (#1425)

Another example of the await me maybe pattern in action.
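A rough sketch of what that enables; the table and column names here are hypothetical, and pluggy lets the hook accept just the arguments it needs.

from datasette import hookimpl

@hookimpl
def render_cell(value, column, database, datasette):
    if column != "author_id":  # hypothetical column
        return None

    # Returning an inner async function tells Datasette to await it,
    # which is what makes running SQL during cell rendering possible.
    async def render():
        db = datasette.get_database(database)
        result = await db.execute(
            "select name from authors where id = ?", [value]
        )
        row = result.first()
        return row["name"] if row else value

    return render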

register_routes(datasette) plugin hook now accepts an optional datasette argument. (#1404)

This means plugins can conditionally register routes based on plugin configuration.

New hide_sql canned query option for defaulting to hiding the SQL query used by a canned query, see Additional canned query options. (#1422)

The goal of canned queries is to provide an interface for people who don't know SQL to execute queries written by other people, potentially providing their own inputs using the :parameter mechanism described above.

If it's being used for that, there's not much to be gained from making them scroll past the SQL query first!

Adding "hide_sql": true to the canned query configuration now defaults to hiding the query for them - though they can still click "Show SQL" to see it.

New --cpu option for datasette publish cloudrun. (#1420)

Google Cloud Run recently added the ability to specify 1, 2 or 4 vCPUs when running a deploy.

If Rich is installed in the same virtual environment as Datasette, it will be used to provide enhanced display of error tracebacks on the console. (#1416)

Rich is Will McGugan's phenomenal Python library for building beautiful console interfaces. One of the many tricks up its sleeve is improved display of exceptions, including more detailed tracebacks and local variables. Datasette now takes advantage of this if Rich is installed in the same virtual environment.

datasette.utils parse_metadata(content) function, used by the new datasette-remote-metadata plugin, is now a documented API. (#1405)

Another API that became documented after I used it in a plugin.
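Usage is a one-liner; a small sketch with made-up content:

from datasette.utils import parse_metadata

# Accepts a string of JSON (or YAML) in Datasette's metadata format
# and returns it as a Python dictionary.
metadata = parse_metadata('{"title": "My instance", "databases": {}}')
print(metadata["title"])  # My instance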

The datasette-remote-metadata plugin is pretty neat.

I sometimes find myself working on projects where I'm deploying a large database file - 1 or 2 GB - to Google Cloud Run. Each deploy can take several minutes.

If I want to tweak a canned query or a few lines of text in the metadata for that deployment, it can be frustrating to have to push an entirely new deploy just to make those changes.

The remote metadata plugin allows me to host the metadata at a separate URL, which I can then update without needing a full deploy of the underlying database files.

The last two were simple bug fixes:

Fixed bug where ?_next=x&_sort=rowid could throw an error. (#1470)
Column cog menu no longer shows the option to facet by a column that is already selected by the default facets in metadata. (#1469)
Releases this week

datasette-publish-vercel: 0.11 - (17 releases total) - 2021-10-18
Datasette plugin for publishing data using Vercel

datasette-statistics: 0.2 - (3 releases total) - 2021-10-15
SQL statistics functions for Datasette

datasette-auth-tokens: 0.3 - (7 releases total) - 2021-10-15
Datasette plugin for authenticating access using API tokens

datasette: 0.59 - (98 releases total) - 2021-10-14
An open source multi-tool for exploring and publishing data

TIL this week

Using Fabric with an SSH public key
Using the sqlite3 Python module in Pyodide - Python WebAssembly

Werdmüller on Medium

The corpus at the end of the world

A short story about machine learning, the climate crisis, and change Continue reading on Medium »

A short story about machine learning, the climate crisis, and change

Continue reading on Medium »

Monday, 18. October 2021

Simon Willison

Where does all the effort go? Looking at Python core developer activity

Where does all the effort go? Looking at Python core developer activity Łukasz Langa used Datasette to explore 28,780 pull requests made to the CPython GitHub repository, using some custom Python scripts (and sqlite-utils) to load in the data. Via @llanga

Where does all the effort go? Looking at Python core developer activity

Łukasz Langa used Datasette to explore 28,780 pull requests made to the CPython GitHub repository, using some custom Python scripts (and sqlite-utils) to load in the data.

Via @llanga


Tests aren’t enough: Case study after adding type hints to urllib3

Tests aren’t enough: Case study after adding type hints to urllib3 Very thorough write-up by Seth Michael Larson describing what it took for the urllib3 Python library to fully embrace mypy and optional typing and what they learned along the way.

Tests aren’t enough: Case study after adding type hints to urllib3

Very thorough write-up by Seth Michael Larson describing what it took for the urllib3 Python library to fully embrace mypy and optional typing and what they learned along the way.


Webistemology - John Wunderlich

The Vaccine Certificate Experience

Version 1 of the Ontario COVID Vaccine Certificate is a cumbersome experience that needs some work
"It was hard to write, it should be hard to use"

The quote above was something a programmer friend of mine used to say in the '90s and, despite all the advances in user experience or UX design, it appears to remain effectively true. Let's look at the experience over this past weekend with the newly rolled out enhanced vaccine certificate in Ontario.

Let me preface this by saying that I appreciate that this may be version 1 of a certificate with a QR code and that my comments are intended for an improved version 1.x or 2 of the proof of vaccination. I should also note that CBC has already pointed out that Ontario's enhanced vaccine certificate system is not accessible to marginalized people.

Downloading the Certificate

You will need to go to https://covid-19.ontario.ca/get-proof/ and answer the following questions:

How many doses of the COVID-19 vaccine do you currently have? (required)
Did you get all your doses in Ontario? (required)
Select which health card you have (required)
Do you identify as First Nations, Inuit, or Métis? (required) If you are a non-Indigenous partner or household member of someone in this group, select "Yes."

A couple more clicks (get the certificate through the website, get it by mail, or print it at a local library, a ServiceOntario location, or call a friend) and you will be asked to agree to the Terms of Service, which includes the following:
By inputting your personal information and personal health information into the COVID-19 Vaccination Services you are agreeing to the ministry's collection, use and disclosure of this information for the purpose of researching, investigating, eliminating or reducing the current COVID-19 outbreak and as permitted or required by law in accordance with PHIPA as set out above. You also agree that your information will be made available to the Public Health Unit(s) in Ontario responsible for your geographic area for the same purpose.
More specifically, by using COVID-19 Vaccination Services, you consent to the ministry collecting identifying information, including personal health information, about you that you submit through the patient verification page so that the Ministry can ensure that it correctly identifies you for the purpose of administering the COVID-19 vaccination program.
Neither the ministry nor the Public Health Unit(s) in Ontario will further use or disclose your personal information or personal health information except for the purposes set out above.

When you click on the "Get a copy" button you may be asked to wait because a virtual queue is being used to throttle traffic. When you get through you will be asked to provide information from your health card. I have a green OHIP card with a photo so I get this screen:

Assuming the information is correct you will be shown your new certificate, including a QR code (more on this later) as a PDF. At this point, it is on you to print out copies of the two-page certificate and carry them with you. I took it upon myself to crop the certificate to include just my name, birth date & the QR code and print it out small enough that I could put it in a laminating pouch and carry it in my wallet. This enables me to go to my wallet, pull out the laminate and my driving license, and have both ready for presentation. For me, that is the simplest and easiest way. Your mileage may vary since my approach requires being comfortable with basic pdf or graphic editing and having access to a printer. I also put electronic copies on my phone so that I could use that option

Verifying the certificate

Here is what I observed at brunch (shout out to the Sunset Grill on the Danforth).

1. Staff person asks for proof of vaccine
2. Customer digs out their phone or paper copy of the certificate
3. The staff person looks at it (or uses the Ontario Verify App)
4. Staff person asks for a government-issued ID
5. Customer digs out their driver's license
6. The staff person looks at the ID and verifies the name

We can do better

What I observed is NOT a user-friendly experience for either the customer or the business. For the experience to be improved, it needs to be a single presentation operation of either a paper or digital certificate that the business can verify in one step. Here's an example that I mocked up some months ago (the picture is from https://thispersondoesnotexist.com/).

This provides the following functionality:

- The existing QR code will return the same result (i.e. a green checkmark in the event of a good code)
- The verifier (staff person) can compare the photo on the card with the person in front of them rather than asking for a government ID.
- Digital verification may have the option of showing the verifier the picture on the verify app for increased assurance. Note that the Ministry of Health is already authorized to have pictures for the current health card.

Privacy and Security

The advantage of a paper and ID card presentation ritual is that it is difficult to hack. So if we are going to improve the presentation with a single credential as above, privacy and security MUST be protected. This is why a version 1 that is paper/PDF only is not a bad security and privacy choice. On the Verify Ontario app side, both the terms of use and privacy statement are reasonably clear (although the choice to use Google Analytics could be questioned) and make the right commitments.

Recommendations

Provide retailers with a verifier

It's nice that the Ontario Verify app is freely downloadable. I used it to check that the laminated cards that I made from my own certificate were readable. But this puts the burden on the retailer and their staff. When I saw someone come in with their QR code, the waitress had to dig out her personal phone and use that. Not a good solution. Either the provincial government or public health should provide retailers with a low to zero cost option to procure their own tablets for use on entry to the store.

Provide Ontarians with options

For example:

- I'm relatively tech-savvy, so I'd be happy with a QR code/certificate I could add to a wallet app on my phone for easy display without fumbling around.
- ServiceOntario should provide a service to produce laminated wallet cards WITH photos to any Ontarian who shows up at a ServiceOntario site.
- On a go-forward basis, ensure that people attending vaccination clinics get printouts of their QR code based certificates WHEN they get vaccinated, since the certificate includes the date of vaccination and presumably the QR code won't return a "Green" until the appropriate date.

With all of the above said, I have to say I'm happy that Ontario's first steps for vaccine certificates appear to have respected Ontarians' privacy and look to be built securely. I look forward to the next couple of weeks because I'm sure that security people will be pounding on the service to find flaws. WHEN they find flaws, let's hope that the province is responsive so that we can all benefit.  


Damien Bod

Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph application permissions part 2

This article shows how to create Microsoft Teams meetings in ASP.NET Core using Microsoft Graph with application permissions. This is useful if you have a designated account to manage or create meetings, send emails or would like to provide a service for users without an office account to create meetings. This is a follow up […]

This article shows how to create Microsoft Teams meetings in ASP.NET Core using Microsoft Graph with application permissions. This is useful if you have a designated account to manage or create meetings, send emails or would like to provide a service for users without an office account to create meetings. This is a follow up post to part one in this series which creates Teams meetings using delegated permissions.

Code: https://github.com/damienbod/TeamsAdminUI

Blogs in this series

Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph (delegated)

Setup Azure App registration

A simple ASP.NET Core application with no authentication was created and implements a form which creates online meetings on behalf of a designated account using Microsoft Graph with application permissions. The Microsoft Graph client uses an Azure App registration for authorization and the client credentials flow is used to authorize the client and get an access token. No user is involved in this flow and the application requires administration permissions in the Azure App registration for Microsoft Graph.

An Azure App registration is set up to authenticate against Azure AD. The ASP.NET Core application will use application permissions for the Microsoft Graph. The permissions listed underneath are required to create the Teams meetings OBO (on behalf of) the designated account and to send emails to the attendees using the configured email account, which has access to Office.

Microsoft Graph application permissions:

- User.Read.All
- Mail.Send
- Mail.ReadWrite
- OnlineMeetings.ReadWrite.All

This is the list of permissions I have activated for this demo.

Configuration

The Azure AD configuration is used to get a new access token for the Microsoft Graph client and to define the email of the account which is used to create Microsoft Teams meetings and to send emails to the attendees. This account needs an Office account.

"AzureAd": { "TenantId": "5698af84-5720-4ff0-bdc3-9d9195314244", "ClientId": "b9be5f88-f629-46b0-ac4c-c5a4354ac192", // "ClientSecret": "add secret to the user secrets" "MeetingOrganizer": "--your-email-for-sending--" },

Setup Client credentials flow for Microsoft Graph

A number of different ways can be used to authorize a Microsoft Graph client, which can be a bit confusing. Using the DefaultAzureCredential is not really a good idea for Graph because you need to decide whether you use delegated authorization or application authorization, and the DefaultAzureCredential will take the first credential which works, which depends on the environment. For application authorization, I use the ClientSecretCredential from Azure.Identity to get the service access token. This requires the .default scope and a client secret or a client certificate. Using a client secret is fine if you control both client and server and the secret is stored in an Azure Key Vault. A client certificate could also be used.

private GraphServiceClient GetGraphClient()
{
    string[] scopes = new[] { "https://graph.microsoft.com/.default" };
    var tenantId = _configuration["AzureAd:TenantId"];

    // Values from app registration
    var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
    var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    return new GraphServiceClient(clientSecretCredential, scopes);
}

The IConfidentialClientApplication interface could also be used to get access tokens, which are used to authorize the Graph client. A simple in-memory cache is used to store the access token. This token is reused until it expires or the application is restarted. If using multiple instances, maybe a distributed cache would be better. The client uses the “https://graph.microsoft.com/.default” scope to get an access token for the Microsoft Graph client. A GraphServiceClient instance is returned with a valid access token.

public class ApiTokenInMemoryClient
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ILogger<ApiTokenInMemoryClient> _logger;
    private readonly IConfiguration _configuration;
    private readonly IConfidentialClientApplication _app;

    private readonly ConcurrentDictionary<string, AccessTokenItem> _accessTokens = new();

    private class AccessTokenItem
    {
        public string AccessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    public ApiTokenInMemoryClient(IHttpClientFactory clientFactory, IConfiguration configuration, ILoggerFactory loggerFactory)
    {
        _clientFactory = clientFactory;
        _configuration = configuration;
        _logger = loggerFactory.CreateLogger<ApiTokenInMemoryClient>();
        _app = InitConfidentialClientApplication();
    }

    public async Task<GraphServiceClient> GetGraphClient()
    {
        var result = await GetApiToken("default");
        var httpClient = _clientFactory.CreateClient();
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", result);
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var graphClient = new GraphServiceClient(httpClient)
        {
            AuthenticationProvider = new DelegateAuthenticationProvider(async (requestMessage) =>
            {
                requestMessage.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result);
                await Task.FromResult<object>(null);
            })
        };

        return graphClient;
    }

    private async Task<string> GetApiToken(string api_name)
    {
        if (_accessTokens.ContainsKey(api_name))
        {
            var accessToken = _accessTokens.GetValueOrDefault(api_name);
            if (accessToken.ExpiresIn > DateTime.UtcNow)
            {
                return accessToken.AccessToken;
            }
            else
            {
                // remove
                _accessTokens.TryRemove(api_name, out _);
            }
        }

        _logger.LogDebug($"GetApiToken new from STS for {api_name}");

        // add
        var newAccessToken = await AcquireTokenSilent();
        _accessTokens.TryAdd(api_name, newAccessToken);

        return newAccessToken.AccessToken;
    }

    private async Task<AccessTokenItem> AcquireTokenSilent()
    {
        //var scopes = "User.read Mail.Send Mail.ReadWrite OnlineMeetings.ReadWrite.All";
        var authResult = await _app
            .AcquireTokenForClient(scopes: new[] { "https://graph.microsoft.com/.default" })
            .WithAuthority(AzureCloudInstance.AzurePublic, _configuration["AzureAd:TenantId"])
            .ExecuteAsync();

        return new AccessTokenItem
        {
            ExpiresIn = authResult.ExpiresOn.UtcDateTime,
            AccessToken = authResult.AccessToken
        };
    }

    private IConfidentialClientApplication InitConfidentialClientApplication()
    {
        return ConfidentialClientApplicationBuilder
            .Create(_configuration["AzureAd:ClientId"])
            .WithClientSecret(_configuration["AzureAd:ClientSecret"])
            .Build();
    }
}

OnlineMeetings Graph Service

The AadGraphApiApplicationClient service is used to send the Microsoft Graph requests. This uses the GraphServiceClient with the correct access token. The GetUserIdAsync method is used to get the Graph Id using the UPN. This is used in the Users API to run the requests with the application scopes. The Me property is not used, as this is for delegated scopes. We have no user in this application. We run the requests as an application on behalf of the designated user.

public class AadGraphApiApplicationClient
{
    private readonly IConfiguration _configuration;

    public AadGraphApiApplicationClient(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    private async Task<string> GetUserIdAsync()
    {
        var meetingOrganizer = _configuration["AzureAd:MeetingOrganizer"];
        var filter = $"startswith(userPrincipalName,'{meetingOrganizer}')";

        var graphServiceClient = GetGraphClient();
        var users = await graphServiceClient.Users
            .Request()
            .Filter(filter)
            .GetAsync();

        return users.CurrentPage[0].Id;
    }

    public async Task SendEmailAsync(Message message)
    {
        var graphServiceClient = GetGraphClient();
        var saveToSentItems = true;
        var userId = await GetUserIdAsync();

        await graphServiceClient.Users[userId]
            .SendMail(message, saveToSentItems)
            .Request()
            .PostAsync();
    }

    public async Task<OnlineMeeting> CreateOnlineMeeting(OnlineMeeting onlineMeeting)
    {
        var graphServiceClient = GetGraphClient();
        var userId = await GetUserIdAsync();

        return await graphServiceClient.Users[userId]
            .OnlineMeetings
            .Request()
            .AddAsync(onlineMeeting);
    }

    public async Task<OnlineMeeting> UpdateOnlineMeeting(OnlineMeeting onlineMeeting)
    {
        var graphServiceClient = GetGraphClient();
        var userId = await GetUserIdAsync();

        return await graphServiceClient.Users[userId]
            .OnlineMeetings[onlineMeeting.Id]
            .Request()
            .UpdateAsync(onlineMeeting);
    }

    public async Task<OnlineMeeting> GetOnlineMeeting(string onlineMeetingId)
    {
        var graphServiceClient = GetGraphClient();
        var userId = await GetUserIdAsync();

        return await graphServiceClient.Users[userId]
            .OnlineMeetings[onlineMeetingId]
            .Request()
            .GetAsync();
    }

    private GraphServiceClient GetGraphClient()
    {
        string[] scopes = new[] { "https://graph.microsoft.com/.default" };
        var tenantId = _configuration["AzureAd:TenantId"];

        // Values from app registration
        var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
        var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        return new GraphServiceClient(clientSecretCredential, scopes);
    }
}
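The repo registers a TeamsService on top of this client (see the startup code below), but its implementation is not reproduced here. As a rough sketch of how such a service might call CreateOnlineMeeting, assuming the Microsoft.Graph v4 object model; the class name, method name and parameter values are placeholders I made up for illustration:

// Minimal sketch (not the repo implementation): build an OnlineMeeting and
// create it on behalf of the configured organizer using the client above.
public class TeamsMeetingSketch
{
    private readonly AadGraphApiApplicationClient _graphApiClient;

    public TeamsMeetingSketch(AadGraphApiApplicationClient graphApiClient)
    {
        _graphApiClient = graphApiClient;
    }

    public async Task<OnlineMeeting> CreateMeetingAsync(
        string subject, DateTimeOffset start, DateTimeOffset end, string attendeeUpn)
    {
        var onlineMeeting = new OnlineMeeting
        {
            Subject = subject,
            StartDateTime = start,
            EndDateTime = end,
            Participants = new MeetingParticipants
            {
                Attendees = new List<MeetingParticipantInfo>
                {
                    new MeetingParticipantInfo
                    {
                        Upn = attendeeUpn, // attendee identified by UPN
                        Role = OnlineMeetingRole.Attendee
                    }
                }
            }
        };

        // The created meeting is returned with JoinWebUrl populated,
        // which can then be emailed to the attendees.
        return await _graphApiClient.CreateOnlineMeeting(onlineMeeting);
    }
}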

The startup class adds the services as required. No authentication is added for the ASP.NET Core application.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<AadGraphApiApplicationClient>();
    services.AddSingleton<ApiTokenInMemoryClient>();
    services.AddScoped<EmailService>();
    services.AddScoped<TeamsService>();

    services.AddHttpClient();
    services.AddOptions();

    services.AddRazorPages();
}

Azure Policy configuration

We need to allow applications to access online meetings on behalf of a user with this setup. This is implemented using the following documentation:

https://docs.microsoft.com/en-us/graph/cloud-communication-online-meeting-application-access-policy

Testing

When the application is started, you can create a new Teams meeting with the required details. The configured email must have an account with access to Office and be on the same tenant as the Azure App registration set up for the Microsoft Graph application permissions. The email account must have a policy set up to allow the Microsoft Graph calls. The Teams meeting is organized using the configured organizer account because we used application permissions; no user signs in to the application.
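The meeting join link can then be mailed to the attendees through the SendEmailAsync method shown earlier. A minimal sketch of building such a mail with the Graph Message model follows; the subject text, HTML body and helper name are placeholders of my own, not code from the repo:

// Minimal sketch (not the repo implementation): mail the Teams join link to an attendee.
public static Message BuildMeetingInvitation(OnlineMeeting meeting, string attendeeEmail)
{
    return new Message
    {
        Subject = $"Teams meeting: {meeting.Subject}",
        Body = new ItemBody
        {
            ContentType = BodyType.Html,
            Content = $"<p>You are invited. <a href=\"{meeting.JoinWebUrl}\">Join the meeting</a></p>"
        },
        ToRecipients = new List<Recipient>
        {
            new Recipient
            {
                EmailAddress = new EmailAddress { Address = attendeeEmail }
            }
        }
    };
}

// Usage:
// await aadGraphApiApplicationClient.SendEmailAsync(
//     BuildMeetingInvitation(createdMeeting, "attendee@example.com"));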

This works really well and can be used for Azure B2C solutions as well. If possible, you should only use delegated scopes in the application. By using application permissions, the ASP.NET Core application is implicitly an administrator of these permissions as well. It would be better if user accounts with delegated access were used, which are managed by your IT department etc.

Links:

https://docs.microsoft.com/en-us/graph/api/application-post-onlinemeetings

https://github.com/AzureAD/microsoft-identity-web

Send Emails using Microsoft Graph API and a desktop client

https://www.office.com/?auth=2

https://aad.portal.azure.com/

https://admin.microsoft.com/Adminportal/Home

https://blazorhelpwebsite.com/ViewBlogPost/43

Sunday, 17. October 2021

Simon Willison

Web Browser Engineering

Web Browser Engineering In progress free online book by Pavel Panchekha and Chris Harrelson that demonstrates how a web browser works by writing one from scratch using Python, tkinter and the DukPy wrapper around the Duktape JavaScript interpreter. Via @humphd

Web Browser Engineering

In progress free online book by Pavel Panchekha and Chris Harrelson that demonstrates how a web browser works by writing one from scratch using Python, tkinter and the DukPy wrapper around the Duktape JavaScript interpreter.

Via @humphd

Friday, 15. October 2021

Doc Searls Weblog

On solving the worldwide shipping crisis

The worldwide shipping crisis is bad. Here are some reasons: “Just in time” manufacturing, shipping, delivery, and logistics. For several decades, the whole supply system has been optimized for “lean” everything. On the whole, no part of it fully comprehends breakdowns outside the scope of immediate upstream or downstream dependencies. The pandemic, which has been […]


The worldwide shipping crisis is bad. Here are some reasons:

- “Just in time” manufacturing, shipping, delivery, and logistics. For several decades, the whole supply system has been optimized for “lean” everything. On the whole, no part of it fully comprehends breakdowns outside the scope of immediate upstream or downstream dependencies.
- The pandemic, which has been depriving nearly every sector of labor, intelligence, leadership, data, and much else, since early last year.
- Catastrophes. The largest of these was the 2021 Suez Canal Obstruction, which has had countless effects upstream and down.
- Competing narratives. Humans can’t help reducing all complex situations to stories, all of which require protagonists, problems, and movement toward resolution. It’s how our minds are built, and why it’s hard to look more deeply and broadly at any issue and why it’s here. (For more on that, see Where Journalism Fails.)
- Corruption. This is endemic to every complex economy: construction, online advertising, high finance, whatever. It happens here too. (And, like incompetence, it tends to worsen in a crisis.)
- Bureaucracies & non-harmonized regulations. More about this below*.
- Complicating secondary and tertiary effects. The most obvious of these is inflation. Says here, “the spot rate for a 40-foot shipping container from Shanghai to Los Angeles rising from about $3,500 last year to $12,500 as of the end of September.” I’ve since heard numbers as high as $50,000. And, of course, inflation also happens for other reasons, which further complicates things.

To wrap one’s head around all of those (and more), it might help to start with Aristotle’s four “causes” (which might also be translated as “explanations”). Wikipedia illustrates these with a wooden dining table:

Its material cause is wood. Its efficient cause is carpentry. Its final cause is dining. Its formal cause (what gives it form) is design.

Of those, formal cause is what matters most. That’s because, without knowledge of what a table is, it wouldn’t get made.

But the worldwide supply chain (which is less a single chain than braided rivers spreading outward from many sources through countless deltas) is impossible to reduce to any one formal cause. Mining, manufacturing, harvesting, shipping on sea and land, distribution, wholesale and retail sales are all involved, and specialized in their own ways, dependencies withstanding.

I suggest, however, that the most formal of the supply chain problem’s causes is also what’s required to sort out and solve it: digital technology and the Internet. From What does the Internet make of us?, sourcing the McLuhans:

“People don’t want to know the cause of anything”, Marshall said (and Eric quotes, in Media and Formal Cause). “They do not want to know why radio caused Hitler and Gandhi alike. They do not want to know that print caused anything whatever. As users of these media, they wish merely to get inside…”

We are all inside a digital environment that is making each of us while also making our systems. This can’t be reversed. But it can be understood, at least to some degree. And that understanding can be applied.

How? Well, Marshall McLuhan—who died in 1980—saw in the rise of computing the retrieval of what he called “perfect memory—total and exact.” (Laws of Media, 1988.) So, wouldn’t it be nice if we could apply that power to the totality of the world’s supply chains, subsuming and transcending the scope and interests of any part, whether those parts be truckers, laws, standards, and the rest—and do it in real time? Global aviation has some of this, but it’s also a much simpler system than the braided rivers between global supply and global demand.

Is there something like that? I don’t yet know. Closest I’ve found is the UN’s IMO (International Maritime Organization), and that only covers “the safety and security of shipping and the prevention of marine and atmospheric pollution by ships.” Not very encompassing, that. If any of ya’ll know more, fill us in.

[*Added 18 October] Just attended a talk by Oswald Kuyler, Managing Director of the International Chamber of Commerce’s Digital Standards initiative, on an “Integrated Approach” by his and allied organizations that addresses “digital islands,” “no single view of available standards” both open and closed, “limited investments into training, change management and adoption,” “lack of enabling rules and regulations,” “outdated regulation,” “privacy law barriers,” “trade standard adoption gaps,” “costly technical integration,” “fragmentation” that “prevents paperless trade,” and other factors. Yet he also says the whole thing is “bent but not broken,” and that (says one slide) “trade and supply chain prove more resilient than imagined.”

Another relevant .org is the International Chamber of Shipping.

By the way, Heather Cox Richardson (whose newsletter I highly recommend) yesterday summarized what the Biden administration is trying to do about all this:

Biden also announced today a deal among a number of different players to try to relieve the supply chain slowdowns that have built up as people turned to online shopping during the pandemic. Those slowdowns threaten the delivery of packages for the holidays, and Biden has pulled together government officials, labor unions, and company ownership to solve the backup.

The Port of Los Angeles, which handles 40% of the container traffic coming into the U.S., has had container ships stuck offshore for weeks. In June, Biden put together a Supply Chain Disruption Task Force, which has hammered out a deal. The port is going to begin operating around the clock, seven days a week. The International Longshore and Warehouse Union has agreed to fill extra shifts. And major retailers, including Walmart, FedEx, UPS, Samsung, Home Depot, and Target, have agreed to move quickly to clear their goods out of the dock areas, speeding up operations to do it and committing to putting teams to work extra hours.

“The supply chain is essentially in the hands of the private sector,” a White House official told Donna Littlejohn of the Los Angeles Daily News, “so we need the private sector…to help solve these problems.” But Biden has brokered a deal among the different stakeholders to end what was becoming a crisis.

Hopefully helpful, but not sufficient.

Bonus link: a view of worldwide marine shipping. (Zoom in and out, and slide in any direction for a great way to spend some useful time.)

The photo is of Newark’s container port, viewed from an arriving flight at EWR, in 2009.

Thursday, 14. October 2021

Mike Jones: self-issued

Proof-of-possession (pop) AMR method added to OpenID Enhanced Authentication Profile spec

I’ve defined an Authentication Method Reference (AMR) value called “pop” to indicate that Proof-of-possession of a key was performed. Unlike the existing “hwk” (hardware key) and “swk” (software key) methods, it is intentionally unspecified whether the proof-of-possession key is hardware-secured or software-secured. Among other use cases, this AMR method is applicable whenever a WebAuthn

I’ve defined an Authentication Method Reference (AMR) value called “pop” to indicate that Proof-of-possession of a key was performed. Unlike the existing “hwk” (hardware key) and “swk” (software key) methods, it is intentionally unspecified whether the proof-of-possession key is hardware-secured or software-secured. Among other use cases, this AMR method is applicable whenever a WebAuthn or FIDO authenticator is used.
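For illustration only (this is not from the specification or the author), a relying party on ASP.NET Core could gate access on this AMR value roughly as follows; the policy name is a made-up example, and the "amr" claim type assumes the token handler surfaces each array entry as its own claim:

// Illustrative sketch: require that the ID token's "amr" claim contains "pop".
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(options =>
    {
        // "RequirePopAmr" is a hypothetical policy name for this sketch.
        options.AddPolicy("RequirePopAmr", policy =>
            policy.RequireAssertion(context =>
                context.User.HasClaim(c => c.Type == "amr" && c.Value == "pop")));
    });
}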

The specification is available at these locations:

https://openid.net/specs/openid-connect-eap-acr-values-1_0-01.html
https://openid.net/specs/openid-connect-eap-acr-values-1_0.html

Thanks to Christiaan Brand for suggesting this.

Wednesday, 13. October 2021

MyDigitalFootprint

Do shareholders have to live in a Peak Paradox?

For the past 50 years, most business leaders in free market-based economies have been taught or trained in a core ideology; "that the purpose of business is to serve only the shareholder". The source was #MiltonFriedman. Whilst shareholder primacy is simple and allows businesses to optimise for one thing, which has an advantage in terms of decision making, it has also created significant damage. I

For the past 50 years, most business leaders in free market-based economies have been taught or trained in a core ideology; "that the purpose of business is to serve only the shareholder". The source was #MiltonFriedman. Whilst shareholder primacy is simple and allows businesses to optimise for one thing, which has an advantage in terms of decision making, it has also created significant damage. It does not stand up to modern critique and challenges such as ESG. Importantly there is an assumption that a shareholder group was a united and unified collective.
There is an assumption: shareholders are united and unified
The reality is that in a public or investor-led company, the shareholders are as diverse in terms of vision, rationale, purpose and expectation as any standard group of associated parties. Shareholders as a group rarely have an entirely homogeneous objective. Given the dominance of shareholder primacy, how do we know this concept of non-homogeneous actors is genuine? Because conflict among shareholders is so significant that it constitutes a topic of high practical relevance and academic interest. Two areas dominate research in this field: board performance and board dynamics. Particular attention has been given to conflict that arises when ownership is shared between a dominant, controlling shareholder and minority shareholders.




The majority and dominant shareholders have incentives to pursue personal goals through the business as they disproportionately gain the benefits but do not fully bear the economic risks and can misuse their power to exploit minority shareholders. That is, the majority shareholder may push management and the board to pursue objectives that align with their own priorities but that are detrimental for the minority shareholder. Studies have shown that such conflicts negatively affect a firm's performance, valuation, and innovation. In listed companies, market regulators, using minority protection clauses, try to avoid this abuse of power, but in private equity markets, there is no regulator to prevent this conflict.

Surprisingly, however, far less research has examined whether and how different types of shareholders (even stakeholders) can complement each other so that mutual benefits arise.

In a recent study examining shareholder relationships in privately held firms, the outcomes of private equity investments in privately held family businesses were compared. The research hypothesis was that the objectives of professional investors and family owners differ. PE investors focus on maximising financial returns through a medium-term exit and generally have lower levels of risk aversion. Family shareholders, in contrast, generally have most of their wealth concentrated in a single firm, hold longer time horizons, and are often particularly concerned about non-economic benefits the firm brings to the family (e.g., reputation in the community). What is clear is that the reason to trade and have a purpose is critical for any alignment to emerge.

We know that boards and management need to be informed and review shareholders' objectives. Shareholders need to be informed and review the extent to which they are still aligned in terms of time horizons, risk preferences, need for cash, prioritising financial goals, and whether control or dominance of an agenda is constructive or destructive. This is the segue to the Peak Paradox mapping above.






The fundamental issue is that we cannot make decisions at Peak Paradox and have to move towards a peak to determine what we are optimising for. Shareholders who optimise for Peak Individual Purpose will want to use a dominant position or minority protection to force an agenda towards their own goals. A high-performing board, it is perceived, will optimise towards Peak Work Purpose. This includes commercial and non-commercial organisations, where commercial boards will optimise for a single objective such as shareholder primacy. What becomes evident at this point is the question about diversity. Dominant shareholders can create environments where the lack of diversity of thinking, experience, motivation, purpose and incentives serves them and their objectives. This will also be seen as a board that does not ask questions, cannot deal with conflict and avoids tension. Finally, if we move towards optimising for a better society (Peak Social Purpose), we find that we move towards serving stakeholders.

Will society be better off because this business exists?
The critical point here is that stakeholders cannot live with Peak Paradox's decision-making and have to find something to optimise for and align to. This will not be a three or five-year plan but a reason for the business to exist. At Peak Paradox, the most important questions get asked, such as "Will society be better off because this business exists?" The same data can support both a yes and a no answer, and the board needs to justify why it believes the answer is yes.

Tuesday, 12. October 2021

Mike Jones: self-issued

OpenID Connect Presentation at IIW XXXIII

I gave the following invited “101” session presentation at the 33rd Internet Identity Workshop (IIW) on Tuesday, October 12, 2021: Introduction to OpenID Connect (PowerPoint) (PDF) The session was well attended. There was a good discussion about the use of passwordless authentication with OpenID Connect.

I gave the following invited “101” session presentation at the 33rd Internet Identity Workshop (IIW) on Tuesday, October 12, 2021:

Introduction to OpenID Connect (PowerPoint) (PDF)

The session was well attended. There was a good discussion about the use of passwordless authentication with OpenID Connect.


Werdmüller on Medium

Checking in, checking out

A short story about the future of work Continue reading on Medium »

A short story about the future of work

Continue reading on Medium »

Monday, 11. October 2021

Damien Bod

Challenges to Self Sovereign Identity

The article goes through some of the challenges we face when using or implementing identity, authentication and authorization solutions using self sovereign identity. I based my findings after implementing and testing solutions and wallets with the following SSI solution providers: Trinsic MATTR.global Evernym Azure Active Directory Verifiable Credentials Different Wallets like Lissi Blogs in this

The article goes through some of the challenges we face when using or implementing identity, authentication and authorization solutions using self sovereign identity. I based my findings on implementing and testing solutions and wallets from the following SSI solution providers:

- Trinsic
- MATTR.global
- Evernym
- Azure Active Directory Verifiable Credentials
- Different wallets like Lissi

Blogs in this series:

- Getting started with Self Sovereign Identity SSI
- Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic
- Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic
- Create an OIDC credential Issuer with MATTR and ASP.NET Core
- Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
- Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR
- Challenges to Self Sovereign Identity
- Create and issue verifiable credentials in ASP.NET Core using Azure AD

History

2021-10-31 Updated phishing section after feedback.

SSI (Self Sovereign Identity) is very new and a lot of its challenges will hopefully get solved and help to improve identity management solutions.

Some definitions:

- Digital Identity: This is the ID which represents a user, for example an E-ID issued by the state; this could be a certificate, hardware key, verifiable credential etc.
- Identity: This is the user + application trying to access something, which usually needs to be authenticated when using a protected user interactive resource.
- Authentication: verifying the “Identity”, i.e. application + user for user interactive flows.
- Authorization: verify that the request presents the required credentials, specifying access rights/privileges to resources. This could mean no verification of who or what sent the request, although this can be built in with every request if required. Solutions exist for this in existing systems.

The following diagram from the Verifiable Credentials Data Model 1.0 specification shows a good overview of verifiable credentials with issuers, holders and verifiers. The holder is usually represented through a wallet application which can be any application type, not just mobile applications.

Level of security for user interaction authentication with SSI

Authentication using SSI credentials would have the same level of security as the authenticator apps which you have for existing systems. This is not as safe as using FIDO2 in your authentication process, as FIDO2 is the only solution which protects against phishing. The SSI authentication is also only as good as the fallback process, so if the fallback or recovery process allows a username and password login, then the level of security is that of passwords.

See this post for more details:

The authentication pyramid

Authentication Issuer

The authentication process is not any better than in previous systems; every issuer needs to do this properly. The trust quality of the issuer depends on this. If a verifier wants to use verifiable credentials from a certain issuer, then a trust must exist between the verifier and the issuer. If the issuer of the credentials makes mistakes or does this in a bad way, then the verifier has this problem as well. It is really important that the credential issuer authenticates correctly and only issues credentials to correctly authenticated identities.

SIOP (Self-Issued OpenID Provider) provides one solution for this. With this solution, every issuer requires its own specific IDP (identity provider) clients and OIDC (OpenID Connect) profiles for the credentials which are issued.

When a credential issuer has a data leak or a possible security bug, maybe all credentials issued from this issuer would need to be revoked. This might mean that all data in any verifier applications created from this issuer also needs to be revoked or deleted. This is worse than before, where we had many isolated systems. With SSI, we have a chain of trust. A super disaster would be if a government which issues verifiable credentials had to revoke credentials due to a security leak or security bug; then all data and processes created from this would have to be evaluated and processed again, or in the worst case deleted and new proofs required.

Authentication/Authorization Verifier

Every verifier application/process has a responsibility for its own authorization and the quality of its verification. Depending on what the verifier needs to do, a decision on the required verifiable credentials needs to be taken. A verifier must decide if it needs only to authorize the verifiable credential it receives from the wallet application, or if it needs to authenticate the digital identity used in the subject of the verifiable credential. If you only authorize a verifiable credential when you should be authenticating the digital identity used in the verifiable credential, this will probably result in false identities in the verifier application or processes run for the wrong user. Once the verifiable credential is accepted as trustworthy, an identity could be created from the verifiable credential and the digital identity, if contained in the subject.

Some solutions move the authentication of the digital identity to the wallet and the verifiers accept verifiable credentials from the wallet without authentication of the digital identity. This would mean that the wallet requires specific authentication steps and restrictions. Another solution would be to use a certificate which is attached to the digital identity for authentication on the verifier and on the wallet. SIOP and OpenID Connect would also provide a solution to this.

Access to Wallets, Authentication & Authorization of the holder of the Wallet

One of the biggest problems is to manage and define what verifiable credentials can be loaded into a wallet. Wallets also need the possibility to import and export backups and also to share credentials with other wallets using DIDComm or something like this. Enterprise wallets will be used by many different people. This means that solutions which fix the identity to a specific wallet will not work if shared wallets or backups are to be supported. Wallets would also need to support or load verifiable credentials for other people who are not able to do this themselves or do not have access to a wallet. With these requirements, it is very hard to prove that a verifiable credential belongs to the person using the wallet. Extra authentication is required on the wallet, and the issuers and the verifier applications cannot know that the credential issued to wallet x really belongs to person x, who is the subject of the verifiable credential. Solutions with certificates or wallet authentication can help solve this, but no one solution will fit all use cases.

If a relationship between the person using the wallet, the credentials in the wallet and how the verifiers use and trust the credentials is managed badly, many security problems will exist. People are people and will lose wallets, share wallets when they should not and so on. This needs to be planned for and the issuer of verifiable credentials and the verifier of these credentials need to solve this correctly. This would probably mean when using verifiable credentials from a wallet, the issuer application and the verifier application would need to authenticate the user as well as the credentials being used. Depending on the use case, the verifier would not always need to do this.

Interoperability between providers & SSI solutions

At present it is not possible to use any wallet with any SSI solution. Each solution is locked to its own wallet. Some solutions provide a way of exporting verifiable credentials and importing them again in a different vendor's wallet, but not using the wallet from vendor 1 together with the solution from vendor 2. Solutions also do not support the same specifications and standards. Each implementation I tried supports different standards and has vendor-specific solutions which are not compatible with other vendors.

I have a separate wallet installed now for each solution I tried. It cannot be expected that users install many different wallets to use SSI. Also, if a government issues a verifiable credential with a certain wallet, it would be really bad if all verifiers and further credential issuers must use the same wallet.

With JSON-LD and BBS+, the APIs between the wallets and the SSI services should work together, but vendor-specific details seem to be used in the payloads. If SSI is to work, I believe any wallet which conforms to a standard x must work with any SSI service. Or the services and vendor implementations need a common standard for issuing and verifying credentials with wallets and the agents used in the wallets. Some type of certification process would probably help here.

Phishing

Users authenticating on HTTPS websites using verifiable credentials stored in their wallet are still vulnerable to phishing attacks. This can be improved by using FIDO2 as a second factor to the SSI authentication. The DIDComm communication between agents has strong protection against phishing.

Self sovereign identity phishing scenario using HTTPS websites:

1. User opens a phishing website in the browser using HTTPS and clicks the fake sign-in.
2. Phishing service starts a correct SSI sign-in on the real website using HTTPS and gets presented with a QR code and the start link for a digital wallet to complete.
3. Phishing service presents this back to the victim on the phishing website.
4. Victim scans the QR code using the digital wallet and completes the authentication using the agent in the digital wallet, DIDComm etc.
5. When the victim completes the sign-in using the out-of-band digital wallet and agent, the HTTPS website being used by the attacker gets updated with the session of the victim.

This can only be prevented by using the browser client-side origin as part of the flow, signing it and returning it to the server to be validated. Unless the origin from the client browser is used and validated in the authentication process, this type of phishing attack cannot be prevented. The browser client-side origin is not used in the SSI login.
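As a rough illustration of that mitigation (this is not part of any current SSI flow or specification), a verifier that did receive a wallet-signed origin could compare it to its own expected origin; the class, method name and parameters below are invented for the sketch:

using System;

// Illustrative sketch: after validating the wallet's signature over the response,
// the verifier checks that the origin the wallet saw matches its own origin.
public static class OriginCheck
{
    public static bool OriginMatches(string originFromSignedWalletResponse, string expectedVerifierOrigin)
    {
        return Uri.TryCreate(originFromSignedWalletResponse, UriKind.Absolute, out var presented)
            && Uri.TryCreate(expectedVerifierOrigin, UriKind.Absolute, out var expected)
            && Uri.Compare(presented, expected,
                   UriComponents.SchemeAndServer, UriFormat.Unescaped,
                   StringComparison.OrdinalIgnoreCase) == 0;
    }
}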

PII databases will still be created

Credential issuers require PII data to create verifiable credentials. This data is persisted and can be leaked, like we see every day on the internet. Due to costs/laws/charges, verifiers will also copy credentials and create central PII databases as well. The applications doing verifications will also save data to a local database. For example, if an insurance company uses verifiable credentials to create a new account, it will save the data from the verifiable credential to a local database as this is required for its processes. This will have the PII data as well. So even with SSI solutions we will still share and have many databases of PII data.

If BBS+ ZKP is used which does not share the data with the verifier, just a verification, this is a big improvement compared to existing solutions. At the time of testing, this is not supported yet with any of the solutions I tried, but the specifications exist. Most solutions only support selective or compound proofs. If the verifier does not save this data, then SSI has added a big plus compared to existing solutions. A use case for this would be requesting a government document which only requires proof of the digital identity and this can be issued then without storing extra PII data or requiring a shared IDP with the digital identity issuer.

Complexity of the Self Sovereign Identity

I find the complexity when implementing SSI solutions still very high. Implementation of a basic issue-credential, verify-credential process requires many steps. Dev environments also require some type of callback or webhooks. ngrok solves this well, but the URL is different after each start and needs to be re-configured regularly, which then requires new presentation templates, which adds extra costs and so on. If integrating this in DevOps processes with integration testing or system testing, this would become more complex. Due to the overall complexity of setting this up and implementing it, developers will make mistakes, and this results in higher costs. IDP or PKI solutions are easier to implement in a secure way.

Trust

When using the provided solutions, some of these systems are closed source, and you have no idea how the solutions manage their data, how they handle the secrets in their infrastructure, or how the code is implemented. For some solutions, I had to share my OpenID Connect client secret for the SSI SIOP credential issuer flow to complete. This was persisted somewhere on the platform, which does not feel good. You also don’t know who they share their secrets with, and the provider company has different legal requirements depending on your country of origin. Implementing a “full” SSI solution does not seem like a good option for most use cases. Open source software improves this trust at least, but is not the only factor.

Notes:

I would love feedback on this and will correct, change anything which is incorrect or you believe I should add something extra. These challenges are based on my experiences implementing the solutions and reading the specifications. I really look forward to the new SSI world and how this develops. I think SSI is cool, but people need to know when and how to use it and it is not the silver bullet to identity but does add some super cool new possibilities.

Reviewers:

Matteo Locher @matteolocher

Links:

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

https://lissi.id/

trinsic

https://mattr.global/

Evernym

The authentication pyramid

https://w3c.github.io/did-core/

https://w3c.github.io/vc-data-model/

https://www.w3.org/TR/vc-data-model/

https://identity.foundation/

OIDF OIDC SSI

https://ngrok.com/

https://github.com/swiss-ssi-group


reb00ted

What does a personal home page look in the Metaverse? A prototype

At IndieWeb Create Day this past weekend, I created a prototype for what an IndieWeb-style personal home page could look like in the metaverse. Here’s a video of the demo: The source is here, in case you want to play with it.

At IndieWeb Create Day this past weekend, I created a prototype for what an IndieWeb-style personal home page could look like in the metaverse.

Here’s a video of the demo:

The source is here, in case you want to play with it.


It's been 15 years of Project VRM: Here's a collection of use cases and requirements identified over the years

Today’s Project VRM meeting marks the project’s 15 years anniversary. A good opportunity to list the uses cases that have emerged over the years. To make them more manageable, I categorize them by the stage of the relationship between customer and vendor: Category 1: Establishing the relationship What happens when a Customer or a Vendor wishes to initiate a relationship, or wishes to modify th

Today’s Project VRM meeting marks the project’s 15-year anniversary. A good opportunity to list the use cases that have emerged over the years. To make them more manageable, I categorize them by the stage of the relationship between customer and vendor:

Category 1: Establishing the relationship

What happens when a Customer or a Vendor wishes to initiate a relationship, or wishes to modify the terms of the relationship.

1.1 Customer initiates a new relationship with a Vendor

“As a Customer, I want to offer initiating a new relationship with a Vendor.”

Description:

1. The Customer encounters the Vendor’s electronic presence (e.g. their website)
2. The Customer performs a gesture on the Vendor’s site that indicates their interest in establishing a relationship
3. As part of the gesture, the Customer’s proposed terms are conveyed to the Vendor
4. In response, the Vendor provides acceptance of the proposed relationship and offered terms, or offers alternate terms in return.
5. If the offered terms are different from the proposed terms, the Customer has the opportunity to propose alternate terms; this continues until both parties agree on the terms or abort the initiation.
6. Once the terms have been agreed on, both sides record their agreement.

Notes:

To make this “consumer-grade”, much of the complexity of such concepts as “proposed terms” needs to be hidden behind reasonable defaults.

1.2 Vendor initiates a new relationship with a Customer

“As a Vendor, I want to offer initiating a new relationship with a Customer.”

Similar as for “Customer initiates a new relationship with a Vendor”, but with reversed roles.

1.3 Customer and Vendor agree on a closer relationship

“The Customer and the Vendor agree on a closer relationship.”

Description:

The Customer and the Vendor have been in a relationship governed by certain terms for some time. Now, either the Customer or the Vendor propose new terms to the other party, and the other party accepts. The new terms permit all activities permitted by the old terms, plus some additional ones.

Example:

The Customer has agreed with the Vendor that the Vendor may send the Customer product updates once a month. For that purpose, the Customer has provided Vendor an e-mail address (but no physical address). Now, the Customer has decided to purchase a product from Vendor. To ship it, the Vendor needs to have the Customer’s shipping address. New terms that also include the shipping address are being negotiated.

1.4 A Customer wants a more distant relationship

“As a Customer, I want to limit the Vendor to more restrictive terms.”

Description:

The Customer and the Vendor have been in a relationship governed by certain terms for some time. Now, the Customer wishes to disallow certain activities previously allowed by the terms, without terminating the relationship. The Customer offers new terms, which the Vendor may or may not accept. The Vendor may offer alternate terms in turn. This negotiation continues until either mutually acceptable terms are found, or the relationship terminates.

Examples:

- The Customer has agreed with the Vendor that the Vendor may send the Customer product updates. Now the Customer decides that they do not wish to receive product updates more frequently than once a quarter.
- The Customer has agreed to behavioral profiling when visiting the Vendor’s website. Now, while the Customer still wishes to use the website, they no longer consent to the behavioral profiling.

2. Category: Ongoing relationship

What happens during a relationship after it has been established and while no party has the intention of either modifying the strength of, or even leaving the relationship.

2.1 Intentcasting

“As a Customer, I want to publish, to a selection of Vendors that I trust, that I wish to purchase a product with description X”

Benefits:

- Convenience for Customer
- Potential for a non-standard deal (e.g. I am a well-known influencer, and the seller makes Customer a special deal)

Features:

- It’s basically a shopping list: free-form text plus “terms” (FIXME: open issue)
- Retailers can populate product alternatives for each item in the list
- This might simply be a search at the retailer site, but terms need to be computable

Issues:

- What if a retailer lies and spams with unrelated products, or unavailable products?

2.2 Automated intentcasting on my behalf

“As a Customer, I want an ‘AI’ running on my behalf to issue Intentcasts when it is in my interest”

Description:

This is similar to functionality deployed by some retailers today: “We noticed you have not bought diapers in the last 30 days. Aren’t you about to run low? Here are some offers”. But this functionality runs on my behalf, and takes data from all sorts of places into account.

Benefits:

- Convenience, time savings

2.3 Contextual product reviews and ratings

“As a Customer, I want to see reviews and ratings of the product and seller alternatives for any of my shopping list items.”

Benefits:

- Same as product reviews in silos, but without the silos: have access to more reviews and ratings by more people
- Seller alternatives give Customer more choice

2.4 Filter offers by interest

“As a Customer, I want to receive offers that match my implicitly declared interest”

Description:

- In Intentcasting, I actively publish my intent to meet a need I know I have by purchasing a product.
- This is about offers in response to needs, or benefits, that I have not explicitly declared but that can be inferred.
- Example: if I purchased a laser printer 6 months ago, and I have not purchased replacement toner cartridges nor have I declared an intent to purchase some, I would appreciate offers for such toner cartridges (but not for inkjet cartridges).

Benefits:

- Convenience for Customer
- Better response rate for Vendor

2.5 Full contact record

“As a Customer, I want a full record of all interactions between Customer and each Vendor accessible in a single place.”

Benefits:

- Simplicity & Convenience
- Trackability
- Similar to CRM

Notes:

This should cover all modes of communication, from e-mail to trouble tickets, voice calls and home visits.

2.6 Manage trusted Vendors in a single place

“As a Customer, I want to see and manage my list of trusted Vendors in a single place”

Benefits:

- Simplicity
- Transparency (to Customer)

Notes:

- Probably should also have a non-trusted Vendors list, so my banned vendors are maintained in the same place; which subset is being displayed is just a filter function
- Probably should have a list of all Vendors ever interacted with

2.7 Notify of changes about products I’m interested in

“As a Customer, I want be notified of important changes (e.g. price) in products that I’m interested in.”

Benefits:

can use my shopping cart as a “price watch list” and purchase when I think the price is right

Notes:

This should apply to items in my shopping cart, but also items in product lists that I might have created (“save for later” lists)

2.8 Personal wallet

“As a Customer, I want my own wallet that I can use with any Vendor”

Benefits:

- Simplicity & Convenience
- Unified billing

Notes:

- Unified ceremony
- Should be able to delegate to the payment network of my choice

2.9 Preventative maintenance

“As a Customer, I would like to be notified of my options when a product I own needs maintenance”

Description:

If I have a water heater, and it is about to fail, I would like to be notified that it is about to fail, with offers for what to do about it. It’s a kind of intentcasting but the intent is inferred from the fact that I own the product, the product is about to fail, that I don’t want it to fail and I am willing to entertain offers from the vendor I bought it from and others.

Benefits:

- Convenience for Customer
- No service interruption

2.10 Product clouds

“Each product instance has its own cloud”

Benefits:

- Collects information over the lifetime of the product instance
- Product instance-specific
- Can change ownership with the product
- Does not disappear with the vendor

Example:

My water heater has its own cloud. It knows usage, and maintenance events. It continues to work even if the vendor of the water heater goes out of business.

2.11 Product info in context

“As a Customer, I want to access product documentation, available instruction manuals etc in the context of the purchase that I made”

Benefits:

- Simplicity & Convenience
- Updated documentation, new materials etc show up automatically

2.12 Set and monitor terms for relationships

“As a Customer, I want to set terms for my relationship with a Vendor in a single place”

Benefits:

- Simplicity & Convenience
- Privacy & Agency

Notes:

Originally this was only about terms for provided personal data, but it appears this is a broader issue: I also want to set terms for, say, dispute resolution (“I never consent to arbitration”) or customer support (“must agree to never let Customer wait on the phone for more than 30min”)

2.13 Update information in a single place only

“As a Customer, I (only) want to update my personal contact information in one place that I control”

Benefits:

- for Customer: convenience
- for Vendor: more accurate information

Issues:

Should that be a copy and update-copy process (push), or a copy and update-on-need process (pull with copy) or a fetch-and-discard process (pull without copy)?

Notes:

Originally phrased as only about contact info (name, married name, shipping address, phone number etc), this probably applies to other types of information as well, such as, say, credit card numbers, loyalty club memberships, even dietary preferences or interests (“I gave up stamp collecting”)

2.14 Single shopping cart

“As a Customer, I want to use a single shopping cart for all e-commerce sites on the web.”

Benefits:

- I don’t need to create accounts on many websites, or log in to many websites
- I decide when the collection of items in the cart expires, and I don’t lose work
- It makes it easier for Customer to shop at more sites, and I can more easily buy from the long tail of sites

Features:

- It shows product and seller
- It may show alternate sellers and difference in terms (e.g. price, shipping, speed)
- We may also want to have product lists that aren’t a shopping cart (“save for later” lists)

2.15 Unified communications/notifications preferences

“As a Customer, I want to manage my communication/notification preferences with all Vendors in a single place”

Notes:

In a single place, and in a single manner. I should not have to do things differently to unsubscribe from the product newsletter of vendors X and Y.

Benefits:

- Simplicity & Convenience

2.16 Unified product feedback

“As a Customer, I want a uniform way to submit (positive and negative) product feedback (and receive responses) with any Vendor”

Benefits:

- Simplicity & Convenience
- Trackability
- Similar to CRM

Notes:

Should be easy to do this either privately or publicly

2.17 Unified purchase history

“As a Customer, I want to have a record of all my product purchases in a single place”

Benefits:

Simplicity & Convenience; if I wish to re-order a product I purchased before, I can easily find it and the vendor that I got it from

2.18 Unified subscriptions management

“As a Customer, I want to manage all my ongoing product subscriptions in a single place”

Benefits:

Simplicity & Convenience; expense management

Note:

This is implied by the source, not explicitly mentioned.

Future:

Opens up possibilities for subscription bundle business models.

2.19 Unified support experience

Benefits:

Simplicity & Convenience; trackability; similar to CRM

Notes:

Should be multi-modal: trouble tickets, chat, e-mail, voice etc.

3. Category: Beyond binary relationships

Use cases that involve more than one Customer, or more than one Vendor, or both.

3.1 Proven capabilities

“As a Vendor, I want to give another party (Customer or Vendor) the capabilities to perform certain operations”

Description:

This is SSI Object Capabilities. Example: I want to give my customer the ability to open a locker. The scenario should be robust with respect to confidentiality and accuracy.

3.2 Silo-free product reviews

“As a Customer, I want to publish my reviews and ratings about products I own so they can be used by any other Customer at any point of purchase”

Benefits:

Same as product reviews in silos, but without the silos: broader distribution of my review for more benefit by more people

Notes:

Rephrased from “express genuine loyalty”.

3.3 Monitoring of terms violation

“As a Customer, I want to be notified if other Customers interacting with a Vendor report a violation of their terms”

Description:

I have a relationship with a Vendor, and we have agreed to certain terms. If the Vendor breaks those terms, and other Customers in a similar relationship with the Vendor notice that, I want to be notified.

Benefits:

Trust, security, safety

Notes:

This can of course be abused through fake reports, so suitable measures must be taken.

3.4 Unified health and wellness records

“As a Customer, I want my health and wellness records from all sources to be aggregated in a place that I control.”

Benefits:

Survives disappearance of the vendor; privacy; allows cross-data-source personal analytics and insights; integration across healthcare (regulated industry) and wellness (consumer)

3.5 Verified credentials

“As a Customer, I want to be able to tell Vendor 1 that Vendor 2 makes a claim about Customer.”

Description:

This is the SSI verified credential use case. Example: I want to tell a potential employer that I have earned a certain degree from a certain institution. The scenario should be robust with respect to confidentiality and accuracy.

4. Category: Ending the relationship

What happens when one of the parties wishes to end the relationship.

4.1 Banning the vendor

“As a Customer, I want to permanently ban a Vendor from doing business with Customer ever again.”

Description:

This form of ending the relationship means that I don’t want to be told of offers or responses to Intentcasts etc. by this vendor ever again.

4.2 Disassociating from the vendor

“As a Customer, I want to stop interacting with a vendor at this time but I am open to future interactions.”

Description:

This probably means that data sharing and other interactions reset to the level they had before the relationship was established. However, the Customer and the Vendor do not need to go through the technical introduction ceremony again when the relationship is revived.

4.3 Firing the customer

“As a Vendor, I want to stop interacting with a particular Customer.”

Description:

For customers (or non-customers) that the Vendor does not wish to serve (e.g. because of excessive support requests), the Vendor may unilaterally terminate the relationship with the Customer.

Sunday, 10. October 2021

Werdmüller on Medium

The first wave, an army

A short story Continue reading on Medium »

Saturday, 09. October 2021

@_Nat Zone

I had a conversation with Tatsuya Kurosaka for IT批評

The theme was digital identity, but the conversation… The post “I had a conversation with Tatsuya Kurosaka for IT批評” first appeared on @_Nat Zone.

The theme was digital identity, but the conversation took place just as the controversy over JR East installing cameras inside its stations was heating up, so from there the discussion ranged widely and deeply.

It was put together very carefully, so I think readers will enjoy it while coming to understand how Kurosaka-san and I think about digital identity and privacy, and what issues concern us. It is fairly long and has been published in two parts.

Table of Contents

Part 1

The essence of the JR East surveillance camera problem

Japanese people are negative about advertising but tolerant of surveillance cameras

Japanese companies would rather dig their own wells than build shared water and sewer systems

The My Number identity register lacks essential elements

Unless we take stock of the essence of past failures, we will repeat them

The lack of accountability is undermining trust

Part 2

Why bank account registration for My Number still has not happened

GAIN: an initiative to provide a venue for trustworthy data exchange

The risks of depending on GAFAM for identity management

Personal information and data privacy are a state, not a structure

The rationality of installing surveillance cameras should be weighed against its costs

Identity is the capital that drives the Fourth Industrial Revolution

The post “I had a conversation with Tatsuya Kurosaka for IT批評” first appeared on @_Nat Zone.

Friday, 08. October 2021

reb00ted

What is the metaverse? A simple definition.

Mark Zuckerberg recently dedicated a long interview to the subject, so the metaverse must be a thing. Promptly, the chatter is exploding. Personally I believe the metaverse is currently underhyped: it will be A Really Big Thing, for consumers, and for businesses, and in particular for commerce and collaboration, far beyond what we can grasp in video games and the likes today. So what is it, this

Mark Zuckerberg recently dedicated a long interview to the subject, so the metaverse must be a thing. Promptly, the chatter is exploding. Personally I believe the metaverse is currently underhyped: it will be A Really Big Thing, for consumers, and for businesses, and in particular for commerce and collaboration, far beyond what we can grasp in video games and the likes today.

So what is it, this metaverse thing? I want to share my definition, because I think it is simple, and useful.

It’s best expressed in a two-by-two table:

The two axes are how you access it (physically vs. virtually) and where it is (in physical or virtual space). The four quadrants:

The virtual world, accessed physically: Augmented Reality

The virtual world, accessed virtually: today's internet and future virtual worlds

The physical world, accessed physically: our ancestors exclusively lived here

The physical world, accessed virtually: the Internet of Things

By way of explanation:

There is physical space – the physical world around us – and virtual space, the space of pure information, that only exists on computers.

Our ancestors, and we so far, have mostly been interacting with physical space by touching it physically. But in recent decades, we have learned to access it from the information sphere, and that’s usually described as the Internet of Things: if you run an app to control your lights, or open your garage door, that’s what I’m talking about.

So far, we have mostly interacted with virtual space through special-purpose devices that form the gateway to the virtual space: first computers, now phones and in the future: headsets. I call this accessing virtual space virtually.

And when we don our smart glasses, and wave our arms, we interact with virtual space from physical space, which is the last quadrant.

In my definition, those four quadrants together form the metaverse: the metaverse is a superset of both meatspace and cyberspace, so to speak.

This definition has been quite helpful for me to understand what various projects are working on.

Thursday, 07. October 2021

Werdmüller on Medium

Productivity score

You’ve got to work, bitch. Continue reading on Medium »

You’ve got to work, bitch.

Continue reading on Medium »


MyDigitalFootprint

What are we asking the questions for?

What are we asking the questions for? This link gives you access to all the articles and archives for </Hello CDO> This article unpacks questions and framing as I tend to focus on the conflicts, tensions, and compromises that face any CDO in the first 100 days — ranging from the effects of a poor job description to how a company’s culture means that data-led decisions are not decisions.
What are we asking the questions for?

This link gives you access to all the articles and archives for </Hello CDO>

This article unpacks questions and framing as I tend to focus on the conflicts, tensions, and compromises that face any CDO in the first 100 days — ranging from the effects of a poor job description to how a company’s culture means that data-led decisions are not decisions.




I love this TED talk from Dana Kanze at LSE. Dana’s talk builds on the research of Tory Higgins, who is credited with creating the social theory of “Regulatory Focus”. This is a good summary if you have not run into it before.

Essentially, the idea behind “Regulatory Focus” is to explore motivations and routes to getting the outcome you want. The context in this article is how the framing of questions creates biased outcomes. One framing in Regulatory Focus centres on a “Promotion Focus”, which looks for “gain” and can be translated as seeking hope, advancement and accomplishment. The counter is a “Prevention Focus”, which centres on losses and looks for safety, responsibility and security.

In Dana’s research, which is the basis for her talk about why women get less venture funding, she categorises “Promotion” questions as ones that focus on GAIN. When talking about customers, they seek data about acquisition; about income, data to confirm sales; about the market, its size; about the balance sheet, assets; about projections, growth; and about strategy, the vision.

Dana’s research shows that “Prevention” questions are framed around LOSSES: a focus on customers seeks data on retention, income questions look for data on margin, market questions focus on shape, balance sheet questions centre on liability, projection questions want to confirm stability, and strategy questions look to ensure execution capability.

Dana collected this data as part of a noble and useful study on why women get less funding from venture capital: investors tend to ask women prevention questions, framed away from gain and towards losses and downside risk. However, we should reflect on this data and research for a little longer.

Does your executive team tend to focus on promotion or prevention questions? Do different executive roles tend to frame their questions based on gain (upside) or loss prevention (risk)? Is the person leading the questions critical to the decision-making unit, and will their questions frame promotion or prevention for the company? Whilst a rounded team should ask both, do they?

As the CDO, our role is to find “all data”, not data to fit a question. Data that shows how your company frames questions for prevention or promotion is just data, but it will help you as a team make better decisions.

A specific challenge in the first 100 days is to determine what questions we, as a leadership team, ask ourselves and how our framing is biasing our choices and decisions.  We are unlikely to like this data and how power aligns with those who prevent or promote a proposal or recommendation.  

Based on listening to questions in meetings and in written communications, I have been building an ontology of questions and categorising them for a while now. I am interested in the absolute bias, the bias towards people, the bias towards project types, and also how bias changes depending on who is leading the project and the questions. Based on a sample size that is not statistically significant, I believe that Dana’s work has far wider implications than just venture funding.
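To illustrate (a toy sketch of my own, not Dana’s instrument), even a crude keyword pass over the gain and loss vocabularies above can surface how a team frames its questions; the keyword lists and the classification rule below are illustrative assumptions.

# Toy sketch: tag question framing using the gain/loss vocabularies above.
PROMOTION = {"acquisition", "sales", "size", "assets", "growth", "vision"}            # gain-framed
PREVENTION = {"retention", "margin", "shape", "liability", "stability", "execution"}  # loss-framed

def frame(question: str) -> str:
    words = set(question.lower().strip("?!. ").split())
    if words & PROMOTION:
        return "promotion (gain)"
    if words & PREVENTION:
        return "prevention (loss)"
    return "unclassified"

print(frame("How fast is customer acquisition growing?"))   # promotion (gain)
print(frame("What is your customer retention rate?"))       # prevention (loss)

Real categorisation needs human judgement, but even a pass like this, run over board packs and meeting notes, shows where the questions lean.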

---

Whilst our ongoing agile iteration into information beings is never-ending, there are the first 100 days in the new role. But what to focus on? Well, that rose-tinted period of conflicting priorities is what </Hello, CDO!> is all about. Maintaining sanity when all else has been lost to untested data assumptions is a different problem entirely.

Wednesday, 06. October 2021

Mike Jones: self-issued

Server-contributed nonces added to OAuth DPoP

The latest version of the “OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP)” specification adds an option for servers to supply a nonce value to be included in the DPoP proof. Both authorization servers and resource servers can provide nonce values to clients. As described in the updated Security Considerations, the nonce prevents […]

The latest version of the “OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP)” specification adds an option for servers to supply a nonce value to be included in the DPoP proof. Both authorization servers and resource servers can provide nonce values to clients.

As described in the updated Security Considerations, the nonce prevents a malicious party in control of the client (who might be a legitimate end-user) from pre-generating DPoP proofs to be used in the future and exfiltrating them to a machine without the DPoP private key. When server-provided nonces are used, actual possession of the proof-of-possession key is being demonstrated — not just possession of a DPoP proof.
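By way of illustration only (a minimal sketch under assumptions, not text from the specification): a DPoP proof is a signed JWT whose payload names the HTTP method and URI and, when the server has supplied one, echoes the nonce, with the public key carried in the header. The sketch below assumes PyJWT plus cryptography; the URL and nonce values are placeholders.

# Minimal sketch of a client building a DPoP proof that echoes a server-supplied nonce.
import json, time, uuid
import jwt                                   # PyJWT
from jwt.algorithms import ECAlgorithm
from cryptography.hazmat.primitives.asymmetric import ec

dpop_key = ec.generate_private_key(ec.SECP256R1())                  # client's DPoP key pair
public_jwk = json.loads(ECAlgorithm.to_jwk(dpop_key.public_key()))  # public half goes in the header

server_nonce = "nonce-from-DPoP-Nonce-response-header"              # placeholder value

proof = jwt.encode(
    {
        "jti": str(uuid.uuid4()),                  # unique identifier for this proof
        "htm": "POST",                             # HTTP method of the request being proven
        "htu": "https://as.example.com/token",     # target URI (placeholder)
        "iat": int(time.time()),
        "nonce": server_nonce,                     # binds the proof to the server-provided nonce
    },
    dpop_key,
    algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk},
)
# The proof travels in the DPoP request header; a pre-generated proof without the current
# nonce (or without the private key) is useless for future requests.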

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-04.html

Werdmüller on Medium

The mirror of infinite worlds

We can reach into the multiverse and find your happiest self. Continue reading on Medium »

We can reach into the multiverse and find your happiest self.

Continue reading on Medium »

Tuesday, 05. October 2021

Phil Windley's Technometria

Ugh! There's an App for That!

Summary: Interoperability is a fundamental property of tech systems that are generative and respect individual privacy and autonomy. And, as a bonus, it makes people's lives easier! I traveled to Munich for the European Identity Conference several weeks ago—my first international trip since before the pandemic. Post-pandemic travel can be confusing, international travel even more so. To get

Summary: Interoperability is a fundamental property of tech systems that are generative and respect individual privacy and autonomy. And, as a bonus, it makes people's lives easier!

I traveled to Munich for the European Identity Conference several weeks ago—my first international trip since before the pandemic. Post-pandemic travel can be confusing, international travel even more so. To get into Germany, I needed to prove I had been vaccinated. To get back into the US, I had to show evidence of a negative Covid-19 test conducted no more than three days prior to my arrival in the US. Figuring out how to present these, which forms to fill out, and who cared led to a bit of stress.

Once in Munich, venturing out of the hotel had its own surprises. Germany, or maybe just Munich, has implemented a contact tracing system. Apparently there's one or more apps you can use, but I had a tough time figuring that out. Restaurants were spotty in their demands and enforcement. After showing them my US vaccination card, one small place just threw up their hands and said "just come in and eat!"

You'd think all this would have been a perfect opportunity for me to use a health pass app, but whose? None of the ones I knew about had a way for me to get my vaccination status into them. Delta airlines didn't accept any that I could tell. Munich has its own—or three. In short, it's an interoperability nightmare.

Apple recently announced that you can store your Covid-19 vaccination card in the Health app. And soon you'll be able to put it in the phone's wallet app. Android users have similar options. But there's also hundreds (really) of apps for doing the same thing and each has its own tiny ecosystem. PC Magazine even wrote a guide with every US state's chosen app.

The size of Apple and Android may help solve this problem since they already represent huge ecosystems. But then we're stuck with whatever solutions they provide and we may not like what we get. You don't have to look far to see BigTech solutions that have left people scratching their head. We can't continue to count on the benevolence of our erstwhile dictators.

Writing in Communications of the ACM, Cory Doctorow discusses what can be done and hits upon a conceptually simple answer: enforce interoperability.

[Interoperability] is the better way. Instead of enshrining Google, Facebook, Amazon, Apple, and Microsoft as the Internet's permanent overlords and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.

It's possible that people will connect tools to their BigTech accounts that do ill-advised things they come to regret. That is kind of the point, really. After all, people can plug weird things into their car's lighter receptacles, but the world is a better place when you get to decide how to use that useful, versatile ANSI/SAE J56-compliant plug—not GM or Toyota.

By enforcing interoperability, we avoid empowering Apple, Google, or anyone else as the sole trusted repositories of health data and then hoping they'll not only do the right thing, but also have all the best ideas. We can let the hundreds of Covid-19 health pass apps thrive (or not), secure in the knowledge that people can pick the app they like from a company they trust and still have it work when they travel across borders—or just across town.

People with decision making authority can help this process by choosing the interoperable solution and avoiding closed, proprietary technology. Why are organizations like NASCIO (National Association of State CIOs), for example, not doing more to help states find, choose, and use interoperable solutions to common state IT problems? Beyond health pass applications, things like mobile drivers licenses need to be interoperable to be useful.

Fortunately, we have open-source technology, with multiple commercial providers, that can provide interoperable credentials. Verifiable credential technology and SSI tick all the checkboxes for interoperable data exchange. Several production use cases exist that can inform new projects.
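To make that concrete, here is a minimal sketch of the shape of a verifiable credential for something like a vaccination record; the field values, the credential type, and the proof are illustrative placeholders rather than any particular governance framework.

# Illustrative only: the shape of a W3C-style verifiable credential as a Python dict.
vaccination_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "VaccinationCredential"],   # second type is hypothetical
    "issuer": "did:example:health-authority",
    "issuanceDate": "2021-09-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder",
        "vaccine": "COVID-19",
        "doses": 2,
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:health-authority#key-1",
        "proofValue": "...",                                      # signature omitted
    },
}
# A verifier checks the issuer's signature locally; there is no phone-home to the issuer,
# and any wallet or verifier that understands the data model can work with it.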

In Health Passes and the Design of an Ecosystem of Ecosystems, I wrote about efforts to create interoperable health passes on principles that respect individual privacy and autonomy. Those efforts have created real apps, with real interoperability. But too many decision makers jettison interoperability in search of profit, dominance, or simply time to market. The lesson of the internet was not that it was the cheapest, easiest way to build a global network. We had CompuServe for that. The lesson of the internet was that it was the best way to build a network that was generative and provided the most utility for the greatest number of people. Interoperability was the key—and still is.

Photo Credit: Apps from Pixabay (pixabay)

Tags: ssi identity wallets covid interoperability


Heather Vescent

Three Governments enabling digital identity interoperability

Photo by Andrew Coop on Unsplash Since 2016, a growing number of digital identity experts have worked together to create privacy preserving, digitally native, trusted credentials, which enable the secure sharing of data from decentralized locations. U.S., Canadian, and European Governments see how this technology can provide superior data protection for its citizens, while enabling global data
Photo by Andrew Coop on Unsplash

Since 2016, a growing number of digital identity experts have worked together to create privacy preserving, digitally native, trusted credentials, which enable the secure sharing of data from decentralized locations. U.S., Canadian, and European Governments see how this technology can provide superior data protection for its citizens, while enabling global data sharing — that’s why they have invested more than $16 million USD into this space over the past several years.

On September 15, 2021, I moderated a panel with representatives from the United States Government, the Canadian Government, and the European Commission. Below is an edited excerpt from the panel that included:

Anil John, Technical Director of the Silicon Valley Innovation Program, which has invested $8 million in R&D, proofs of concept, product development, refinement, and a digital wallet UI competition.

Tim Bouma, Senior Policy Analyst for identity management at the Treasury Board Secretariat of the Government of Canada. The User-centric Verifiable Digital Credentials Challenge has awarded $4 million CAD in two phases.

Olivier Bringer, Head of the Next-Generation Internet at the European Commission; under his program, he has awarded about €6 million through three open calls for eID and SSI solutions.

Heather Vescent, Co-Chair of the Credentials Community Group, which incubates many of the open standards in this space, and an author of The Comprehensive Guide to Self Sovereign Identity.

Policy and Technology

Heather Vescent: Policy tends to be squishy while technology and especially technology standards must be precise. What challenges do you face implementing policy decisions into technology?

Anil John: Our two primary work streams come out of the oldest parts of the U.S. government: U.S. Citizenship and Immigration Service, and U.S. Customs and Border Protection.

Immigration credentials must be available to anyone regardless of their technical savvy or infrastructure availability. The USCIS team is focused on leaving nobody behind, and does not have the luxury of pivoting to a digital-only model. Technology has to provide credentials in electronic format, as well as on paper — each with a high degree of verification and validation.

U.S. Customs has to deal with every single entity that is shipping goods into the U.S. We don’t have any choice but to be globally interoperable, because while we may be the largest customs organization on the planet, we do not want to mandate a single platform or technology stack. Interoperability is critical so that everybody has a choice in the technology that they are using.

It is easy to get pushback about why we are doing this long public process, rather than putting money into a vendor and buying their technology. One of the reasons we made the decision to work in public, under the remit of a global standards organization, was to ensure that we were not repeating past mistakes where we were locked into particular vendors or platforms. In the past, we were locked into proprietary APIs, with high switching costs, left with the care and feeding of legacy systems that were very uniquely government centric. There are benefits to developing technology in public, from both a solution-choice and a public-interest perspective.

Tim Bouma: The challenge is that you have to deal with short term exigencies, and combine that with a long-term vision to come up with requirements that are fairly timeless. Because once you develop the requirements, they have to last for a decade or more. This forces you to think of what the timeless requirements might be. In order to do that, you have to understand the technology very deeply. You have to understand the abstraction, so you can come up with language that can serve the test of time.

Olivier Bringer: I agree that it can be a challenge to implement policy choices into technology; but it can also be an opportunity. Innovators have not waited for the European General Data Protection Regulation before developing privacy preserving technologies. This is an opportunity for companies, an opportunity for administration, an opportunity for innovators to develop new technologies and new business models.

“Policy is an opportunity to implement fundamental rights into technology.” — Olivier Bringer

In NGI we try to take the policy development, the regulations that we have, as an opportunity to implement our fundamental rights, an opportunity to implement the law into technology. Our program supports innovators, the adoption of their technologies, their solutions, and their integration into standards. We do that firstly in Europe, but our ambition is to link to the global environment and work in cooperation with others.

A Benefit to Citizens

Vescent: What would you tell your citizens is the most important reason to invest in this infrastructure and businesses that use it?

Bringer: First that we are funding technologies, which are, this is our motto, human centric. So returning to your first question, going beyond pure policy development, we think it’s really important to develop the technology. So next to the regulation that we put in place in the field of electronic identity, in the field of AI, in the field of data protection, it’s important to fund the technologies that will implement these policies and regulation. This is really what we try to do when we build an internet that’s more trustworthy, that gives more control to the users in terms of the data they disclose, in terms of control of their identity online, in terms of including everyone in this increasingly important digital environment. It is technology geared towards the citizen.

John: I’m going to answer for myself, rather than speak for my organization. Having said that, I think what I would tell [citizens] would be that we want to make your life more secure, and more privacy-respecting, without leaving anybody behind. These are the first technologies that have come along that give us a hope in ensuring that we’re not trapped by nefarious or corporate or money interests. That there is a choice in the marketplace in what is available to our citizens in how they access it. And we absolutely are not ignoring the people who may not have a level of comfort with digital technologies and leaving them behind, but ensuring that there is a clear bridge with this technology to what their level of comfort is.

Bouma: We are moving to a notion of a digital economy, and it’s more than some slogan. We are building a digital infrastructure that is becoming a critical infrastructure. It is important that we understand that we develop the capabilities that ensure that the citizens actually feel safe. This is as fundamental as having safe drinking water and a regulated electricity supply. So now we need to start thinking about the digital capabilities, verifiable credentials included, as part of the national and international infrastructure.

We’re doing proof of concepts and pilots on national digital infrastructure, to understand what it means, to use a Canadian metaphor, create a “set of rails” that goes across the country. What would that look like? What are the capabilities? At the end of the day, we need to build services that can be trusted by Canadians and everyone. And there’s a lot of engineering and a lot of policy work that has to go into that.

Most Surprising Lesson

Vescent: What’s the most surprising thing you’ve learned since the inception of your investment programs?

Bouma: I had a major shift in perspective. There are other technical ecosystems we have to take into account, like the mobile driver’s license, the digital travel credentials, and others. So, we have to figure out how to incorporate all those requirements. The mantra I’ve been using is, we’ve got these different technical ecosystems but we have to focus on the human being — the point of integration is at the digital wallet. So that’s reframed my policy thinking — there’s a multiplicity of technical ecosystems that we have to account from a policy point of view.

Bringer: I’m impressed by the quality of the innovators. We have people who are very good in technology who understand the political challenges, the policy context in which they intervene and who are able to make excellent contribution to our own policy, and who are really dedicated to our human centric vision.

John: I’ll give you one positive one, one negative. On the positive side, I am happy there is a community of people that understands there is value in working together to ensure the shared infrastructure that we are all using has a common foundation of security, privacy, and interoperability — that it is not a zero-sum game. They can compete on top of a common foundation.

On the negative side, I’m fascinated by the shenanigans being pulled by people who use the theater of interoperability in order to peddle proprietary solutions. Fortunately, I think a lot of the public sector entities are getting more educated so that they can see through the interoperability theater.

Curious for more? Watch the full panel. Or click through for a playlist with technology demos showing interoperability.

Learn more by the author

Heather Vescent is a co-chair of the W3C Credentials Community Group, and an author of the Comprehensive Guide to Self-Sovereign Identity. Curious about the Credentials Community Group or want to get started with decentralized identity technology? Get in touch for a personal introduction.

Three Governments enabling digital identity interoperability was originally published in In Present Tense on Medium, where people are continuing the conversation by highlighting and responding to this story.


Jon Udell

My own personal AWS S3 bucket

I’ve just rediscovered two digital assets that I’d forgotten about. 1. The Reddit username judell, which I created in 2005 and never used. When you visit the page it says “hmm… u/judell hasn’t posted anything” but also reports, in my Trophy Case, that I belong to the 15-year club. 2. The Amazon AWS S3 bucket … Continue reading My own personal AWS S3 bucket

I’ve just rediscovered two digital assets that I’d forgotten about.

1. The Reddit username judell, which I created in 2005 and never used. When you visit the page it says “hmm… u/judell hasn’t posted anything” but also reports, in my Trophy Case, that I belong to the 15-year club.

2. The Amazon AWS S3 bucket named simply jon, which I created in 2006 for an InfoWorld blog post and companion column about the birth of Amazon Web Services. As Wikipedia’s timeline shows, AWS started in March of that year.

Care to guess the odds that I could still access both of these assets after leaving them in limbo for 15 years?

Spoiler alert: it was a coin flip.

I’ve had no luck with Reddit so far. The email account I signed up with no longer exists. The support folks kindly switched me to a current email but it’s somehow linked to Educational_Elk_7869 not to judell. I guess we may still get it sorted but the point is that I was not at all surprised by this loss of continuity. I’ve lost control of all kinds of digital assets over the years, including the above-cited InfoWorld article which only Wayback (thank you as always!) now remembers.

When I turned my attention to AWS S3 I was dreading a similar outcome. I’d gone to Microsoft not long after I made that AWS developer account; my early cloud adventures were all in Azure; could I still access those long-dormant AWS resources? Happily: yes.

Here’s the backstory from that 2006 blog post:

Naming

The name of the bucket is jon. The bucket namespace is global which means that as long as jon is owned by my S3 developer account, nobody else can use that name. Will this lead to a namespace land grab? We’ll see. Meanwhile, I’ve got mine, and although I may never again top Jon Stewart as Google’s #1 Jon, his people are going to have to talk to my people if they want my Amazon bucket.

I’m not holding my breath waiting for an offer. Bucket names never mattered in the way domain names do. Still, I would love to be pleasantly surprised!

My newfound interest in AWS is, of course, because Steampipe wraps SQL around a whole bunch of AWS APIs including the one for S3 buckets. So, for example, when exactly did I create that bucket? Of course I can log into the AWS console and click my way to the answer. But I’m all about SQL lately so instead I can do this.

> select name, arn, creation_date from aws_s3_bucket
+-------+--------------------+---------------------+
| name  | arn                | creation_date       |
+-------+--------------------+---------------------+
| jon   | arn:aws:s3:::jon   | 2006-03-16 08:16:12 |
| luann | arn:aws:s3:::luann | 2007-04-26 14:47:45 |
+-------+--------------------+---------------------+

Oh, and there’s the other one I made for Luann the following year. These are pretty cool ARNs (Amazon Resource Names)! I should probably do something with them; the names you can get nowadays are more like Educational_Elk_7869.

Anyway I’m about to learn a great deal about the many AWS APIs that Steampipe can query, check for policy compliance, and join with the APIs of other services. Meanwhile it’s fun to recall that I wrote one of the first reviews of the inaugural AWS product and, in the process, laid claim to some very special S3 bucket names.
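Since Steampipe exposes those tables over an ordinary Postgres endpoint, the same queries can also be scripted from any Postgres client. A hedged sketch follows; the port, database, and user are what I understand the local defaults to be, and the password is whatever steampipe service start prints for you.

# Sketch assuming a local Steampipe service with default connection settings.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1", port=9193, dbname="steampipe",
    user="steampipe", password="...",      # password printed by `steampipe service start`
)
with conn, conn.cursor() as cur:
    cur.execute("select name, creation_date from aws_s3_bucket order by creation_date")
    for name, created in cur.fetchall():
        print(name, created)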

Monday, 04. October 2021

Doc Searls Weblog

Where the Intention Economy Beats the Attention Economy

There’s an economic theory here: Free customers are more valuable than captive ones—to themselves, to the companies they deal with, and to the marketplace. If that’s true, the intention economy will prove it. If not, we’ll stay stuck in the attention economy, where the belief that captive customers are more valuable than free ones prevails. Let […]

There’s an economic theory here: Free customers are more valuable than captive ones—to themselves, to the companies they deal with, and to the marketplace. If that’s true, the intention economy will prove it. If not, we’ll stay stuck in the attention economy, where the belief that captive customers are more valuable than free ones prevails.

Let me explain.

The attention economy is not native to human attention. It’s native to businesses that  seek to grab and manipulate buyers’ attention. This includes the businesses themselves and their agents. Both see human attention as a “resource” as passive and ready for extraction as oil and coal. The primary actors in this economy—purveyors and customers of marketing and advertising services—typically talk about human beings not only as mere “users” and “consumers,” but as “targets” to “acquire,” “manage,” “control” and “lock in.” They are also oblivious to the irony that this is the same language used by those who own cattle and slaves.

While attention-grabbing has been around for as long as we’ve had yelling, in our digital age the fields of practice (abbreviated martech and adtech) have become so vast and varied that nobody (really, nobody) can get their head around everything that’s going on in them. (Examples of attempts are here, here and here.)

One thing we know for sure is that martech and adtech rationalize taking advantage of absent personal privacy tech in the hands of their targets. What we need there are the digital equivalents of the privacy tech we call clothing and shelter in the physical world. We also need means to signal our privacy preferences, to obtain agreements to those, and to audit compliance and resolve disputes. As it stands in the attention economy, privacy is a weak promise made separately by websites and services that are highly incentivised not to provide it. Tracking prophylaxis in browsers is some help, but it works differently for every browser and it’s hard to tell what’s actually going on.

Another thing we know for sure is that the attention economy is thick with fraud, malware, and worse. For a view of how much worse, look at any adtech-supported website through PageXray and see the hundreds or thousands of ways the site and its invisible partners are trying to track you. (For example, here’s what Smithsonian Magazine‘s site does.)

We also know that lawmaking to stop adtech’s harms (e.g. GDPR and CCPA) has thus far mostly caused inconvenience for you and me (how many “consent” notices have interrupted your web surfing today?)—while creating a vast new industry devoted to making tracking as easy as legally possible. Look up GDPR+compliance and you’ll get way over 100 million results. Almost all of those will be for companies selling other companies ways to obey the letter of privacy law while violating its spirit.

Yet all that bad shit is also a red herring, misdirecting attention away from the inefficiencies of an economy that depends on unwelcome surveillance and algorithmic guesswork about what people might want.

Think about this: even if you apply all the machine learning and artificial intelligence in the world to all the personal data that might be harvested, you still can’t beat what’s possible when the targets of that surveillance have their own ways to contact and inform sellers of what they actually want and don’t want, plus ways to form genuine relationships and express genuine (rather than coerced) loyalty, and to do all of that at scale.

We don’t have that yet. But when we do, it will be an intention economy. Here are the opening paragraphs of The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012):

This book stands with the customer. This is out of necessity, not sympathy. Over the coming years, customers will be emancipated from systems built to control them. They will become free and independent actors in the marketplace, equipped to tell vendors what they want, how they want it, where and when—even how much they’d like to pay—outside of any vendor’s system of customer control. Customers will be able to form and break relationships with vendors, on customers’ own terms, and not just on the take-it-or-leave-it terms that have been pro forma since Industry won the Industrial Revolution.

Customer power will be personal, not just collective.  Each customer will come to market equipped with his or her own means for collecting and storing personal data, expressing demand, making choices, setting preferences, proffering terms of engagement, offering payments and participating in relationships—whether those relationships are shallow or deep, and whether they last for moments or years. Those means will be standardized. No vendor will control them.

Demand will no longer be expressed only in the forms of cash, collective appetites, or the inferences of crunched data over which the individual has little or no control. Demand will be personal. This means customers will be in charge of personal information they share with all parties, including vendors.

Customers will have their own means for storing and sharing their own data, and their own tools for engaging with vendors and other parties.  With these tools customers will run their own loyalty programs—ones in which vendors will be the members. Customers will no longer need to carry around vendor-issued loyalty cards and key tags. This means vendors’ loyalty programs will be based on genuine loyalty by customers, and will benefit from a far greater range of information than tracking customer behavior alone can provide.

Thus relationship management will go both ways. Just as vendors today are able to manage relationships with customers and third parties, customers tomorrow will be able to manage relationships with vendors and fourth parties, which are companies that serve as agents of customer demand, from the customer’s side of the marketplace.

Relationships between customers and vendors will be voluntary and genuine, with loyalty anchored in mutual respect and concern, rather than coercion. So, rather than “targeting,” “capturing,” “acquiring,” “managing,” “locking in” and “owning” customers, as if they were slaves or cattle, vendors will earn the respect of customers who are now free to bring far more to the market’s table than the old vendor-based systems ever contemplated, much less allowed.

Likewise, rather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers. Once customers’ expressions of intent become abundant and clear, the range of economic interplay between supply and demand will widen, and its sum will increase. The result we will call the Intention Economy.

This new economy will outperform the Attention Economy that has shaped marketing and sales since the dawn of advertising. Customer intentions, well-expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, and flooding media with messages that miss their marks. Advertising will also improve.

The volume, variety and relevance of information coming from customers in the Intention Economy will strip the gears of systems built for controlling customer behavior, or for limiting customer input. The quality of that information will also obsolete or re-purpose the guesswork mills of marketing, fed by crumb-trails of data shed by customers’ mobile gear and Web browsers. “Mining” of customer data will still be useful to vendors, though less so than intention-based data provided directly by customers.

In economic terms, there will be high opportunity costs for vendors that ignore useful signaling coming from customers. There will also be high opportunity gains for companies that take advantage of growing customer independence and empowerment.

But this hasn’t happened yet. Why?

Let’s start with supply and demand, which is roughly about price. Wikipedia: “the relationship between the price of a given good or product and the willingness of people to either buy or sell it.” But that wasn’t the original idea. “Supply and demand” was first expressed as “demand and supply” by Sir James Denham-Steuart in An Inquiry into the Principles of Political Oeconomy, written in 1767. To Sir James, demand and supply wasn’t about price. Specifically, “it must constantly appear reciprocal. If I demand a pair of shoes, the shoemaker either demands money or something else for his own use.” Also, “The nature of demand is to encourage industry.”

Nine years later, in The Wealth of Nations, Adam Smith, a more visible bulb in the Scottish Enlightenment, wrote, “The real and effectual discipline which is exercised over a workman is that of his customers. It is the fear of losing their employment which restrains his frauds and corrects his negligence.” Again, nothing about price.

But neither of those guys lived to see the industrial age take off. When that happened, demand became an effect of supply, rather than a cause of it. Supply came to run whole markets on a massive scale, with makers and distributors of goods able to serve countless customers in parallel. The industrial age also ubiquitized standard-form contracts of adhesion binding all customers to one supplier with a single “agreement.”

But, had Sir James and Adam lived into the current millennium, they would have seen that it is now possible, thanks to digital technologies and the Internet, for customers to achieve scale across many companies, with efficiencies not imaginable in the pre-digital industrial age.

For example, it should be possible for a customer to express her intentions—say, “I need a stroller for twins downtown this afternoon”—to whole markets, but without being trapped inside any one company’s walled garden. In other words, not only inside Amazon, eBay or Craigslist. This is called intentcasting, and among its virtues is what Kim Cameron calls “minimum disclosure for constrained purposes” to “justifiable parties” through a choice among a “plurality of operators.”
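To make that concrete, here is a hypothetical sketch of what such an intentcast might carry; every field name is an illustrative assumption. The point is minimum disclosure: the intent, the terms, and a reply channel, but no personal identifier.

# Hypothetical shape of an intentcast message; field names are illustrative assumptions.
intentcast = {
    "want": "stroller for twins",
    "where": "downtown",                                  # a neighborhood, not a street address
    "when": "this afternoon",
    "willing_to_pay": {"max": 300, "currency": "USD"},    # optional; the customer sets the terms
    "terms": "no tracking; offers only via the reply channel below",
    "reply_to": "https://example.net/intent/7f3a",        # per-intent address, discarded afterwards
}
# Any seller in the market can respond; none of them learns who the customer is unless a deal forms.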

Likewise, there is no reason why websites and services can’t agree to your privacy policy, and your terms of engagement. In legal terms, you should be able to operate as the first party, and to proffer your own terms, to which sites and services can agree (or, as privacy laws now say, consent) as second parties. That this is barely thinkable is a legacy of a time that has sadly not yet left us: one in which only companies can enjoy that kind of scale. Yet it would clearly be a convenience to have privacy as normalized in the online world as it is in the offline one. But we’re slowly getting there; for example with Customer Commons’ P2B1, aka #NoStalking term, which readers can proffer and publishers can agree to. It says “Just give me ads not based on tracking me.” Also with the IEEE’s P7012 Standard for Machine Readable Personal Privacy Terms working group.

Same with subscriptions. A person should be able to keep track of all her regular payments for subscription services, to keep track of new and better deals as they come along, to express to service providers her intentions toward those new deals, and to cancel or unsubscribe. There are lots of tools for this today, for example Truebill, Bobby, Money Dashboard, Mint, Subscript Me, BillTracker Pro, Trim, Subby, Card Due, Sift, SubMan, and Subscript Me. There are also subscription management systems offered by Paypal, Amazon, Apple and Google (e.g. with Google Sheets and Google Doc templates). But all of them to one degree or another are based more on the felt need by those suppliers for customer captivity than for customer independence.

As Customer Commons unpacks it here, there are many largely or entirely empty market spaces that are wide open for free and independent customers: identity, shopping (e.g. with shopping carts of your own to take from site to site), loyalty (of the genuine kind), property ownership (the real Internet of Things), and payments, for example.

It is possible to fill all those spaces if we have the capacity to—as Sir James put it—encourage industry, restrain fraud and correct negligence. While there is some progress in some of those areas, the going is still slow on the global scale. After all, The Intention Economy is nine years old and we still don’t have it yet. Is it just not possible, or are we starting in the wrong places?

I think it’s the latter.

Way back in 1995, when the Internet first showed up on both of our desktops, my wife Joyce said, “The sweet spot of the Internet isn’t global. It’s local.” That was the gist of my TEDx Santa Barbara talk in 2018. It’s also why Joyce and I are now in Bloomington, Indiana, working with the Ostrom Workshop at Indiana University on deploying a new way for demand and supply to inform each other and get business rolling—and to start locally. It’s called the Byway, and it works outside of the old supply-controlled industrial model. Here’s an FAQ. Please feel free to add questions in the comments here.

The title image is by the great Hugh Macleod, and was commissioned in 2004 for a startup he and I both served and is now long gone.

 


Werdmüller on Medium

The dark flood

What I did when the world went away Continue reading on Medium »

What I did when the world went away

Continue reading on Medium »


Damien Bod

Implement a secure API and a Blazor app in the same ASP.NET Core project with Azure AD authentication

The article shows how an ASP.NET Core API and a Blazor BFF application can be implemented in the same project and secured using Azure AD with Microsoft.Identity.Web. The Blazor application is secured using the BFF pattern, with its backend APIs protected using cookies with anti-forgery protection and same-site restrictions. The API is protected using JWT […]

The article shows how an ASP.NET Core API and a Blazor BFF application can be implemented in the same project and secured using Azure AD with Microsoft.Identity.Web. The Blazor application is secured using the BFF pattern, with its backend APIs protected using cookies with anti-forgery protection and same-site restrictions. The API is protected using JWT Bearer tokens and is used from a separate client on a different domain; it is not used by the Blazor application. When securing Blazor WASM hosted in an ASP.NET Core application, the BFF architecture should be used for the Blazor application rather than JWT tokens, especially in Azure, where it is not possible to log out correctly.

Code: https://github.com/damienbod/AspNetCore6Experiments

Setup

The Blazor application consists of three projects. The Server project implements the OpenID Connect user interaction flow and authenticates both the client and the user. The APIs created for the Blazor WASM client are protected using cookies. A second API is implemented for separate clients and is protected using JWT tokens. Two separate Azure App registrations are set up for the UI client and the API. If the API is used, a third Azure App registration would be used for its client, for example an ASP.NET Core Razor page or a Power App.

API

The API is implemented and protected with the MyJwtApiScheme scheme, which is set up later in the Startup class. The API uses Swagger configuration for Open API 3, and a simple HTTP GET is implemented to validate the API security.

[Route("api/[controller]")] [ApiController] [Authorize(AuthenticationSchemes = "MyJwtApiScheme")] [Produces("application/json")] [SwaggerTag("Using to provide a public api for different clients")] public class MyApiJwtProtectedController : ControllerBase { [HttpGet] [ProducesResponseType(StatusCodes.Status200OK, Type = typeof(string))] [SwaggerOperation(OperationId = "MyApiJwtProtected-Get", Summary = "Returns string with details")] public IActionResult Get() { return Ok("yes my public api protected with Azure AD and JWT works"); } }

Blazor BFF

The Blazor applications are implemented using the backend for frontend security architecture. All security is implemented in the backend, and the client requires a secret or a certificate to authenticate. The security data is stored in an encrypted cookie with same-site protection. This is easier to secure than storing tokens in the browser storage, especially since Blazor does not support strong CSPs due to the generated Javascript, and AAD does not support a proper logout for access tokens and refresh tokens stored in the browser. The following blog post explains this in more detail.

Securing Blazor Web assembly using cookies

Microsoft.Identity.Web

The Microsoft.Identity.Web Nuget package is used to implement the Azure AD clients. This setup differs from the documentation: the default schemes need to be set correctly when using cookie (app) authentication and API authentication together. The AddMicrosoftIdentityWebApp method sets up the Blazor authentication for one Azure App registration, using configuration from the AzureAd settings. The AddMicrosoftIdentityWebApi method implements the second Azure App registration for the JWT Bearer token auth, using the AzureAdMyApi settings and the MyJwtApiScheme scheme.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
})
.AddMicrosoftIdentityWebApp(Configuration, "AzureAd", OpenIdConnectDefaults.AuthenticationScheme)
.EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
.AddMicrosoftGraph("https://graph.microsoft.com/beta", "User.ReadBasic.All user.read")
.AddInMemoryTokenCaches();

services.AddAuthentication("MyJwtApiScheme")
    .AddMicrosoftIdentityWebApi(
        Configuration,
        "AzureAdMyApi",
        "MyJwtApiScheme");

app.settings

The ASP.NET Core project uses app.settings and user secrets in development to configure the Azure AD clients. The two Azure App registrations values are added here.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "46d2f651-813a-4b5c-8a43-63abcb4f692c", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/signout-callback-oidc" // "ClientSecret": "add secret to the user secrets" }, "AzureAdMyApi": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "b2a09168-54e2-4bc4-af92-a710a64ef1fa" },

Swagger

Swagger is added to make it easier to view and test the API. A simple UI is created so that you can paste your access token into the UI and test the APIs manually if required. You could also implement a user flow directly in the Swagger UI but then you would have to open up the security headers protection to allow this.

services.AddSwaggerGen(c =>
{
    c.EnableAnnotations();

    // add JWT Authentication
    var securityScheme = new OpenApiSecurityScheme
    {
        Name = "JWT Authentication",
        Description = "Enter JWT Bearer token **_only_**",
        In = ParameterLocation.Header,
        Type = SecuritySchemeType.Http,
        Scheme = "bearer", // must be lower case
        BearerFormat = "JWT",
        Reference = new OpenApiReference
        {
            Id = JwtBearerDefaults.AuthenticationScheme,
            Type = ReferenceType.SecurityScheme
        }
    };
    c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme);
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        { securityScheme, Array.Empty<string>() }
    });

    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "My API",
        Version = "v1",
        Description = "My API"
    });
});

The Swagger middleware is added after the security headers middleware. Some people only add this in development and not in production deployments.

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "MyApi v1");
});

Testing

The UITestClientForApiTest Razor Page application can be used to log in and get an access token to test the API. Before starting this application, the Azure AD configuration in the settings needs to be updated to match your Azure App registration and your tenant. The access token can be used directly in the Swagger UI. The API only accepts delegated access tokens, not client credentials (CC) tokens etc. The configuration in the Blazor server application also needs to match the Azure App registrations in your tenant.
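As a rough sketch of how such a separate client could call the JWT protected API (any HTTP client works): the route comes from the controller above, while the address and token below are placeholders, and the host assumes a default local Kestrel port.

# Sketch of a separate client calling the JWT-protected API; token and address are placeholders.
import requests

access_token = "<delegated access token issued for the AzureAdMyApi app registration>"
resp = requests.get(
    "https://localhost:5001/api/MyApiJwtProtected",   # assumed local address of the Server project
    headers={"Authorization": f"Bearer {access_token}"},
    verify=False,                                     # dev certificate only; never do this in production
)
print(resp.status_code, resp.text)                    # expect 200 and the controller's message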

This setup is good for simple projects where you would like to avoid creating a second deployment, or where you want to re-use a small amount of business logic from the Blazor server. At some stage, it would probably make sense to split the API and the Blazor UI into two separate projects, which would make this security setup simpler again but result in more infrastructure.

Links:

https://github.com/AzureAD/microsoft-identity-web

https://github.com/AzureAD/microsoft-identity-web/wiki/Mixing-web-app-and-web-api-in-the-same-ASP.NET-core-app

Securing Blazor Web assembly using cookies

https://jwt.ms/

Sunday, 03. October 2021

Randall Degges

How I Converted a REST API I Don't Control to GraphQL

I’ve been building and working with REST APIs for many years now. Recently, however, I’ve been spending more and more of my time working with (and building) GraphQL-based APIs. While GraphQL has generally made my life easier, especially as I’ve been building and consuming more data-heavy APIs, there is still one extremely annoying problem I’ve run into over and over again: lack of GraphQL s

I’ve been building and working with REST APIs for many years now. Recently, however, I’ve been spending more and more of my time working with (and building) GraphQL-based APIs.

While GraphQL has generally made my life easier, especially as I’ve been building and consuming more data-heavy APIs, there is still one extremely annoying problem I’ve run into over and over again: lack of GraphQL support for third-party APIs I need to consume.

If I’m trying to use a third-party API service that doesn’t have a GraphQL endpoint, I either need to:

Download, install, configure, and learn how to use their custom REST API clients (which takes a lot of time and means my codebase is now a bit cluttered), or

Build my own GraphQL proxy for their service… But this is a big task. I’ve got to read through all their REST API docs and carefully define GraphQL schemas for everything, learning all the ins and outs of their API as I go.

In short: it’s a lot of work either way, but if I really want to use GraphQL everywhere I have to work for it.

In an ideal world, every API service would have a GraphQL endpoint, this way I could just use a single GraphQL library to query all the API services I need to talk to.

Luckily, one of my favorite developer tools, StepZen (disclaimer: I advise them), has made this problem a lot less painful.

What StepZen Does

StepZen is a platform that lets you host a GraphQL endpoint to use in your applications. But, more importantly, they’ve designed a schema system that lets you import (or build your own) GraphQL wrappers for just about anything: REST APIs, databases, etc. It’s really neat!

Using the StepZen CLI, for example, I can create a new GraphQL endpoint that allows me to talk with the Airtable and FedEx APIs, neither of which support GraphQL natively. The beautiful thing is, I don’t even need to write a GraphQL wrapper for this myself since someone else already did!

Here’s what this looks like (assuming you’ve already got the StepZen CLI installed and initialized):

$ stepzen import airtable
$ stepzen import fedex
$ stepzen start

Using the import command, StepZen will download the publicly available GraphQL schemas for these services (Airtable and FedEx) to your local directory. The start command then launches a GraphQL explorer on localhost, port 5000, which you can use to interactively query the Airtable and FedEx APIs using GraphQL! Pretty neat!

How to Access Public GraphQL Schemas

So, let’s say you want to use GraphQL to query an API service that doesn’t support GraphQL. If you’re using StepZen, you can find all the publicly available (official) schemas on the StepZen Schemas repo on GitHub. This repo is namespaced, so if you see a folder in the project, you can use the stepzen import command on it directly. At the time of writing, there are 24 publicly available GraphQL schemas you can instantly use.

StepZen currently has support for lots of popular developer services: Disqus, GitHub, Shopify, etc.

You can, of course, create your own GraphQL schemas as well.

The Problem I Ran Into with GraphQL

Several months ago I was working on a simple user-facing web app. The entire backend of the app was using GraphQL and I was trying to keep the codebase as pure as possible, which meant not using any REST APIs directly.

In all my years of building web apps, I’ve always used a REST API at some point. So this was a bit of an experiment to see whether or not I could build my app without cluttering my codebase with REST API clients or requests.

As I was working on the app, I went to use one of my favorite free API services, Random User. If you haven’t heard of it before, it’s a publicly available REST API you can hit to generate realistic-looking fake users. It’s incredible for building a development environment, using seed data in an app, or creating real-world-looking MVPs. I use it all the time.

I knew going into this process that the Random User API didn’t have a GraphQL endpoint, so I figured I’d spend some time creating one for them.

How to Convert a REST API to GraphQL

My goal, as I mentioned above, was to build a GraphQL endpoint for the Random User REST API.

To make this work, you translate your REST API so the StepZen service can understand which endpoints you are hitting and what types of inputs and outputs they require.

Once you’ve defined all this, StepZen will be able to create a GraphQL endpoint for your app. When your app queries the StepZen GraphQL endpoint, StepZen will translate your GraphQL request into the proper REST API call, execute the request, then translate the response and send it back to your app in standard GraphQL format.

Here’s how you can convert any REST API to GraphQL using StepZen, step-by-step.

Step 1: Read Through the REST API Docs and Identify Endpoints to Convert

The first part of converting any REST API to GraphQL is to fully understand what endpoints you want to convert from REST to GraphQL.

In my case, the Random User API only has a single endpoint, so this isn’t complicated. But, let’s say you’re converting a large, popular API like Twilio that has many endpoints… This can be much more difficult.

To make things simpler, here’s what I recommend: when converting a REST API to GraphQL, identify the endpoints you actually need and ignore the rest. Trying to build an entire abstraction layer for a massive API like Twilio would be difficult, so instead, focus only on the key endpoints you need to use. In the future, if you need to use additional endpoints, you can always add support for them.

NOTE: This is especially true if you’re using StepZen. It’s incredibly easy to support additional endpoints in StepZen once you have a basic schema going, so don’t feel bad about focusing on the important ones and ignoring the rest.

Following this just-in-time pattern will help save you time and get you back to working on the important stuff.

Step 2: Understand the REST Endpoint’s Schema

Every API endpoint has some sort of schema. For example, the Random User API, when queried, returns a blob of JSON similar to the following:

{ "results": [ { "gender": "male", "name": { "title": "mr", "first": "brad", "last": "gibson" }, "location": { "street": "9278 new road", "city": "kilcoole", "state": "waterford", "postcode": "93027", "coordinates": { "latitude": "20.9267", "longitude": "-7.9310" }, "timezone": { "offset": "-3:30", "description": "Newfoundland" } }, "email": "brad.gibson@example.com", "login": { "uuid": "155e77ee-ba6d-486f-95ce-0e0c0fb4b919", "username": "silverswan131", "password": "firewall", "salt": "TQA1Gz7x", "md5": "dc523cb313b63dfe5be2140b0c05b3bc", "sha1": "7a4aa07d1bedcc6bcf4b7f8856643492c191540d", "sha256": "74364e96174afa7d17ee52dd2c9c7a4651fe1254f471a78bda0190135dcd3480" }, "dob": { "date": "1993-07-20T09:44:18.674Z", "age": 26 }, "registered": { "date": "2002-05-21T10:59:49.966Z", "age": 17 }, "phone": "011-962-7516", "cell": "081-454-0666", "id": { "name": "PPS", "value": "0390511T" }, "picture": { "large": "https://randomuser.me/api/portraits/men/75.jpg", "medium": "https://randomuser.me/api/portraits/med/men/75.jpg", "thumbnail": "https://randomuser.me/api/portraits/thumb/men/75.jpg" }, "nat": "IE" } ], "info": { "seed": "fea8be3e64777240", "results": 1, "page": 1, "version": "1.3" } }

Since the purpose of the Random User API is to return random user data, I can clearly see in the JSON data many of the fields I might be interested in working with: email, cell, username, etc.

Once you’ve clearly identified the fields you want to make use of, move on to the next step.

Step 3: Create a Corresponding GraphQL Schema

Once you’ve figured out which fields you want to make use of, you need to define a GraphQL schema that will tell the StepZen service how to understand the endpoint’s response.

You don’t need to be a wizard to do this, just pick out the important fields (and their types) from a REST API response, then build a corresponding GraphQL type.

For example, here’s the RandomUser type I defined based on the REST API output above:

type RandomUser {
  gender: String!
  title: String!
  firstName: String!
  lastName: String!
  streetNumber: Int!
  streetName: String!
  city: String!
  state: String!
  country: String!
  postcode: String!
  latitude: String!
  longitude: String!
  timezoneOffset: String!
  timezoneDescription: String!
  email: String!
  uuid: String!
  username: String!
  password: String!
  salt: String!
  md5: String!
  sha1: String!
  sha256: String!
  dateOfBirth: Date!
  age: Int!
  registeredDate: DateTime!
  registeredAge: Int!
  phone: String!
  cell: String!
  largePicture: String!
  mediumPicture: String!
  thumbnailPicture: String!
  nationality: String!
}

As you can see, I simply defined the fields I care about from the REST response, along with their type.

NOTE: The exclamation point after many of these fields is to mark these fields as non-nullable. This means that no matter what, these fields should always be present in a response.

What you’ll want to do is create a new folder in your project and put this code into a file named <servicename>.graphql. In my case, I created the following directory tree and put the code into the randomuser.graphql file.

randomuser
└─ randomuser.graphql

Step 4: Define a GraphQL Query

Once your data modeling is complete and you’ve defined your type(s), it’s time to define a special Query type that will tell the StepZen service how to communicate with the REST API you’re converting.

To get started, edit the <servicename>.graphql file and add the following Query definition:

type Query {
  randomUser: RandomUser
    @rest(
      endpoint: "https://randomuser.me/api"
      resultroot: "results[]"
    )
}

This is a barebones query that essentially tells StepZen the randomUser query will return data of type RandomUser (which we defined in the previous step) and will get data by hitting the https://randomuser.me/api endpoint, then parsing the data out of the results JSON array field.

But… That’s not all! We still have more work to do.

When I defined the RandomUser type above, I decided to make all the data flat, and not nested like it was in the original REST API. This is because I don’t need any special nested fields when I generate a random user, I just want the data!

So, the next thing we need to do is explain to StepZen how to convert the nested JSON key/values into the custom schema I defined. To do this, we’ll use the setters REST directive StepZen provides:

type Query {
  randomUser: RandomUser
    @rest(
      setters: [
        { field: "title", path: "name.title" }
        { field: "firstName", path: "name.first" }
        { field: "lastName", path: "name.last" }
        { field: "streetNumber", path: "location.street.number" }
        { field: "streetName", path: "location.street.name" }
        { field: "city", path: "location.city" }
        { field: "state", path: "location.state" }
        { field: "country", path: "location.country" }
        { field: "postcode", path: "location.postcode" }
        { field: "latitude", path: "location.coordinates.latitude" }
        { field: "longitude", path: "location.coordinates.longitude" }
        { field: "timezoneOffset", path: "location.timezone.offset" }
        { field: "timezoneDescription", path: "location.timezone.description" }
        { field: "uuid", path: "login.uuid" }
        { field: "username", path: "login.username" }
        { field: "password", path: "login.password" }
        { field: "salt", path: "login.salt" }
        { field: "md5", path: "login.md5" }
        { field: "sha1", path: "login.sha1" }
        { field: "sha256", path: "login.sha256" }
        { field: "dateOfBirth", path: "dob.date" }
        { field: "age", path: "dob.age" }
        { field: "registeredDate", path: "registered.date" }
        { field: "registeredAge", path: "registered.age" }
        { field: "largePicture", path: "picture.large" }
        { field: "mediumPicture", path: "picture.medium" }
        { field: "thumbnailPicture", path: "picture.thumbnail" }
        { field: "nationality", path: "nat" }
      ]
      endpoint: "https://randomuser.me/api"
      resultroot: "results[]"
    )
}

Using the setters directive, we’re able to tell StepZen which fields in our RandomUser type can be supplied by which nested paths from the raw REST API JSON data. Neat!

There are lots of other things you can define using StepZen, so even if you have an incredibly complex endpoint there are always ways to support it. To learn more about this, you might want to read through the StepZen docs on the subject.

Step 5: Define the API Metadata

Once you’ve finished defining the data model and query, the next thing you’ll want to do is prepare your GraphQL API schema for production!

In my case, I wanted to contribute my schema back to the StepZen developer community, so I forked the stepzen-schemas repo and created a new top-level folder named randomuser in which I put my query/data model from above.

I then wrote up a README and created a stepzen.config.json file which provides details about this schema:

{ "name": "RandomUser", "description": "Generates random users for testing purposes using the randomuser.me API.", "categories": ["Dev Tools"] }

Notice how all I’m providing is a name, description, and any relevant categories this schema fits into.

Finally, I opened a pull request to the StepZen repo and the maintainers tested and merged my schema into the repo so anyone could use it!

Now, if you have no interest in creating a public GraphQL schema for the REST API you’re converting, you can obviously skip this step. So long as you have a folder with your service name and your data model/queries defined, you can use StepZen to run your endpoint regardless of whether or not your schema is publicly available to the world.

Using Your Converted REST API

Once you’ve built a schema that teaches StepZen how to talk to the REST API, how do you actually use it?

Well, in the example above, I built a public-facing StepZen schema and contributed it back to the community. In this case, I can simply use this schema like I would any other:

$ stepzen import randomuser
$ stepzen start

This will download and install the schema I built, then deploy the code in the current directory to my personal StepZen GraphQL endpoint. It’ll also watch the directory for changes and automatically deploy them if I tweak any of the schema code, etc.

In addition to the above, the start command will open a browser window with the StepZen Schema Explorer that will allow you to test your API by exploring the queries and types available and querying the API running on StepZen.

If you want to start querying your new GraphQL endpoint directly from an application, read through the Connecting to Your StepZen API Guide which shows some examples of how to connect/query your new GraphQL API.
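As a rough sketch of what that looks like in practice, here is a plain HTTP call to a deployed endpoint using the randomUser query and RandomUser fields defined earlier. The endpoint URL and API-key header are placeholders; check your StepZen dashboard and the guide above for the exact values your account uses:

// Query the deployed StepZen GraphQL endpoint over plain HTTP.
// The URL and API key below are placeholders for your own account.
const endpoint = "https://yourname.stepzen.net/api/randomuser/__graphql";
const apiKey = "<your-stepzen-api-key>";

const query = `
  query {
    randomUser {
      firstName
      lastName
      email
      city
      nationality
    }
  }
`;

async function fetchRandomUser(): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `apikey ${apiKey}`, // StepZen expects an API key; the exact header format may vary
    },
    body: JSON.stringify({ query }),
  });

  const { data, errors } = await response.json();
  if (errors) {
    throw new Error(JSON.stringify(errors));
  }
  console.log(data.randomUser);
}

fetchRandomUser().catch(console.error);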

Recap: How to Convert a REST API You Don’t Control to GraphQL

If you’re trying to find a way to use GraphQL on a REST API, here’s what you should do.

First, check StepZen’s list of publicly supported schemas to see if a schema already exists. This means you can use GraphQL to access the API of your choice without doing much.

If there isn’t a public schema available, go build your own! To do this:

Figure out what endpoints you need to use
Figure out what data your endpoints take as input/output
Define a GraphQL type to reflect what your endpoints look like in GraphQL
Define a StepZen query to explain how the StepZen service can interact with the REST API

Once you have a schema, you can easily deploy it live to the StepZen service, which will give you a dedicated GraphQL endpoint/explorer you can query. And bam, you’re now able to use GraphQL to talk to any REST API, even ones you don’t own/control/etc!

Friday, 01. October 2021

Werdmüller on Medium

Marriage therapy

He’s the best in town. Continue reading on Medium »

He’s the best in town.

Continue reading on Medium »

Wednesday, 29. September 2021

Identity Praxis, Inc.

Mobile’s Irreplaceable Role: Liminal’s Cameron D’Ambrosi Speaks to the MEF PD&I Working Group

A derived version of this article was published by the Mobile Ecosystem Forum on October 7, 2021.   Have you heard? Mobile is a big deal. The majority of the world’s population carries a mobile device, and there are billions of connected devices integrated throughout nearly every part of our daily life. Why does this […] The post Mobile’s Irreplaceable Role: Liminal’s Cameron D’Ambrosi Spea

A derived version of this article was published by the Mobile Ecosystem Forum on October 7, 2021.

 

Have you heard? Mobile is a big deal. The majority of the world’s population carries a mobile device, and there are billions of connected devices integrated throughout nearly every part of our daily life. Why does this insight matter? Put simply, mobile is the guiding light that leads enterprises down the path of commercial success. Commercial success requires The 3Cs: Connection, Communication, Commerce. One of the four core threads that connect these three stepping stones to commercial success is mobile; the other three are communications, content, and identity (aka personal information). Everyone in an enterprise must understand this, from the c-suite to the staffer. Why? Because today’s connected [mobile] individual is the point of everything along the customer journey. They are the point of discovery. The point of consideration. The point of sales/transaction. The point of engagement. The point of loyalty. The point of exit. And, at each point of engagement, mobile identity plays an irreplaceable role. It is still early days for mobile identity, but the market is shaping up and reshaping quickly.

“We see the mobile device as playing an almost irreplaceable role in terms of this linkage between a physical person in live space, or meatspace, you might say, and the digital realm.” – Cameron D’Ambrosi, Managing Director, Liminal 2021 (MEF PD&I Working Group Guest Speaker, 2021)

Mobile Identity: A Pillar for Commercial Success

Mobile identity sits at the heart of most commercial experiences. Identity can help organizations know, with varying levels of assurance, who is on the other end of a device (aka “know your customer”); likewise, soon but not just yet, it will enable individuals to know the enterprise too (aka “know the sender”). The ability for enterprises and the connected individual to know, with absolute assurance (aka deterministic identity) or with varying degrees of assurance (aka probabilistic identity), who the human or entity is on the other end of a device is extremely important. When we have identity assurance, we can have transparency in our transactions, communications, and relationships. This transparency fosters trust, which in turn leads to lasting connections, effective communications, and commerce. Moreover, this transparency will allow organizations to better serve people at every stage along the customer lifecycle (acquisition, consideration, conversion, adoption, loyalty, and offboarding), while simultaneously meeting their contractual, legal and regulatory obligations and protecting themselves and their stakeholders from fraud and other cybercrime. With the systematic foundation of trust that mobile identity enables, a world of abundance can lie at our feet.

Mobile Ecosystem Forum PD&I Working Group

The Mobile Ecosystem Forum is a global big-tent trade body made of leading cross-sector organizations from around the world. Its mission is to help foster a healthy, responsible, mobile ecosystem, one where mobile can be used as an effective tool to bring social and commercial prosperity to every region, organization, and individual.

It is accomplishing this by engaging in a range of initiatives that generate insights, stimulate interaction, and make a positive impact. Much of the MEF’s work happens at a grassroots level in its workgroups. For example, the Personal Data & Identity Working Group, chaired by Andrew Parkin-White, focuses on mobile identity, the emerging personal data economy, the growth and application of people-centric regulations (e.g. GDPR, CCPA), the annual MEF global trust study, and more. The working group meets monthly to discuss and prioritize the effort across all its initiatives.

MEF PD&I Working Group Guest Speaker

Periodically the working group invites a guest speaker to the monthly working group meeting. On September 20, 2021, the group was fortunate enough to have Cameron D’Ambrosi, Managing Director of Liminal, join the meeting and share his thoughts on the state of the global identity infrastructure market ecosystem, both today and in the years to come.

During his prepared remarks, Cameron shared Liminal’s framework for evaluating identity infrastructure, the “Honeycomb.” He talked through the structure of the Honeycombs, how “anchor” identity solutions sit in the center, adjacent solutions fuse together to support use cases, and various other solutions specialize in the three types of data: deterministic, probabilistic, and self-managed, and their persistent or transitory states. He touched on a number of important topics, including,

The Liminal “Identity Honeycombs” framework
The three data types and their two stages
Digital ID: A How, not a What
Use Case examples
Strategies for navigating the landscape
Risk/reward management
Identity orchestration
And more

Watch Cameron’s Remarks on YouTube, click the image below

 

An Invitation To Lead

The conversation with Cameron went on much longer than what you heard in the recording above. Why? Because much of the content is reserved for MEF members. If you’re a MEF member, I encourage you to log in to the MEF site and listen to the engaging Q&A the working group members had with Cameron. If you’re not a member of the MEF, I encourage you to consider becoming one. Through the MEF, you can gain indispensable insight, interact with people, and make a difference that will benefit you and your company professionally and personally. An important thing to remember is that membership in any community, including the MEF, is like anything in life: you get out of it what you put in. The more you put into your relationship with the MEF and your fellow members, the more you can get out.

I hope you enjoyed this peek into the identity infrastructure marketplace. If you have thoughts and ideas as to where you think the marketplace is going, please do reach out. With the MEF, I’m working on a personal data & identity market assessment report. I’d love to connect with you, see MEF Personal Data & Market Assessment Project (Schedule Interview and Share Insights) to learn about the project, schedule an interview with me, or share your insights and resources.

 

00:00 – Introduction
00:08 – About MEF PD&I Working Group
00:28 – Intro guest speaker: Cameron D’Ambrosi
01:02 – Handoff from Andrew Parkin-White
01:05 – About Liminal
03:19 – Quote: Digital ID must be inclusive
03:54 – End Quote
03:58 – Introducing the “Identity Honeycombs”
04:10 – The Map: On solution positioning
05:17 – The Map: Data type color-coding
05:20 – Introducing the three data types
05:27 – Data type 1: First party deterministic
05:54 – Data type 2: Probabilistic
06:47 – Data type 3: Individual self-managed
07:08 – Importance of data types to strategy
07:23 – Digital ID a How NOT a What
09:25 – Pulling the pieces together: FinTech ex.
11:58 – What it all means: Future of the market
13:38 – Mobile Identity & Device Intelligence
14:34 – Mobile has an irreplaceable role
14:50 – Mobile central to the consumer’s life
16:15 – Kickoff Q&A
16:28 – Becker Q: Diff between ID and PI
16:44 – Personal data: synonym for identity
17:45 – Why this definition is important
18:28 – Linking the types of data
19:58 – On risk/reward balance with CX friction
23:13 – Orchestration is the key
26:04 – Wrap-up and call to action – Join MEF

REFERENCES MEF PD&I Working Group Guest Speaker: Cameron D’Ambrosi on Identity Infrastructure. (2021).

The post Mobile’s Irreplaceable Role: Liminal’s Cameron D’Ambrosi Speaks to the MEF PD&I Working Group appeared first on Identity Praxis, Inc..


Werdmüller on Medium

Planes, trains, and automobiles

Effective mass transit is possible in America. Continue reading on Medium »

Effective mass transit is possible in America.

Continue reading on Medium »

Tuesday, 28. September 2021

Werdmüller on Medium

Mining on the crop-fields

Proof of Effort has set us free Continue reading on Medium »

Proof of Effort has set us free

Continue reading on Medium »

Saturday, 25. September 2021

Equals Drummond

Giving Workona a Second Chance

This is the first post I’ve made in 2 years—for the simple reason that my day job in SSI (the acronym for Self-Sovereign Identity—click the link and read the book if you want to learn more) has been all-consuming (and … Continue reading →

This is the first post I’ve made in 2 years—for the simple reason that my day job in SSI (the acronym for Self-Sovereign Identity—click the link and read the book if you want to learn more) has been all-consuming (and showing no signs of abating).

So that explains why the last post was a rant about an experience with Lyft. It’s blood-boiling incidents like that which move me past the inertia to actually push out a new post.

This post is similar, but with a different ending. Here’s the story:

For at least four years I had been an inveterate user of The Great Suspender browser extension to help save memory due to the dozens of browser tabs I always had open in Chrome.

Then last March Google remotely blocked The Great Suspender because it had been sold to a malware company.

A friend recommended I try Workona because “it does much more than just suspend tabs—it will transform how you work with your browser”.

I did a little research and found several more such breathless endorsements, so I decided to give Workona a try.

It worked exactly as advertised. Within 30 minutes “my life in a browser” was changed forever. I was never going back. (I won’t go into how Workona works here—just check out the many rave reviews.)

I was hooked enough that after a few weeks I tweeted out a love letter to Workona.

.@WorkonaHQ It is rare that a browser utility stands out enough to merit a direct personal endorsement. But Workona is that good. I read so many reviews saying “it will change the way you use your browser” that I wondered if that could really be true. It was. Within minutes.

When they replied to thank me, I responded.

Honestly, you deserve it. Pretty soon I am going to start wondering, “Am I using Workona inside my browser or am I using my browser inside of Workona?”

End of Act 1.

Act 2 began at about 6PM last Tuesday night when I clicked the “New Version – Upgrade” button that appears in Workona when there’s a new update. As usual, it refreshed in seconds.

Suddenly there was a new icon next to many of my Workona workspaces. I thought, “Cool, I wonder what new feature this is?” I clicked one to find out…

…and up popped a new dialog saying the workspace was now “locked” and the only way to unlock it was to buy a premium subscription (to that point Workona has been completely free).

I went ballistic. This fantastic new tool that had become an integral part of using my browser was suddenly blocking my own work in my own workspaces. I started clicking madly on the links provided in the upgrade dialog to find out who was responsible for this outrage. When I found a Workona contact form, I was almost shouting at the screen as I typed. A sampling:

I am beyond upset that after working with Workona for several months now—whose functionality I love and which I have already recommended to several friends and tweeted about—you suddenly, with NO WARNING—lock all but 5 of my workspaces and hold me hostage to upgrade to free them OR require an SSO connection just to export my own data.

That, my good friends, is highly unethical business behavior.

As you can tell, I was livid.

Thankfully, in the time it took me to go take a short walk and blow off steam, Alex Young at Workona sent me a reply via email.

We’re sorry for the frustration this has caused. To start, you should be able to access all of your existing workspaces if you click the “Open workspace” button when you see the Upgrade modal. We are not locking any users out of their workspaces. 

As for SSO, this is just needed to authenticate you for security purposes. You signed in via SSO, and as a result, that’s how we need to verify you. 

Alex was right. Despite what the upgrade dialog said, the “Open workspace” button that was greyed out (the universal signal that a button is not functional) did in fact work if you clicked it. So I wasn’t locked out of all-but-5 of my workspaces. They had just made it look that way.

End of Act 2.

Act 3 began a few hours later after I finished the work I was under deadline for and finally replied to Alex’s email:

Alex, I appreciate the rapid response to my email. I have no idea if Workona has an automatic timer for when that “upgrade prompt” appears for a user or whether Workona just applied the policy today. (If the latter, I worry that you and the rest of the support staff have been flooded with complaints like mine today.)

Honestly—and feel free to share this within the company—I would have been much more supportive if the whole thing had been messaged differently. For example:

First, an advanced notice could have been made that explained the change in policy—there was NO WARNING whatsoever.

Second, the upgrade could have explained that you can still access the rest of your workspaces, you just have to go through the nag screen.

But the best way to handle it would have been to be upfront from the start and explain that the free version supports up to 5 workspaces and beyond that, you have to subscribe.

That began a dialog in several more rounds of email with Alex where he acknowledged my criticisms and explained the rationale behind the upgrade policy. I particularly appreciated this one:

We did send out an email to all of our users that the change would be happening – are you subscribed to receive our emails? 

As for the rest of your feedback, I have passed this along to our product team. 
Also – a little background regarding our current pricing: We have had to overcome immense technical challenges in the development of what we believe is the world’s best browser work manager.

Work has steadily moved from the desktop to the cloud, and many people do almost all of their most important work in the browser these days. For people that work in the browser, $7 is a good match to the value our app provides, and our pricing survey data backs this up. If you don’t work in the browser, or don’t feel you need a professional/reliable work management system, then we certainly understand that a different solution may be a better fit. We hope this helps you better understand why Workona has the premium pricing it does.

It turns out that Workona had indeed sent out an advance notice (in email, not in their browser extension) of their change in policy in late August. I was on vacation, naturally, so I had missed it.

When I found it and read it—it was perfectly reasonable. And, although I thought $7/mo was a little on the high side for Workona (I use Zoom 6+ hours a day and it’s $15/mo), I really did feel that Workona represented the future of work in the browser.

I was going to wait until the next day to decide about upgrading. But before I went to bed, Alex had assuaged me enough—and most importantly restored my belief in the intent and integrity of Workona as a company—that I went ahead and subscribed for a year.

That was a pretty dramatic turnaround in a single evening. Kudos to Alex for his premium customer service. As I said to him in my final message:

Alex, FYI, I subscribed for the 1 year plan. I want to personally thank you for talking me off the ledge. That’s the kind of customer service that makes or breaks a company IMHO.
I have high hopes for Workona. Keep being awesome.

EPILOG

Today there was another upgrade message from Workona. When I clicked on it, a screen popped up (I had the foresight to take a screenshot) that offered an apology directly to Workona users right there in the browser. It contained a link to this page on the Workona website, the start of which I’ll excerpt here:

An Apology From Our CEO

September 24, 2021

This week, we made some serious missteps while launching Workona’s premium plans. Please allow me to personally apologize and explain exactly what we’ve done to make it right.

We assumed that a detailed email sent a month ago announcing the changes was enough advance notice for users. Clearly, we were wrong. Your feedback has made it obvious that this was a major mistake. We should have announced this in the product, multiple times. We messed up, and we’re sorry.

Many users were caught off guard by the restrictions of the Free plan and wondered whether Workona was still a good option for them. This was exacerbated by the unclear language we used in our popup, which made them believe they were locked out of their workspaces. This was not the case (and not our intention), but that doesn’t change the damage it did to our users’ trust in Workona.

There’s more. I encourage you to go to the Workona website and read the rest.

I took the time to write this post—my first one in two years—because when you screw up as a company—no matter how big or small—this is how you make it right. You bite the bullet, admit your mistake, and fix it, no matter how much work it takes.

The faster and more transparently you do it, the faster you repair the damage and start restoring faith in your company.

Good job, Workona. (And great job, Alex.) My faith in you has been restored, and I’ll continue to recommend you as a browser work management tool. Keep growing it into something even more amazing.


Kyle Den Hartog

Comparing VCs to ZCAP-LD

Verifiable Credentials are well suited for provenanced statements. ZCAP-LD is great for distributed authorization systems.

A few months back, I wrote about some of the edge points of the verifiable credentials data model and briefly mentioned that the authorization capabilities for linked data (ZCAP-LD) data model was a useful technology for addressing these edges for building better distributed authorization systems. So what are the differences between these two data models and why am I advocating for the separation of concerns for the two of them? The three main points I want to highlight are the conceptual differences in usages and how the data models assist with enforcing those concepts. Then I’ll highlight the differences in the data models and finally, I’ll connect it all to show how these differences make a difference when building distributed authorization systems.

Conceptually verifiable credentials exist to make provenanced assertions. What does this mean though? Basically, a verifiable credential is a way to know who is making the claims, what the claims are, and who the claims are about and it’s done in a standard way so that all parties who issue, hold, and verify the provenance of the claims can do so without having to pre-coordinate their software design. On the other hand, authorization capabilities (ZCAPs) are designed around removing the concept of who the statements are about and rather building around the concept of an “invoker”. And it’s this difference that allows for an emergence of a different authorization system called object capabilities authorization systems. It’s my personal opinion that object-capability systems are better for most security models being designed today due to the simplicity for relying parties, but we’ll have to save that discussion for a later post.

What’s the difference between a subject and an invoker then? The difference is all in the practicality of pairing the claims with the entity. In a verifiable credential, tying all the claims to a subject creates an inherent coupling of the claims to an identity by the issuer. In technical terms, the claims are paired to an identifier, the “credentialSubject.id” property, which means that the claims should only be extended to a single entity: the subject of the claims. Anything which extends beyond the original claims made by the issuer is a new credential that needs to be evaluated independently of the original veracity of the claims made by the first issuer.

For example, let’s look at a proof of assets credential:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", { "@vocab": "https://bank.com/vocab#" } ], "id": "http://example.edu/credentials/1872", "type": ["VerifiableCredential", "ProofOfAssetsCredential"], "issuer": "did:web:bank.com:branch:id:565049", "issuanceDate": "2021-06-04T20:50:09Z", "credentialSubject": { "id": "did:example:bankAccountOwner", "TotalAccountValueInUSD": 4156.62, }, "proof": { "type": "Ed25519Signature2020", "created": "2021-06-04T20:50:29Z", "verificationMethod": "did:example:bankAccountOwner#key-0", "proofPurpose": "authentication", "proofValue": "..." } }

In this example, we can see that a bank branch is asserting on behalf of the bank that the bank account owner has a total account value with this bank of $4156.62. Now, let’s say the bank account owner has decided to chain their credential to their child so that their child can spend up to $50:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", { "@vocab": "https://bank.com/vocab#" } ], "id": "http://example.edu/credentials/1872", "type": ["VerifiableCredential", "ProofOfAssetsCredential"], "issuer": "did:example:bankAccountOwner", "issuanceDate": "2010-01-01T20:54:24Z", "credentialSubject": { "id": "did:example:child", "TotalAccountValueInUSD": 50.00, }, "proof": { "type": "Ed25519Signature2020", "created": "2017-06-18T21:19:10Z", "proofPurpose": "assertionMethod", "verificationMethod": "did:example:bankAccountOwner#key1", "proofValue": "..." } }

and with this the child is now able to create a verifiable presentation like this:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", { "@vocab": "https://bank.com/vocab" } ], "id": "did:example:76e12ec21ebhyu1f712ebc6f1z2", "type": ["VerifiablePresentation"], "verifiableCredential": [ { "@context": [ "https://www.w3.org/2018/credentials/v1", { "@vocab": "https://bank.com/vocab#" } ], "id": "http://example.edu/credentials/1872", "type": ["VerifiableCredential", "ProofOfAssetsCredential"], "issuer": "did:web:bank.com:branch:id:565049", "issuanceDate": "2021-06-04T20:50:09Z", "credentialSubject": { "id": "did:example:bankAccountOwner", "TotalAccountValueInUSD": 4156.62, }, "proof": { "type": "Ed25519Signature2020", "created": "2021-06-04T20:50:29Z", "verificationMethod": "did:example:bankAccountOwner#key-0", "proofPurpose": "authentication", "proofValue": "..." } }, { "@context": [ "https://www.w3.org/2018/credentials/v1", { "@vocab": "https://bank.com/vocab#" } ], "id": "http://example.edu/credentials/1872", "type": ["VerifiableCredential", "ProofOfAssetsCredential"], "issuer": "did:example:bankAccountOwner", "issuanceDate": "2010-01-01T20:54:24Z", "credentialSubject": { "id": "did:example:child", "TotalAccountValueInUSD": 50.00, }, "proof": { "type": "Ed25519Signature2020", "created": "2017-06-18T21:19:10Z", "proofPurpose": "assertionMethod", "verificationMethod": "did:example:bankAccountOwner#key1", "proofValue": "..." } }], "proof": { "type": "Ed25519Signature2020", "created": "2019-12-11T03:50:55Z", "proofValue": "....", "challenge": "c0ae1c8e-c7e7-469f-b252-86e6a0e7387e", "proofPurpose": "authentication", "verificationMethod": "did:example:assistant#key1" } }

Let’s break down what is being asserted here, who the verifier needs to trust, and how the verifier would go about processing this verifiable presentation. From what we can see here, the bank is asserting that the bank account owner has a total account value of $4156.62, and the bank account owner is asserting that their child is authorized to spend $50.00. Seems to make sense, right? The problem is that the verifier is expected to trust that the bank account owner actually has that money and has authorized the child to spend it, but why should the verifier trust the bank account owner at all? From their perspective, there are two problems here. First, they don’t trust any random person to make assertions about the amount of money in another person’s bank account. Second, the verifier is unable to correlate the relationship between the bank account owner and the child. From their perspective, the two credentialSubject.id values should be expected to be opaque identifiers, assuming all they’ve received is this verifiable presentation. So, the problem here is that the verifier is expected to infer knowledge about what is being stated, which presents risks in the business processes being encoded into software. Are they to assume that they’ll be paid by the bank account owner if the child doesn’t pay? The reality is the verifier shouldn’t rely upon this verifiable presentation because there’s no way to chain the trust and figure out the capabilities that the child should have based only on this verifiable presentation. Sure, they could manage some state about the relationships of the identifiers and coordinate with the bank to know that they’ll make sure someone pays, but that leads to bespoke authorization systems built specifically for the business logic, which is unlikely to scale to the size of the internet today within a reasonable time frame. Coordination costs are incredibly expensive (look at how long it takes to make a standard) and managing state makes it nearly impossible to generalize the system outside the particular business process it’s designed for.

How do ZCAPs address this problem then? First off, it’s important to recognize that the model of capabilities lends itself to different statements being made. In the verifiable credential model, traditionally most people using them are making claims about “what the subject is”. Whereas with ZCAP-LD I’ve seen the focus of the statements on what the invoker can do. This slight shift, combined with the change from a subject to an invoker, simplifies what the verifier needs to check and makes it more explicit what each check is for.

Let’s take a look at an example to see what I mean by this:

{ "@context": ["https://w3id.org/security/v2", { "@vocab": "https://bank.com/vocab#" }], // The identifier of this specific zcap object "id": "https://example.com/zcap/id/1", // Since this is the first delegated capability, the parentCapability // points to the target this capability will operate against // (in this case, a payment gateway) "parentCapability": "https://example.com/paymentGateway", // the bank account owner is only allowed to spend the amount in their account "caveat": [{ "type": "MaxPayment", "maxPaymentValue": 4156.62 }], // We are granting authority to any of bankAccountOwner's verification methods "invoker": "did:example:bankAccountOwner", // Finally we sign this object with cryptographic material from // Alyssa's Car's capabilityDelegation field, and using the // capabilityDelegation proofPurpose. "proof": { "type": "Ed25519Signature2018", "created": "2018-02-13T21:26:08Z", "capabilityChain": [ "https://example.com/paymentGateway" ], "proofValue": "...", "proofPurpose": "capabilityDelegation", "verificationMethod": "did:web:bank.com:branch:id:565049#key1" } }

Now for the bank account owner to authorize their child to spend $50.00 they need to produce the following delegated capability:

{ "@context": ["https://w3id.org/security/v2", { "@vocab": "https://bank.com/vocab#" }], // The identifier of this specific delegated zcap object "id": "https://example.com/zcap/id/2", // Pointing up the chain at the capability from // which the bank Account Owner was initially // granted authority by their bank "parentCapability": "https://example.com/zcap/id/1", // Bank account owner adds a caveat: // child can spend a maximum of 50.00 "caveat": [{ "type": "MaxPayment", "maxPaymentValue": 50.00 }], // bank account owner grants authority to any // verification method of their child's DID "invoker": "did:example:child", // Finally the bank account owner signs this // object with the key she was granted the // authority with "proof": { "type": "Ed25519Signature2020", "proofPurpose": "capabilityDelegation", "created": "2018-02-13T21:26:08Z", "creator": "did:example:bankAccountOwner#key1", "signatureValue": "..." } }

Now for the ZCAP to be invoked by the child at the payment gateway to spend $50 they need to produce a ZCAP that looks like the following:

{ "@context": ["https://w3id.org/security/v2", { "@vocab": "https://bank.com/vocab#" }], // The identifier of this specific delegated zcap object "id": "https://example.com/zcap/id/3", // Pointing up the chain at the capability from // which the bank Account Owner was initially // granted authority by their bank "parentCapability": "https://example.com/zcap/id/2", // bank account owner grants authority to any // verification method of their child's DID "invoker": "did:example:child", // the chain of capabilities from the trusted entity // making the intial authorization to the current // capability being invoked "capabilityChain": [{ "@context": ["https://w3id.org/security/v2", { "@vocab": "https://bank.com/vocab#" }], "id": "https://example.com/zcap/id/1", "parentCapability": "https://example.com/paymentGateway", "caveat": [{ "type": "MaxPayment", "maxPaymentValue": 4156.62 }], "invoker": "did:example:bankAccountOwner", "proof": { "type": "Ed25519Signature2018", "created": "2018-02-13T21:26:08Z", "capabilityChain": [ "https://example.com/paymentGateway" ], "proofValue": "...", "proofPurpose": "capabilityDelegation", "verificationMethod": "did:web:bank.com:branch:id:565049#key1" } }, { "@context": ["https://w3id.org/security/v2", { "@vocab": "https://bank.com/vocab#" }], "id": "https://example.com/zcap/id/2", "parentCapability": "https://example.com/zcap/id/1", "caveat": [{ "type": "MaxPayment", "maxPaymentValue": 50.00 }], "invoker": "did:example:child", "proof": { "type": "Ed25519Signature2020", "proofPurpose": "capabilityDelegation", "created": "2018-02-13T21:26:08Z", "creator": "did:example:bankAccountOwner#key1", "signatureValue": "..." } }], "proof": { "type": "Ed25519Signature2020", "proofPurpose": "capabilityInvocation", "created": "2018-02-13T21:27:09Z", "creator": "did:example:child#key1", "signatureValue": "..." } }

To verify this ZCAP, the payment gateway verifier needs to do the following:

verify the proof of the original capability
verify the original invoker is authorized to delegate the original capability on behalf of the payment gateway
verify the original capability meets or exceeds the authorities necessary to perform the request
verify the proof that the bank account owner delegated the original capability to the invoker
verify the caveats allow the delegated capability to still meet or exceed the authorities necessary to perform the request
verify the proof in the capability invoked by the child

So this is a bit simpler for the payment gateway verifying the chain and makes it explicit what’s being delegated through the different parties. The question of the relationship between the different parties has now been eliminated, which means that the payment gateway can operate in a stateless manner. Additionally, there’s less coordination between the original delegator and the payment gateway verifier. In the object-capability model, the authorization semantics are all predetermined by what can be authorized, which keeps the claims tightly scoped to the purpose of the capabilities being invoked.
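To make the shape of that check concrete, here is a rough sketch of the verification loop for the steps listed above. The Zcap shape, the caveat check, and the injected proof-checking function are illustrative stand-ins, not part of any real ZCAP-LD library:

// Hypothetical sketch of a payment gateway verifying an invoked ZCAP chain.
interface Zcap {
  id: string;
  parentCapability: string;
  invoker: string;
  caveat?: { type: string; maxPaymentValue?: number }[];
  proof: object;
}

// Cryptographic proof verification is assumed to be supplied by the caller.
type ProofChecker = (doc: object, expectedSigner: string) => boolean;

// A caveat list is satisfied if every MaxPayment caveat allows the requested amount.
function caveatsSatisfied(caveats: Zcap["caveat"], requestedAmount: number): boolean {
  return (caveats ?? []).every(
    c => c.type !== "MaxPayment" || (c.maxPaymentValue ?? 0) >= requestedAmount
  );
}

function verifyInvocation(
  invocation: Zcap & { capabilityChain: Zcap[] },
  rootTarget: string,       // e.g. "https://example.com/paymentGateway"
  rootController: string,   // the bank key the gateway already trusts
  requestedAmount: number,
  checkProof: ProofChecker
): boolean {
  const [root, ...delegations] = invocation.capabilityChain;
  if (!root) return false;

  // 1. The root capability must target our resource, be signed by its trusted
  //    controller, and grant enough authority for the request.
  if (root.parentCapability !== rootTarget) return false;
  if (!checkProof(root, rootController)) return false;
  if (!caveatsSatisfied(root.caveat, requestedAmount)) return false;

  // 2. Each delegation must chain to its parent, be signed by the parent's invoker,
  //    and still satisfy the (possibly narrower) caveats.
  let parent = root;
  for (const cap of delegations) {
    if (cap.parentCapability !== parent.id) return false;
    if (!checkProof(cap, parent.invoker)) return false;
    if (!caveatsSatisfied(cap.caveat, requestedAmount)) return false;
    parent = cap;
  }

  // 3. The invocation itself must be signed by the final invoker in the chain.
  return checkProof(invocation, parent.invoker);
}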

The difference here is that the basis of all authority stems from the verifier, rather than building off statements made by issuers who existed before the verifier did and who may not have designed the original claims for authorization systems. This key difference is what allows the verifier system to stay simple. It moves the model back to having the sole authority set by the verifier, rather than the verifier having to adapt to, and potentially misinterpret, the original veracity and the chained veracity of the claims.

So what are the drawbacks of the object-capability paradigm and more specifically of the ZCAP-LD data model? Well first and foremost, ZCAPs are fit for building object-capability authorization models and are unlikely to be applicable beyond that intended scope. For me, that’s alright because I like the idea of designing technology to do one thing well. Especially when dealing with security and access control. Whereas the verifiable credentials data model allows for generic statements to be made which are far more useful for verifiable data ecosystems to emerge. Additionally, it’s worth mentioning that the ZCAPs data model is a relatively immature draft report at the credentials community group whereas the verifiable credentials data model has a much more stable recommendation having completed the process to become an international standard recommended by the W3C. This means there’s far more likely to be interoperability between verifiable credential processors than there would be between ZCAPs.

Why put the time and effort into ZCAP-LD when we’ve already got VCs? Simply put, because security is hard, and trying to push square pegs into round holes oftentimes leads to bugs that get elevated to mission-critical authentication/authorization bypass vulnerabilities. Designing around a fit-for-purpose data model with a well-defined problem to solve lets us be much more precise about where we believe extensibility is important versus where normative statements should be made to simplify the processing of the data models. By extension, this leads to a simpler security model and likely a much more robust design with fewer vulnerabilities. And that’s the reason that I’m a fan of ZCAP-LD.

Wednesday, 22. September 2021

MyDigitalFootprint

Where do utopia and dystopia collide?

Can we live at the peak paradox? Peak Paradox is the middlemost point of the model. It is the point where everything has equal weight in terms of policy, priority, resources, commitment, interest, data and consequences.  It is the area of a perfect storm.  It is a magical (or imaginary) place where you can have everything, everyone else can have everything, no one is fighting f

Can we live at the peak paradox?

Peak Paradox is the middlemost point of the model. It is the point where everything has equal weight in terms of policy, priority, resources, commitment, interest, data and consequences. It is the area of a perfect storm. It is a magical (or imaginary) place where you can have everything, everyone else can have everything, no one is fighting for survival, and every work situation thrives. In reality, the converse is probably also true. It is where everyone fights for survival; I cannot get what I want, others have nothing and work is just meaningless. It is in between these two states of euphoria and despair that we spend our time. Never reaching utopia but somehow constantly feeling one step closer to dystopia. In this state, we have to decide how to allocate the limited resources we have to be able to change the situation (for the better).

Whilst hard to accept, we actually don’t have the data, model or ability to allocate resources for better outcomes as we live in a complex system. In our ecosystem and economy, our actions have consequences that we cannot see. In the USA, they reintroduced wolves and have found that wolves play a crucial role in keeping ecosystems healthy. They help keep deer and elk populations in check, benefiting many other plant and animal species. The carcasses of their prey also help to redistribute nutrients and provide food for other wildlife species, like grizzly bears and scavengers. Scientists are just beginning to fully understand the positive ripple effects that wolves have on ecosystems. The removal of the wolves was forecast to create a more vibrant and healthy ecosystem, but it turns out it collapses without them.

The Ripple Effect is when an initial disturbance to a system propagates outward to disturb an increasingly more significant portion of the system, like ripples expanding across the water when an object is dropped into it.  The ripple effect is often used colloquially to mean a multiplier in macroeconomics. For example, an individual's reduction in spending reduces the incomes of others and their ability to spend. Unions negotiating for a pay rise drives prosperity for everyone. In sociology, the ripple effect can be observed in how social interactions can affect situations not directly related to the initial interaction and in charitable activities where information can be disseminated and passed from community to community to broaden its impact.  The concept has been applied in computer science within the field of software metrics as a complexity measure.

Living at peak paradox and making decisions at this central point creates ripples. However, we find it easy to predict the positive consequences of ripples we want to see in the direction of the peak purpose we are aiming for on the Peak Paradox model. We are blinded to the adverse effects because we believe we have understood the risks and concluded this is the best action, given where we are and our resources. Equally, we find it near impossible to see any good or bad consequences that can occur in other directions. The point here is that we cannot see how a ripple in a different direction will come back (forward) and change the situation we forecast. This is the complexity of modelling dependent systems that we do not understand, while believing we have a risk model that does.

The purpose of the Peak Paradox model is to shine a light on the other views, other perspectives, other consequences, other risks - not to change the decision, but to clarify how our decision will affect others and how we are affected by theirs. With the same thinking, others acting with their own agency will optimise for a different peak and, in doing so, create ripples in our tank, highlighting why resilience has become such an important but widely misunderstood topic. We must agree on what we are trying to be resilient for, as it is often hidden.

The article on how to make better decisions using the Peak Paradox framework suggests that we have to move away from Peak Paradox if we can clarify what we are optimising for and bring the team together. However, the further we become aligned to a Purpose, the easier the decision and the more unified the culture, but the less resilient we become to others’ ripples.

Balancing simplicity of decision making with the allocation of resources and an understanding of risk and consequences is the optimisation problem a board now has to wrestle with. Decision making has never been more challenging.




Monday, 20. September 2021

Phil Windley's Technometria

JSON is Robot Barf

Summary: JSON has its place. But I think we're overusing it in places where a good notation would serve us better. JSON is robot barf. Don't get me wrong. JSON is a fine serialization format for data and I have no problem with it in that context. My beef is with the use of JSON for configuration files, policy specs, and so on. If JSON were all we had, then we'd have to live with it. But

Summary: JSON has its place. But I think we're overusing it in places where a good notation would serve us better.

JSON is robot barf. Don't get me wrong. JSON is a fine serialization format for data and I have no problem with it in that context. My beef is with the use of JSON for configuration files, policy specs, and so on. If JSON were all we had, then we'd have to live with it. But we've been building parsers for almost 70 years now. The technology is well understood. There are multiple libraries in every language for parsing. And yet, even very mature, well supported frameworks and platforms persist in using JSON instead of a human-friendly notation.

When a system requires programmers to use JSON, it's effectively asking developers to use an "abstract" syntax instead of a "concrete" syntax. Here's what I mean. This is a function definition in concrete syntax:
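(The original post shows the next two snippets as images; the versions below are representative stand-ins rather than the post's exact example.)

function fahrenheitToCelsius(f: number): number {
  return (f - 32) * 5 / 9;
}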

And here's the same function definition expressed as an abstract syntax tree (AST) serialized as JSON:
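Again as an illustrative stand-in, with node names invented for this example rather than taken from any particular parser:

{
  "type": "FunctionDeclaration",
  "name": "fahrenheitToCelsius",
  "params": [{ "type": "Parameter", "name": "f", "paramType": "number" }],
  "returnType": "number",
  "body": [{
    "type": "ReturnStatement",
    "argument": {
      "type": "BinaryExpression",
      "operator": "/",
      "left": {
        "type": "BinaryExpression",
        "operator": "*",
        "left": {
          "type": "BinaryExpression",
          "operator": "-",
          "left": { "type": "Identifier", "name": "f" },
          "right": { "type": "NumericLiteral", "value": 32 }
        },
        "right": { "type": "NumericLiteral", "value": 5 }
      },
      "right": { "type": "NumericLiteral", "value": 9 }
    }
  }]
}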

I don't know any programmer who'd prefer to write the abstract syntax instead of the concrete. Can you imagine an entire program expressed like that? Virtually unreadable and definitely not maintainable. Parsing can take as much as 20% of the time taken to compile code, so there's a clear performance win in using abstract syntax over concrete, but even so we, correctly, let the machine do the work.

I get that systems often start out with simple configuration. Some inputs are just hierarchies of data. But that often gets more complicated over time. And spending time figuring out the parser when you're excited to just get it working can feel like a burden. But taking the shortcut of making developers and others write the configuration in abstract syntax instead of letting the computer do the work is a mistake.

I'd like to say that the problem is that not enough programmers have a proper CS education, but I fear that's not true. I suspect that even people who've studied CS aren't comfortable with parsing and developing notations. Maybe it's because we treat the subject too esoterically—seemingly useful for people designing a programming language, but not much else. And students pick up on that and figure this is something, like calculus, they're unlikely to ever use IRL. What if programming language classes helped students learn the joy and benefit of building little languages instead?

I'm a big believer in the power of notation. And I think we too often shy away from designing the right notation for the job. As I wrote about Domain Specific Languages (DSLs) in 2007:

I'm in the middle of reading Walter Isaacson's new biography of Einstein. It's clear that notation played a major role in his ability to come up with the principle of general relativity. He demurred at first, believing that the math was for someone else to come along later and tidy up. But later in his life, after the experience of working on general relativity, Einstein became an ardent convert.

Similarly, there is power in notation for computing tasks. Not merely the advantage of parameterized execution but in its ability to allow us to think about problems, express them so that others can clearly and unambiguously see our thoughts, and collaborate to create joint solutions. What's more, languages can be versioned. GUI configurations are hard to version. Notation has advantages even when it's not executed.

The DSL becomes the focal point for design activities. The other day, I was having a discussion with three friends about a particular feature. Pulling out pencil and paper and writing what the DSL would need to look like to support the feature helped all of us focus and come up with solutions. Without such a tool, I'm not sure how we would have communicated the issues or whether we'd have all had the same conception of them and the ultimate solution we reached.

As this points out, clear notations have advantages beyond being easier to write and understand. They also provide the means to easily share and think about the problem. I think system designers would be better off if we spent more time thinking about the notation developers will use when they configure and use our systems, making it clear and easy to read and write. Good notation is a thinking tool, not just a way to control the system. The result will be increased expressiveness, design leverage, and freedom.

Photo Credit: Sick Android from gfk DSGN (Pixabay)

Tags: programming systems json parsing


Damien Bod

Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph

This article shows how to create Microsoft Teams online meetings in ASP.NET Core using Microsoft Graph. Azure AD is used to implement the authentication using Microsoft.Identity.Web and the authenticated user can create teams meetings and send emails to all participants or attendees of the meeting. Code: https://github.com/damienbod/TeamsAdminUI Blogs in this series Creating Microsoft Teams meeting

This article shows how to create Microsoft Teams online meetings in ASP.NET Core using Microsoft Graph. Azure AD is used to implement the authentication using Microsoft.Identity.Web and the authenticated user can create teams meetings and send emails to all participants or attendees of the meeting.

Code: https://github.com/damienbod/TeamsAdminUI

Blogs in this series

- Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph (delegated)
- Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph application permissions part 2

Setup Azure App registration

An Azure App registration is set up to authenticate against Azure AD. The ASP.NET Core application will use delegated permissions for Microsoft Graph. The permissions listed below are required to create the Teams meetings and to send emails to the attendees. The account used to log in needs access to Office and should be able to send emails.

- User.Read
- Mail.Send
- Mail.ReadWrite
- OnlineMeetings.ReadWrite

This is the list of permissions I have activated for this demo.

The Azure App registration requires a user secret or a certificate to authenticate the ASP.NET Core Razor page application. Microsoft.Identity.Web uses this to authenticate the application. You should always authenticate the application if possible.

Setup ASP.NET Core application

The Microsoft.Identity.Web Nuget packages with the MicrosoftGraphBeta package are used to implement the Azure AD client. We want to implement Open ID Connect code flow with PKCE and a secret to authenticate the identity, and the Microsoft packages implement this client for us.

<ItemGroup>
  <PackageReference Include="Microsoft.Identity.Web" Version="1.16.1" />
  <PackageReference Include="Microsoft.Identity.Web.UI" Version="1.16.1" />
  <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraphBeta" Version="1.16.1" />
</ItemGroup>

The ConfigureServices method is used to add the required services for the Azure AD client authentication and the Microsoft Graph client for the API calls. The AddMicrosoftGraph is used to initialize the required permissions.

public void ConfigureServices(IServiceCollection services)
{
    // more services ...

    var scopes = "User.read Mail.Send Mail.ReadWrite OnlineMeetings.ReadWrite";

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddMicrosoftGraph("https://graph.microsoft.com/beta", scopes)
        .AddInMemoryTokenCaches();

    services.AddRazorPages().AddMvcOptions(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).AddMicrosoftIdentityUI();
}

The AzureAd configuration is read from the app.settings file. The secrets are read from the user secrets in local development.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodsharepoint.onmicrosoft.com", "TenantId": "5698af84-5720-4ff0-bdc3-9d9195314244", "ClientId": "a611a690-9f96-424f-9ea5-4ba99a642c01", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/signout-callback-oidc" // "ClientSecret": "add secret to the user secrets" },

Creating a Teams meeting using Microsoft Graph

The OnlineMeeting class from Microsoft.Graph is used to create the Teams meeting. In this demo, we added a begin and an end DateTime in UTC and the name (Subject) of the meeting. We want all invited attendees to be able to bypass the lobby and enter the meeting directly. This is implemented with the LobbyBypassSettings property. The attendees are added to the meeting using the Upn property, setting this to the email of each attendee. The organizer is automatically set to the signed-in identity.

public OnlineMeeting CreateTeamsMeeting(
    string meeting, DateTimeOffset begin, DateTimeOffset end)
{
    var onlineMeeting = new OnlineMeeting
    {
        StartDateTime = begin,
        EndDateTime = end,
        Subject = meeting,
        LobbyBypassSettings = new LobbyBypassSettings
        {
            Scope = LobbyBypassScope.Everyone
        }
    };

    return onlineMeeting;
}

public OnlineMeeting AddMeetingParticipants(
    OnlineMeeting onlineMeeting, List<string> attendees)
{
    var meetingAttendees = new List<MeetingParticipantInfo>();
    foreach(var attendee in attendees)
    {
        if(!string.IsNullOrEmpty(attendee))
        {
            meetingAttendees.Add(new MeetingParticipantInfo
            {
                Upn = attendee.Trim()
            });
        }
    }

    if(onlineMeeting.Participants == null)
    {
        onlineMeeting.Participants = new MeetingParticipants();
    };

    onlineMeeting.Participants.Attendees = meetingAttendees;

    return onlineMeeting;
}

A simple service is used to implement the GraphServiceClient instance which is used to send the Microsoft Graph requests. This uses the Microsoft Graph as described by the docs.

public async Task<OnlineMeeting> CreateOnlineMeeting(
    OnlineMeeting onlineMeeting)
{
    return await _graphServiceClient.Me
        .OnlineMeetings
        .Request()
        .AddAsync(onlineMeeting);
}

public async Task<OnlineMeeting> UpdateOnlineMeeting(
    OnlineMeeting onlineMeeting)
{
    return await _graphServiceClient.Me
        .OnlineMeetings[onlineMeeting.Id]
        .Request()
        .UpdateAsync(onlineMeeting);
}

public async Task<OnlineMeeting> GetOnlineMeeting(
    string onlineMeetingId)
{
    return await _graphServiceClient.Me
        .OnlineMeetings[onlineMeetingId]
        .Request()
        .GetAsync();
}

A Razor page is used to create a new Microsoft Teams online meeting. The two services are added to the class and a HTTP Post method implements the form request from the Razor page. This method creates the Microsoft Teams meeting using the services and redirects to the created Razor page with the ID of the meeting.

[AuthorizeForScopes(Scopes = new string[] { "User.read", "Mail.Send", "Mail.ReadWrite", "OnlineMeetings.ReadWrite" })]
public class CreateTeamsMeetingModel : PageModel
{
    private readonly AadGraphApiDelegatedClient _aadGraphApiDelegatedClient;
    private readonly TeamsService _teamsService;

    public string JoinUrl { get; set; }

    [BindProperty]
    public DateTimeOffset Begin { get; set; }
    [BindProperty]
    public DateTimeOffset End { get; set; }
    [BindProperty]
    public string AttendeeEmail { get; set; }
    [BindProperty]
    public string MeetingName { get; set; }

    public CreateTeamsMeetingModel(AadGraphApiDelegatedClient aadGraphApiDelegatedClient,
        TeamsService teamsService)
    {
        _aadGraphApiDelegatedClient = aadGraphApiDelegatedClient;
        _teamsService = teamsService;
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        var meeting = _teamsService.CreateTeamsMeeting(MeetingName, Begin, End);
        var attendees = AttendeeEmail.Split(';');
        List<string> items = new();
        items.AddRange(attendees);
        var updatedMeeting = _teamsService.AddMeetingParticipants(
            meeting, items);

        var createdMeeting = await _aadGraphApiDelegatedClient.CreateOnlineMeeting(updatedMeeting);
        JoinUrl = createdMeeting.JoinUrl;

        return RedirectToPage("./CreatedTeamsMeeting", "Get", new { meetingId = createdMeeting.Id });
    }

    public void OnGet()
    {
        Begin = DateTimeOffset.UtcNow;
        End = DateTimeOffset.UtcNow.AddMinutes(60);
    }
}

Sending Emails to attendees using Microsoft Graph

The Created Razor page displays the meeting JoinUrl and some details of the Teams meeting. The page implements a form which can send emails to all the attendees using Microsoft Graph. The EmailService class implements the email logic to send plain-text or HTML emails using Microsoft Graph.

using Microsoft.Graph;
using System;
using System.Collections.Generic;
using System.IO;

namespace TeamsAdminUI.GraphServices
{
    public class EmailService
    {
        MessageAttachmentsCollectionPage MessageAttachmentsCollectionPage = new();

        public Message CreateStandardEmail(string recipient, string header, string body)
        {
            var message = new Message
            {
                Subject = header,
                Body = new ItemBody
                {
                    ContentType = BodyType.Text,
                    Content = body
                },
                ToRecipients = new List<Recipient>()
                {
                    new Recipient
                    {
                        EmailAddress = new EmailAddress
                        {
                            Address = recipient
                        }
                    }
                },
                Attachments = MessageAttachmentsCollectionPage
            };

            return message;
        }

        public Message CreateHtmlEmail(string recipient, string header, string body)
        {
            var message = new Message
            {
                Subject = header,
                Body = new ItemBody
                {
                    ContentType = BodyType.Html,
                    Content = body
                },
                ToRecipients = new List<Recipient>()
                {
                    new Recipient
                    {
                        EmailAddress = new EmailAddress
                        {
                            Address = recipient
                        }
                    }
                },
                Attachments = MessageAttachmentsCollectionPage
            };

            return message;
        }

        public void AddAttachment(byte[] rawData, string filePath)
        {
            MessageAttachmentsCollectionPage.Add(new FileAttachment
            {
                Name = Path.GetFileName(filePath),
                ContentBytes = EncodeTobase64Bytes(rawData)
            });
        }

        public void ClearAttachments()
        {
            MessageAttachmentsCollectionPage.Clear();
        }

        static public byte[] EncodeTobase64Bytes(byte[] rawData)
        {
            string base64String = System.Convert.ToBase64String(rawData);
            var returnValue = Convert.FromBase64String(base64String);
            return returnValue;
        }
    }
}

The CreatedTeamsMeetingModel class is used to implement the Razor page logic to display some meeting details and send emails using a form post request. The OnGetAsync uses the meetingId to request the Teams meeting using Microsoft Graph and displays the data in the UI. The OnPostAsync method sends emails to all attendees.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using System.Threading.Tasks;
using TeamsAdminUI.GraphServices;
using Microsoft.Graph;

namespace TeamsAdminUI.Pages
{
    public class CreatedTeamsMeetingModel : PageModel
    {
        private readonly AadGraphApiDelegatedClient _aadGraphApiDelegatedClient;
        private readonly EmailService _emailService;

        public CreatedTeamsMeetingModel(
            AadGraphApiDelegatedClient aadGraphApiDelegatedClient,
            EmailService emailService)
        {
            _aadGraphApiDelegatedClient = aadGraphApiDelegatedClient;
            _emailService = emailService;
        }

        [BindProperty]
        public OnlineMeeting Meeting {get;set;}

        [BindProperty]
        public string EmailSent { get; set; }

        public async Task<ActionResult> OnGetAsync(string meetingId)
        {
            Meeting = await _aadGraphApiDelegatedClient.GetOnlineMeeting(meetingId);
            return Page();
        }

        public async Task<IActionResult> OnPostAsync(string meetingId)
        {
            Meeting = await _aadGraphApiDelegatedClient.GetOnlineMeeting(meetingId);

            foreach (var attendee in Meeting.Participants.Attendees)
            {
                var recipient = attendee.Upn.Trim();
                var message = _emailService.CreateStandardEmail(recipient,
                    Meeting.Subject, Meeting.JoinUrl);
                await _aadGraphApiDelegatedClient.SendEmailAsync(message);
            }

            EmailSent = "Emails sent to all attendees, please check your mailbox";
            return Page();
        }
    }
}

The created Razor page implements the HTML display logic and adds a form to send the emails. The JoinUrl is displayed as this is what you need to open the meeting in a Microsoft Teams application.

@page "{handler?}" @model TeamsAdminUI.Pages.CreatedTeamsMeetingModel @{ } <h4>Teams Meeting Created: @Model.Meeting.Subject</h4> <hr /> <h4>Meeting Id:</h4> <p>@Model.Meeting.Id</p> <h4>JoinUrl</h4> <p>@Model.Meeting.JoinUrl</p> <h4>Participants</h4> @foreach(var attendee in Model.Meeting.Participants.Attendees) { <p>@attendee.Upn</p> } <form method="post"> <div class="form-group"> <input type="hidden" value="@Model.Meeting.Id" /> <button type="submit" class="btn btn-primary"><i class="fas fa-save"></i> Send Mail to attendees</button> </div> </form> <p>@Model.EmailSent</p>

Testing

When the application is started, you can create a new Teams meeting with the required details. The logged in user must have an account with access to Office and be on the same tenant as the Azure App registration setup for the Microsoft Graph permissions. The Teams meeting is organized using the identity that signed in because we used the delegated permissions.

Once the meeting is created, the created Razor page is opened with the details. You can send an email to all attendees or use the JoinUrl directly to open up the Teams meeting.

Creating Teams meetings and sending emails in ASP.NET Core is really useful and I will do a few follow-up posts on this, as there is so much more you can do here once this is integrated.

Links:

https://docs.microsoft.com/en-us/graph/api/application-post-onlinemeetings

https://github.com/AzureAD/microsoft-identity-web

Send Emails using Microsoft Graph API and a desktop client

https://www.office.com/?auth=2

https://aad.portal.azure.com/

https://admin.microsoft.com/Adminportal/Home

Sunday, 19. September 2021

Here's Tom with the Weather

The Code of Capital

I enjoyed reading The Code of Capital by Katharina Pistor. I liked this book because it provides what seems like a rare perspective that is required to reveal the DNA of capitalism rather than the more common unintended obfuscation from books that are lacking. A benefit is that emerging technologies related to assets can be considered in a more coherent way and we can see that issues which s

I enjoyed reading The Code of Capital by Katharina Pistor. I liked this book because it provides what seems like a rare perspective that is required to reveal the DNA of capitalism rather than the more common unintended obfuscation from books that are lacking. A benefit is that emerging technologies related to assets can be considered in a more coherent way and we can see that issues which seem new often resemble those of the past.

For instance, Pistor asks the genesis question:

How should the initial allocation of property rights in the digital world be achieved, and who is in charge?

While my mind somehow retrieves Otisburg, Pistor more helpfully notes:

These are the same questions the commoners disputed with the landlords and the settler challenged the First Peoples about, as discussed in Chapter 2. Ultimately, these issues were resolved by establishing legal priority rights, backed by the coercive powers of the state.

Law relating to capital is often created in private law offices but is dependent on the state for the legal modules used to assemble these legal codes. The most important of these modules are contract law, property rights, collateral law, trust, corporate and bankruptcy law.

These legal codes transform an asset (object, claim, skill or idea) into capital. Because the legal coding gives the asset attributes of priority, durability, universality and convertibility, the asset can create wealth for its holder while surviving challenging economic circumstances and externalizing associated risks to others. The book goes into detail as the legal modules are applied to new assets over time.

Following the evolution of these legal codes helped me understand things that had been a mystery to me. I wish I could have read a book like this a long time ago.

Friday, 17. September 2021

Bill Wendel's Real Estate Cafe

Real estate is still Broken — Time for mass movement to fix it?

As a long-time real estate consumer advocate, it’s obvious that the pandemic has exposed Peak Real Estate Dysfunction and calls for change are coming from inside and… The post Real estate is still Broken — Time for mass movement to fix it? first appeared on Real Estate Cafe.

As a long-time real estate consumer advocate, it’s obvious that the pandemic has exposed Peak Real Estate Dysfunction and calls for change are coming from inside and…

The post Real estate is still Broken — Time for mass movement to fix it? first appeared on Real Estate Cafe.

Thursday, 16. September 2021

Bill Wendel's Real Estate Cafe

Real estate is rigged – Avoid 10 hidden costs of house hunting by being proactive

When asked to describe Real Estate Cafe’s business model over the past three decades, we’ve said (as we did in a recent podcast) that we… The post Real estate is rigged - Avoid 10 hidden costs of house hunting by being proactive first appeared on Real Estate Cafe.

When asked to describe Real Estate Cafe’s business model over the past three decades, we’ve said (as we did in a recent podcast) that we…

The post Real estate is rigged - Avoid 10 hidden costs of house hunting by being proactive first appeared on Real Estate Cafe.


Ludo Sketches

WE DID IT !

Exactly 11 years ago, I was out of Sun/Oracle and started to work for an emerging startup, with a bunch of former colleagues from Sun (although my official starting date in the French subsidiary is November 1st, which is also… Continue reading →

Exactly 11 years ago, I was out of Sun/Oracle and started to work for an emerging startup, with a bunch of former colleagues from Sun (although my official starting date in the French subsidiary is November 1st, which is also my birthdate).

A couple of pictures from my 1st company meeting, in Portugal, end of September 2010.

Fast forward 11 years…

Today is a huge milestone for ForgeRock. We are becoming a public company, with our stock publicly traded under the “FORG” symbol, at the New York Stock Exchange.

I cannot thank enough the founders of ForgeRock for giving me this gigantic opportunity to create the First ForgeRock Engineering Center just outside Grenoble, France, and to drive the destiny of very successful products, especially ForgeRock Directory Services.


Phil Windley's Technometria

Toothbrush Identity

Summary: Identity finds its way into everything—even toothbrushes. Careful planning can overcome privacy concerns to yield real benefits to businesses and customers alike. I have a Philips Sonicare toothbrush. One of the features is a little yellow light that comes on to tell me that the head needs to be changed. The first time the light came on, I wondered how I would reset it once I

Summary: Identity finds its way into everything—even toothbrushes. Careful planning can overcome privacy concerns to yield real benefits to businesses and customers alike.

I have a Philips Sonicare toothbrush. One of the features is a little yellow light that comes on to tell me that the head needs to be changed. The first time the light came on, I wondered how I would reset it once I got a new toothbrush head. I even googled it to find out.

Turns out I needn't have bothered. Once I changed the head the light went off. This didn't happen when I just removed the old head and put it back on. The toothbrush heads have a unique identity that the toothbrush recognizes. This identity is not only used to signal head replacement, but also to put the toothbrush into different modes based on the type of head installed.

Philips calls this BrushSync, but it's just RFID technology underneath the branding. Each head has an RFID chip embedded in it and the toothbrush body reads the data off the head and adjusts its internal state in the appropriate way.

I like this RFID use case because it's got clear benefits for both Philips and their customers. Philips sells more toothbrush heads—so the internet of things (IoT) use case is clearly aligned with business goals. Customers get reminders to replace their toothbrush head and can reset the reminder by simply doing what they'd do anyway—switch the head.

There aren't many privacy concerns at present. But as more and more products include RFID chips, you could imagine scanners on garbage trucks that correlate what gets used and thrown out with an address. I guess we need garbage cans that can disable RFID chips when they're thrown away.

I was recently talking to a friend of mine, Eric Olafson, who is a founding investor in Riot. Riot is another example of how thoughtfully applied RFID-based identifiers can solve business and customer problems. Riot creates tech that companies can use for RFID-based, in-store inventory management. This solves a big problem for stores that often don't know what inventory they have on hand. With Riot, a quick scan of the store each morning updates the inventory management system, showing where the inventory data is out of sync with the physical inventory. As more and more of us go to the physical store because the app told us they had the product we wanted, it's nice to know the app isn't lying. Riot puts the RFID on the tag, not the clothing, dealing with many of the privacy concerns.

Both BrushSync and Riot use identity to solve business problems, showing that unique identifiers on individual products can be good for business and customers alike. This speaks to the breadth of identity and its importance in areas beyond associating identifiers with people. I've noticed an uptick in discussions at IIW about identity for things and the impact that can have. The next IIW is Oct 12-14—online—join us if you're interested.

Photo Credit: SoniCare G3 from Philips USA (fair use)

Tags: identity iot rfid


Nader Helmy

Adding DID ION to MATTR VII

Since the beginning of our journey here at MATTR, decentralization and digital identity have been central to our approach to building products. As part of this, we’ve supported Decentralized Identifiers (or DIDs) since the earliest launch of our platform. We’ve also considered how we might give you more options to expand the utility of these identities over time. An important milestone The

Since the beginning of our journey here at MATTR, decentralization and digital identity have been central to our approach to building products. As part of this, we’ve supported Decentralized Identifiers (or DIDs) since the earliest launch of our platform. We’ve also considered how we might give you more options to expand the utility of these identities over time.

An important milestone

The W3C working group responsible for Decentralized Identifiers recently published the DID v1.0 specification under “Proposed Recommendation” status. This is a significant milestone as DIDs approach global standardization with the pending approval of the W3C Advisory Committee.

DIDs are maturing, but so is the environment and context in which they were originally designed. With a complex ecosystem consisting of dozens of different methodologies and new ones emerging on a regular basis, it’s important to balance the potential of this decentralized approach with a realistic approach for defining the real utility and value of each DID method. For example, the DID Method Rubric provides a good frame of reference for comparing different approaches.

Different types of DIDs can be registered and anchored using unique rules specific to the set of infrastructure where they’re stored. Since DIDs provide provenance for keys which are controlled by DID owners, the rules and systems that govern each kind of DID method have a significant impact on the trust and maintenance model for these identifiers. This is the key thing to remember when choosing a DID method that makes sense for your needs.

Our supported DID methods

In MATTR VII, by supporting a variety of DID methods — deterministic or key-based DIDs, domain-based DIDs, and ledger-based DIDs — we are able to provide tools which can be customized to fit the needs of individual people and organizations.

- Key-based DIDs — Largely static, easy to create, and locally controlled. This makes them a natural choice for applications where there’s a need to manage connections and interactions with users directly.
- DIDs anchored to web domains — These have a different trust model, where control over the domain can bootstrap a connection to a DID. This makes a lot of sense for organizations with existing domain names that already transact and do business online, and can extend their brand and reputation to the domain of DIDs.
- Ledger-based DIDs — These offer a distributed system of public key infrastructure which is not centrally managed or controlled by a single party. While ledgers differ in their governance and consensus models, they ultimately provide a backbone for anchoring digital addresses in a way which allows them to be discovered and used by other parties. This can be a useful feature where a persistent identifier is needed, such as in online communication and collaboration. (Illustrative examples of each identifier syntax are shown below.)
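To make the three types concrete, here are purely illustrative identifiers (not real, resolvable DIDs) showing roughly what each method's syntax looks like:

did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK
did:web:example.com
did:ion:EiClkZMDxPKqC9c-umQfTkR8vvZ9JPhl_xLDI9Nfk38w5w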

There is no single DID method or type of DID which, at the moment, should be universally applied to every situation. However, by using the strengths of each approach we can allow for a diverse ecosystem of digital identifiers enabling connections between complex networks of people, organizations and machines.

To date, we’ve provided support for three main DID methods in our platform: DID Key, DID Web, and DID Sovrin. These align with three of the central types of infrastructure outlined above.

Introducing DID ION

We’re proud to announce that as of today we’ve added support for DID ION, a DID method which is anchored to IPFS and Bitcoin. We’ve supported the development of the Sidetree protocol that underpins DID ION for some time as it has matured in collaboration with working group members at the Decentralized Identity Foundation.

With contributions from organizations such as Microsoft, Transmute, and SecureKey, Sidetree and DID ION have emerged as a scalable and enterprise-ready solution for anchoring DIDs. The core idea behind the Sidetree protocol is to create decentralized identifiers that can run on any distributed ledger system. DID ION is an implementation of that protocol which backs onto the Bitcoin blockchain, one of the largest and most used public ledger networks in the world.

Sidetree possesses some unique advantages not readily present in other DID methods, such as low cost, high throughput, and built-in portability of the identifier. This provides a number of benefits to people and organizations, especially in supporting a large volume of different kinds of connections with the ability to manage and rotate keys as needed. We have added end-to-end capabilities for creating and resolving DIDs on the ION network across our platform and wallet products.

Although DID ION is just one implementation of the Sidetree protocol, we see promise in other DID methods using Sidetree and will consider adding support for these over time as and when it makes sense. We’ll also continue to develop Sidetree in collaboration with the global standards community to ensure that this protocol and the ION Network have sustainable futures for a long time to come.

At the same time, the community around DID Sovrin is developing a new kind of interoperability by designing a DID method that can work for vast networks of Indy ledgers, rather than focusing on the Sovrin-specific method that’s been used to date. As DID Sovrin gets phased out of adoption, we’re simultaneously deprecating standard support for DID Sovrin within MATTR VII. We’ll be phasing this out shortly with upcoming announcements for customers building on our existing platform.

If you’ve got any use cases that utilize DID Sovrin or want to discuss extensibility options, please reach out to us on any of our social channels or at info@mattr.global and we’ll be happy to work with you.

Looking ahead

We believe this is a big step forward in providing a better set of choices when it comes to digital identity for our customers. From the start, we have designed our platform with flexibility and extensibility in mind, and will continue to support different DID methods as the market evolves.

We look forward to seeing how these new tools can be used to solve problems in the real world and will keep working to identify better ways to encourage responsible use of digital identity on the web.

Adding DID ION to MATTR VII was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 15. September 2021

Bill Wendel's Real Estate Cafe

Only pay for what you need: Will DOJ vs NAR result in fee-for-service future?

For the second time in recent weeks, an industry thought-leader has pointed to Fee-for-Service / Menu of Service business models in his ongoing series: Lesson… The post Only pay for what you need: Will DOJ vs NAR result in fee-for-service future? first appeared on Real Estate Cafe.

For the second time in recent weeks, an industry thought-leader has pointed to Fee-for-Service / Menu of Service business models in his ongoing series: Lesson…

The post Only pay for what you need: Will DOJ vs NAR result in fee-for-service future? first appeared on Real Estate Cafe.


blog.deanland.com

Oh, It Drives Me Nuts

My Drupal setup is broken. Some server issue. I have no idea how to fix it. My guess is the alert part that renders when the blog is visited will go away once I clean out all the Spam messages. That worked once before. But ... last time I tried to do that, it wouldn't let me do that. Yeah. Locked out of capability on my own blog due to some glitch. Where can I find someone who REALLY KNO

My Drupal setup is broken. Some server issue. I have no idea how to fix it. My guess is the alert part that renders when the blog is visited will go away once I clean out all the Spam messages. That worked once before.

But ... last time I tried to do that, it wouldn't let me do that. Yeah. Locked out of capability on my own blog due to some glitch.

Where can I find someone who REALLY KNOWS THEIR STUFF around a server and around Drupal to help me out of this? Technical assistance is in order. Know such a person?

read more




Identity Praxis, Inc.

Mobile Messaging, it’s more than 160 characters! It Is Time to Get Strategic

by Jay O’Sullivan, Michael J. Becker Over the past several years, the world of messaging has morphed in front of our very eyes. There are now more than six messaging channel categories, including text (inc. SMS, MMS, RCS), email, social media, chatbots (i.e. for support and conversational commerce), proximity alerts, and more than a dozen over-the-top apps […] The post Mobile Messaging

by Jay O’Sullivan and Michael J. Becker

Over the past several years, the world of messaging has morphed in front of our very eyes. There are now more than six messaging channel categories, including text (inc. SMS, MMS, RCS), email, social media, chatbots (i.e. for support and conversational commerce), proximity alerts, and more than a dozen over-the-top apps (WhatsApp, WeChat, Viber, Apple Business Chat, Line, iMessage, Facebook Messenger, Pinterest, YouTube, Instagram, LinkedIn, Twitter, and more). What does this mean for organizations? Well, for some they may think it means that their organization has a variety of channels to choose from to reach the people they serve (aka consumers, shoppers, patients, investors, etc.). On the surface, this is true. But what it really means is that organizations must start developing their multi-channel messaging muscles so that they can reach individuals not on the organization’s preferred time and medium of communication, but on the individual’s preferred time and medium of communication.

To effectively manage a messaging program, the most important thing to realize is that messaging channels are NOT all created equal. Every channel has a different audience profile, audience expectations, norms, message lengths, message formats, ways to send and receive messages, and methods for reporting across the engagement continuum (e.g., transaction through relationship). Messaging is an ecosystem all of its own that must be nurtured if you want to find success.

Need I say more? Of course, I do!

“The customer is everywhere and nowhere.” Todd Harrison, SVP of Digital at Skechers (Harrison, 2021)

You may find yourself with prospects and customers and few ways to reach them? You may find yourself with tired campaigns that simply do not perform the way you want them to? You may find yourself losing touch with your most valuable asset, your first-party database. You may ask yourself, how did I get here? You may ask yourself, which channel should I use to achieve the best results?

Todd is right, “the customer is everywhere, and nowhere.” The way to address this problem is to put your customer at the heart of your business. What does this mean? It means you must learn to collect and listen to their preferences. To interact and engage them, not just barrage them with mono-directional messages that basically say, “I have an idea, why don’t you buy more stuff from us?” People want to be respected, heard, and served. You need to treat them as the hero of your story. To make this happen, you need to understand them. You need to focus on building out an end-to-end communication strategy, which includes building a preference-based opt-in database, often referred to as a customer data platform (CDP). Depending on your needs you’ll need dedicated messaging platforms (e.g. SMS, 10DLC, Email, etc.) or possibly a multichannel communication platform as a service (CPaaS) to manage real-time messaging across all channels. Moreover, you’ll need a content strategy. And soon, to meet the expectations of those you serve, you’ll need to build the capability to deliver predictive, personalized, contextually aware content and offers, with real-time feedback loops so that you can listen to customers and respond to them when they reply or initiate a conversation with you. In the near future, we will find that messaging has become the cornerstone of the vast majority of businesses’ engagement strategies, but we have a long way to go as only six to ten percent of companies are actively nurturing and running commercial messaging programs today (Ruppert, 2021).

Yes, you can succeed with the tried and true tactics, but for how long? The marketplace and consumer sentiments are changing, and you must change with them, or you’ll be left behind.

“By 2030, society will no longer tolerate a business model that relies on mass transactions of increasingly sensitive personal data: a different system will be in place.” (Data 2030, 2020)

We’ll cover more on the bigger picture in future articles, let’s get back to messaging.

What it takes to run a successful messaging program

The key to a successful messaging program is to start simple and build from there.

You should not be frightened by the sea of mobile messaging opportunities. Embrace them, and take them one step at a time. Take it from us, with the simplest of messaging programs you have the potential to see material success. For example, we are aware of a retail company that, in 12 months after launching a text-based SMS program sprinkled with the occasional MMS message, acquired 700,000 subscribers and is now generating over $1.4M in monthly sales.

Photo by Daria Nepriakhina on Unsplash

Where do you start?

SMS, or text messaging, is the most straightforward, omnipresent, and ubiquitous messaging channel that you can use to engage people anywhere in the world. And, best of all, it is proven. Open rates in SMS are often 95% and higher.

Text messaging is not just about driving sales, and although that is the end-game, it is about being of service to your audience throughout every stage of the relationship they have with you. At the most basic level, text messaging provides engagement not just as a utility but as a consumer relationship tool – securing interactions, driving pre- and post-sales engagements, gathering feedback through surveys, and fostering loyalty and support. You can, and should, use it across every stage of the purchase funnel: discovery, awareness, consideration, conversion, onboarding and adoption, loyalty, support, and offboarding.

NOTE: Keep an eye out for 10DLC, a new messaging standard that kicked off in June 2021. Our next article will be on this.

But What if?

Yes, text messaging, aka SMS, is ubiquitous, but what should you do when you need to grow beyond what texting has to offer? Remember, “the medium is the message” (Marshall McLuhan, 1964). Text messaging is not the right channel for every engagement.

What if you needed to reach and engage people in China, Brazil, Germany, or Australia? What if your product was best suited to be explained via a picture or a video? What are the best channels to reach these consumers? Text is not always the answer!

Photo by Nathan Dumlao on Unsplash

Facebook Messenger, Instagram, WhatsApp, and other messaging channels are making a considerable play at being complementary messaging options for consumers and, in some cases, the primary channel. Geographic location is the main driver for these channel decisions, as well as the use case. But the numbers don’t lie in regard to the global user base of over 2 billion consumers. (Chart: OTT Messaging Leaders.)

Where Should You Start?

Sounds like a lot? Sounds complicated? It’s really not if you think about these few items and take them one at a time in your messaging and engagement roadmap.

Here are a few things for you to consider that will help mold your direction. First, start with a few pillars: Message type (use-case, transactional, marketing, support…), Geolocation(s), Staffing, and Existing Partners. Consider:

- Are you currently offering transactional or marketing text messaging?
- Which markets and countries are you looking to reach?
- How much is your social media advertising budget?
- Are you considering messaging for support (don’t just think SMS, think OTT)?
- Do you have internal staff to manage your messaging programs?
- Are you currently using a Marketing Automation platform?
- Are you working with external guides and mentors (remember: coaches and mentors are required if you want to be an expert)?

Enter Personal Data & Identity

It is critical for you to remember that all effective commerce starts with a meaningful connection and relevant and consistent communication. The path to relevance is through data, particularly personal data, in fact, a new asset class that you need to wield with precision.

The Bottomline

For the majority, SMS (text) will be your first entry into the messaging space. But, depending on how you answered the questions above, the odds are you will be a perfect candidate to implement one of the other messaging channels too. Your next step is to evaluate your current programs and roadmap of “Needs” vs “Wants” and map out the use cases, i.e., the experience you want to offer your customers, prior to finding a good technology and solutions partner.

Your “Needs” may have to do with driving revenue, loyalty sign-ups, gathering preference data, support efficiencies, and personalization, and could be as simple as mapping out your KPIs and creating efficiencies in your current messaging strategy. Day, time, type of message, MMS vs. SMS, frequency, and more have a say in the results of your program. Remember, planning before tactics!

Messaging has proven to be the most effective tool, especially when used in a thoughtful and meaningful experience, for people-centric engagement programs. But in the age of the connected individual, the bottom line is that it is a necessity, not a nice to have.

There is a lot to navigate, but this is nothing like any other part of your business. You can do it. Create a roadmap. Find your partners. Take one step at a time!

REFERENCES

Data 2030: What does the future of data look like? | WPP. (2020). WPP. https://www.wpp.com/wpp-iq/2020/11/data-2030—what-does-the-future-of-data-look-like

Harrison, T. (2021). Todd Harrison, SVP Digital, Skechers. In LinkedIn. https://www.linkedin.com/in/livetomountainbike/


Most popular global mobile messenger apps as of July 2021, based on number of monthly active users. (2021). Statista. https://www.statista.com/statistics/258749/most-popular-global-mobile-messenger-apps/

Ruppert, P. (2021, September 15). PD&I Market Assessment Interview with Paul Rupert (M. Becker, Interviewer) [Zoom].

The post Mobile Messaging, it’s more than 160 characters! It Is Time to Get Strategic appeared first on Identity Praxis, Inc..

Tuesday, 14. September 2021

Mike Jones: self-issued

OpenID Connect Presentation at 2021 European Identity and Cloud (EIC) Conference

I gave the following presentation on the OpenID Connect Working Group during the September 13, 2021 OpenID Workshop at the 2021 European Identity and Cloud (EIC) conference. As I noted during the talk, this is an exciting time for OpenID Connect; there’s more happening now than at any time since the original OpenID Connect specs […]

I gave the following presentation on the OpenID Connect Working Group during the September 13, 2021 OpenID Workshop at the 2021 European Identity and Cloud (EIC) conference. As I noted during the talk, this is an exciting time for OpenID Connect; there’s more happening now than at any time since the original OpenID Connect specs were created!

OpenID Connect Working Group (PowerPoint) (PDF)

@_Nat Zone

Announcing GAIN – Global Assured Identity Network

At 7:30 PM local (Munich) time on the 13th, at the Europ… The post Announcing GAIN – Global Assured Identity Network first appeared on @_Nat Zone.

At 7:30 PM local (Munich) time on the 13th, we announced GAIN – Global Assured Identity Network at the European Identity and Cloud Conference 2021. It is an overlay network in which every participant has had their identity verified.

The slides I used are available below.

The white paper itself can be downloaded from https://gainforum.org/.

Inquiries about participation should be sent to DigitalTrust _at_ iif.com (the secretariat within the International Institute of Finance; replace _at_ with @). Inquiries about joining the POC should go to donna _at_ oidf.org.

What is GAIN?

I'll leave the details to the English blog post below, under which I've also included a DeepL machine translation for reference. (It has not been verified, so there may well be errors and omissions, but it should be good enough for a rough overview. The original is in English, so please refer to that for the details.)

Announcing GAIN: Global Assured Identity Network

At the beginning of the Internet, there was trust. Everyone was known to the organizations they participated in, and everyone knew they would be held accountable.

However, when the Internet became commercially available in the 90s, that trust was lost. The majority of participants became anonymous and came to believe they would not be held accountable. As a result, many criminals and bad actors became active on the network. The wild, wild west era of the Internet had arrived.

Since then, a great deal of effort has gone into improving the situation and restoring trust, but with little effect. Financial crime, born of fraud whose impact on people is immeasurable, costs the world economy as much as 5% of annual GDP. Enormous sums are spent on anti-money laundering and counter-terrorist financing, but so far to little effect. For every $1,000 of "illicit funds" in the financial system, $100 is spent on compliance, yet only $1 is intercepted. A mere 0.1%.

On the other side of this coin is the problem of financial inclusion.

Many people are financially excluded because the cost of the identity verification involved in opening an account makes them commercially unviable to serve.

Being able to act anonymously has, in principle, major privacy benefits. What we have actually achieved, however, is a situation in which personal data is put at risk and good actors are easily tracked by bad actors. It is, so to speak, privacy for skilled bad actors, with no privacy for the rest of us.

Despite thirty years of time and cost, why have we not succeeded in stopping any of this?

It is because we have not addressed the fundamental problems of identity and accountability.

What we need is to re-establish the accountability of every participant in the ecosystem.

Such ecosystems can be interconnected based on the principles of comparability and mutual recognition, ultimately forming a network of accountable ecosystems that covers most of the population of the cyber world.

One such ecosystem is what we propose today: GAIN, the Global Assured Identity Network.

GAIN is an overlay network on the Internet made up solely of accountable participants. Every participant has undergone identity verification to meet regulatory requirements when opening an account with a hosting organization, primarily banks and other regulated entities. These high-assurance attributes are used as the basis for the identity information passed from identity information providers to relying parties (RPs) at the end user's direction.

End users become able to prove attributes such as their age as attested by a trusted entity, rather than self-asserted. This is very powerful.

For example, you can prove "I am over 18" using an attestation from your bank, rather than just saying "trust me."

This is also very good from a privacy standpoint.

With self-assertion, the recipient of the information cannot believe what she says, so she usually has to present an identity document such as a driver's license to prove it. At that point she has disclosed far more attributes than just her age.

This capability depends on identity verification having been performed beforehand. Identity verification is a very costly process.

Doing it solely for this purpose could be prohibitively expensive.

Banks and other regulated institutions, which must perform identity verification for their core business anyway, have an advantage here. It is like the way a certain giant online retailer began offering cloud computing services to make better use of redundant capacity that was only needed during the holiday season. Other hosting vendors could not compete.

The fact that an identity has been proven by an identity information provider does not mean individuals cannot act anonymously on GAIN. Individuals can act anonymously or pseudonymously toward merchants and other participants on the network. In fact, disclosure of personal attributes is normally kept to a minimum. If they commit fraud, however, individuals can be traced and held accountable under due process. An entity called a designated opener can open the transaction history and point to the person involved at the time.

Every business entity on the network is vetted as part of creating and maintaining a business account with a hosting organization and cannot remain anonymous. They are accountable. As a result, consumers can transact with peace of mind, and relying parties see their business grow. All good actors benefit.

The effect is global. Merchants register and contract once, and gain access to network participants around the world.

Individuals can engage with the assurance that every relying party in the ecosystem exists and is accountable. Bad actors will certainly still exist, but their scope will be far more limited. The ecosystem will be much safer than today's Internet. It will be good enough for participants to take action. Trust is established once again.

No process is perfect. Bad actors will certainly exist in GAIN as well, but because they will be the exception rather than the norm, the problem becomes far more manageable. It will be "good enough" to take action.

In other words, trust is re-established.

Hellō

Incidentally, and independently of this, Dick Hardt announced Hellō.

GAIN was mentioned several times during the Hellō announcement as well, and I expect the two will likely end up interconnecting.

They look quite complementary to me!

— Dick Hardt (@DickHardt) September 14, 2021
A scene from EIC 2021 (C) Nat Sakimura

The post Announcing GAIN – Global Assured Identity Network first appeared on @_Nat Zone.

Monday, 13. September 2021

Damien Bod

Implementing Angular Code Flow with PKCE using node-oidc-provider

This post shows how an Angular application can be secured using Open ID Connect code flow with PKCE and node-oidc-provider identity provider. This requires the correct configuration on both the client and the identity provider. The node-oidc-provider clients need a configuration for the public client which uses refresh tokens. The grant_types ‘refresh_token’, ‘authorization_code’ are added […]

This post shows how an Angular application can be secured using Open ID Connect code flow with PKCE and the node-oidc-provider identity provider. This requires the correct configuration on both the client and the identity provider.

The node-oidc-provider clients need a configuration for the public client which uses refresh tokens. The grant_types ‘refresh_token’, ‘authorization_code’ are added as well as the offline_access scope.

clients: [
  {
    client_id: 'angularCodeRefreshTokens',
    token_endpoint_auth_method: 'none',
    application_type: 'web',
    grant_types: ['refresh_token', 'authorization_code'],
    response_types: ['code'],
    redirect_uris: ['https://localhost:4207'],
    scope: 'openid offline_access profile email',
    post_logout_redirect_uris: ['https://localhost:4207'],
  },
],

The Angular client is implemented using angular-auth-oidc-client. The offline_access scope is requested as well as the prompt=consent. The nonce validation after a refresh is ignored.

import { NgModule } from '@angular/core';
import { AuthModule, LogLevel } from 'angular-auth-oidc-client';

@NgModule({
  imports: [
    AuthModule.forRoot({
      config: {
        authority: 'http://localhost:3000',
        redirectUrl: window.location.origin,
        postLogoutRedirectUri: window.location.origin,
        clientId: 'angularCodeRefreshTokens',
        scope: 'openid profile offline_access',
        responseType: 'code',
        silentRenew: true,
        useRefreshToken: true,
        logLevel: LogLevel.Debug,
        ignoreNonceAfterRefresh: true,
        customParams: {
          prompt: 'consent', // login, consent
        },
      },
    }),
  ],
  exports: [AuthModule],
})
export class AuthConfigModule {}

That’s all the configuration required.
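As a quick usage sketch (not from the original post), an app component can complete the flow on startup and trigger a login. This assumes the standard OidcSecurityService from angular-auth-oidc-client; the exact shape of the checkAuth() result varies by library version:

import { Component, OnInit } from '@angular/core';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
  selector: 'app-root',
  template: `<button (click)="login()">Login</button>`,
})
export class AppComponent implements OnInit {
  constructor(private oidcSecurityService: OidcSecurityService) {}

  ngOnInit(): void {
    // completes the code flow callback / silent renew on app start
    this.oidcSecurityService.checkAuth().subscribe((result) =>
      console.log('authentication result', result)
    );
  }

  login(): void {
    // redirects to node-oidc-provider for the code flow with PKCE
    this.oidcSecurityService.authorize();
  }
}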

Links:

https://github.com/panva/node-oidc-provider

https://github.com/damienbod/angular-auth-oidc-client

Saturday, 11. September 2021

Here's Tom with the Weather

Mont Sainte Anne

I enjoyed hiking the “Le Sentier des Pionniers” trail at Mont Sainte Anne today. Definitely ordering pizza tonight.

I enjoyed hiking the “Le Sentier des Pionniers” trail at Mont Sainte Anne today. Definitely ordering pizza tonight.

Friday, 10. September 2021

Moxy Tongue

Not Moxie Marlinspike

Oft confused, no more. https://github.com/lifewithalacrity/lifewithalacrity.github.io/commit/52c30ec1d649494066c3e9c9fa1bbaf95cd6386f  https://github.com/lifewithalacrity/lifewithalacrity.github.io/commit/d7252be02cb351368c2c1bb00c66ad8d15ef5e21 Self-Sovereign Identity has deep roots. It did not just emerge in 2016 after a blog post was written. It did not fail to exist when wikipedia edit

Oft confused, no more.

https://github.com/lifewithalacrity/lifewithalacrity.github.io/commit/52c30ec1d649494066c3e9c9fa1bbaf95cd6386f 

https://github.com/lifewithalacrity/lifewithalacrity.github.io/commit/d7252be02cb351368c2c1bb00c66ad8d15ef5e21

Self-Sovereign Identity has deep roots. It did not just emerge in 2016 after a blog post was written. It did not fail to exist when wikipedia editors denied it subject integrity with the stated message: "good luck with that".

Self-Sovereign Identity is a structural result of accurate Sovereign source authority, expressive by Individuals. People, using tools, can do many amazing things.

The social telling of information: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html constructed to socialize access to methods being advanced by social groups of one form or another, can do nothing to alter the root ownership and expression of "self-sovereign rights". Capturing a social opportunity, or servicing a social service deployed for public benefit, has structural requirements. The "proof is in the pudding" is a digital data result now in the real world. Data literacy is a functional requirement of baseline participation in a "civil society". Root Sovereignty is a design outcome with living data results that must be evaluated for accuracy & integrity. Functional literacy is a requirement for system integrity.

Words carry meaning; literary abstractions of often real and digital objects, methods, processes, ideas. When a word's meaning is not accurate, or not accurate enough, language is negatively affected, and as a result, communication of integrity between people is affected. People, Individuals all, living among one another, is the only actual reality of human existence in the Universe. This structural requirement for accuracy can only translate accurately enough by using the right words. "We the people", if not structurally referring to the Individuals walking in local communities with blood running through their veins, but instead translated via legalese into a legal abstraction giving force to Government administration methods, represents a perversion of human intent.

While amendments have been offered to fix past perversions of human intent and accurate use of language, the root error of omission providing cover for the mis-translation of "We The People" in the United States of America in 2021 is not a construct of words, or literary amendments. Instead, it is a restructuring of baseline participation in the administration of human authority within a Sovereign territory. "We The People", Individuals All, give administrative integrity to the Government derived "of, by, for" our consenting authority. Our Sovereign source authority, represented as Individuals, by our own self-Sovereign identity is the means by which this Nation came into existence, and the only means by which it continues.

America can not be provisioned from a database. People own root authority, personally, or not at all.








Jon Udell

Query like it’s 2022

Monday will be my first day as community lead for Steampipe, a young open source project that normalizes APIs by way of Postgres foreign data wrappers. The project’s taglines are select * from cloud and query like it’s 1992; the steampipe.io home page nicely illustrates these ideas. I’ve been thinking about API normalization for a … Continue reading Query like it’s 2022

Monday will be my first day as community lead for Steampipe, a young open source project that normalizes APIs by way of Postgres foreign data wrappers. The project’s taglines are select * from cloud and query like it’s 1992; the steampipe.io home page nicely illustrates these ideas.

I’ve been thinking about API normalization for a long time. The original proposal for the World Wide Web says:

Databases

A generic tool could perhaps be made to allow any database which uses a commercial DBMS to be displayed as a hypertext view.

We ended up with standard ways for talking to databases — ODBC, JDBC — but not for expressing them on the web.

When I was at Microsoft I was bullish on OData, an outgrowth of Pablo Castro’s wonderful Project Astoria. Part of the promise was that every database-backed website could automatically offer basic API access that wouldn’t require API wrappers for everybody’s favorite programming language. The API was hypertext; a person could navigate it using links and search. Programs wrapped around that API could be useful, but meaningful interaction with data would be possible without them.

(For a great example of what that can feel like, jump into the middle of one of Simon Willison’s datasettes, for example san-francisco.datasettes.com, and start clicking around.)

Back then I wrote a couple of posts on this topic[1, 2]. Many years later OData still hasn’t taken the world by storm. I still think it’s a great idea and would love to see it, or something like it, catch on more broadly. Meanwhile Steampipe takes a different approach. Given a proliferation of APIs and programming aids for them, let’s help by providing a unifying abstraction: SQL.

I’ve done a deep dive into the SQL world over the past few years. The first post in a series I’ve been writing on my adventures with Postgres is what connected me to Steampipe and its sponsor (my new employer) Turbot. When you install Steampipe it brings Postgres along for the ride. Imagine what you could do with data flowing into Postgres from many different APIs and filling up tables you can view, query, join, and expose to tools and systems that talk to Postgres. Well, it’s going to be my job to help imagine, and explain, what’s possible in that scenario.
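To make that concrete, here is the kind of query the project has in mind. This is a sketch rather than anything from the post: it assumes Steampipe's AWS plugin is installed, and the table and column names are meant to illustrate the pattern rather than document a guaranteed schema.

select
  name,
  region
from
  aws_s3_bucket
where
  region = 'us-east-1';

The point is that the API call, authentication, and pagination all happen behind the foreign data wrapper; what you write is ordinary Postgres SQL.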

Meanwhile I need to give some thought to my Twitter tag line: patron saint of trailing edge technologies. It’s funny and it’s true. At BYTE I explored how software based on the Net News Transfer Protocol enabled my team to do things that we use Slack for today. At Microsoft I built a system for community-scale calendaring based on iCalendar. When I picked up NNTP and iCalendar they were already on the trailing edge. Yet they were, and especially in the case of iCalendar still are, capable of doing much more than is commonly understood.

Then of course came web annotation. Although Hypothesis recently shepherded it to W3C standardization it goes all the way back to the Mosaic browser and is exactly the kind of generative tech that fires my imagination. With Hypothesis now well established in education, I hope others will continue to explore the breadth of what’s possible when every document workflow that needs to can readily connect people, activities, and data to selections in documents. If that’s of interest, here are some signposts pointing to scenarios I’ve envisioned and prototyped.

And now it’s SQL. For a long time I set it aside in favor of object, XML, and NoSQL stores. Coming back to it, by way of Postgres, has shown me that:

– Modern SQL is more valuable as a programming language than is commonly understood

– So is Postgres as a programming environment

The tagline query like it’s 1992 seems very on-brand for me. But maybe I should let go of the trailing-edge moniker. Nostalgia isn’t the best way to motivate fresh energy. Maybe query like it’s 2022 sets a better tone? In any case I’m very much looking forward to this next phase.

Thursday, 09. September 2021

Doc Searls Weblog

The Matrix 4.0

The original Matrix is my favorite movie. Not because it was the best movie. Rather because it’s the most important, at least for our Digital Age. (It’s also among the most rewatchable. Hear that, Ringer? Rewatch the whole series before Christmas.) And now the fourth Matrix is coming out: The Matrix Resurrections. Here’s the @TheMatrixMovie‘s […]

The original Matrix is my favorite movie. Not because it was the best movie. Rather because it’s the most important, at least for our Digital Age. (It’s also among the most rewatchable. Hear that, Ringer? Rewatch the whole series before Christmas.)

And now the fourth Matrix is coming out: The Matrix Resurrections. Here’s the @TheMatrixMovie‘s new pinned tweet of the first trailer.

Yeah, it’s a sequel, and sequels tend to sag. Even The Godfather Part 2. (But that one only sagged in the relative sense, since Part 1 was perfect.)

If anything bothers me about this next Matrix it’s that what had seemed an untouchable Classic is now a Franchise. Not a bad beast, the Franchise. Just different: same genus, different species.

Given the way these things go, my expectations are low and my hopes high.

Meanwhile, I’m wondering why Laurence Fishburne, Hugo Weaving, and Lilly Wachowski don’t return in Resurrections. Not being critical here. Just curious.

Bonus link: a must-see from 2014.

Also, from my old blog in 2003:

William Blaze has an interesting take on the political agenda of The Matrix Franchise.

My own thoughts about the original Matrix (that it was a metaphor for marketing, basically) are here, here and here.†

That was back when blogging was blogging. Which it will be again, at least for some of us, when Dave Winer is finished rebooting the practice with Drummer.

† I know those two links are duplicates, but I don’t have the time to hunt down the originals. And Google is no help, because it ignores lots of old material, including much of my first seven years of blogging.


Is there a way out of password hell?

Passwords are hell. Worse, to make your hundreds of passwords safe as possible, they should be nearly impossible for others to discover—and for you to remember. Unless you’re a wizard, this all but requires using a password manager.† Think about how hard that job is. First, it’s impossible for developers of password managers to do […]

Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they should be nearly impossible for others to discover—and for you to remember.

Unless you’re a wizard, this all but requires using a password manager.†

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

– Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
– Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
– Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
– All of those UIs change, sometimes often.

Keeping up with that mess, while also keeping personal data safe from both user error and determined bad actors, is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the solutions required.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.

† We need to fix that Wikipedia page.

Wednesday, 08. September 2021

Phil Windley's Technometria

Fluid Multi-Pseudonymity

Summary: Fluid multi-pseudonymity perfectly describes the way we live our lives and the reality that identity systems must realize if we are to live authentically in the digital sphere. In response to my recent post on Ephemeral Relationships, Emil Sotirov tweeted that this was an example of "fluid multi-pseudonymity as the norm." I love that phrase because it succinctly describes s

Summary: Fluid multi-pseudonymity perfectly describes the way we live our lives and the reality that identity systems must realize if we are to live authentically in the digital sphere.

In response to my recent post on Ephemeral Relationships, Emil Sotirov tweeted that this was an example of "fluid multi-pseudonymity as the norm." I love that phrase because it succinctly describes something I've been trying to explain for years.

Emil was riffing on this article in Aeon, You are a network, which says "Selves are not only 'networked', that is, in social networks, but are themselves networks." I've never been a fan of philosophical introspections in digital identity discussions. I just don't think they often lead to useful insights. Rather, I like what Joe Andrieu calls functional identity: Identity is how we recognize, remember, and ultimately respond to specific people and things. But this insight, that we are multiple selves, changing over time—even in the course of a day—is powerful. And as Emil points out, our real-life ephemeral relationships are an example of this fluid multi-pseudonymity.

The architectures of traditional, administrative identity systems do not reflect the fluid multi-pseudonymity of real life and consequently are mismatched to how people actually live. I frequently see calls for someone, usually a government, to solve the online identity problem by issuing everyone a permanent "identity." I put that in quotes because I hate when we use the word "identity" in that way—as if everyone has just one and once we link every body (literally) to some government issued identifier and a small number of attributes all our problems will disappear.

These calls don't often come from within the identity community. Identity professionals understand how hard this problem is and that there's no single identity for anyone. But even identity professionals use the word "identity" when they mean "account." I frequently make an ass of myself by pointing that out. I get invited to fewer meetings that way. The point is this: there is no "identity." And we don't build identity systems to manage identities (whatever those are), but, rather, relationships.

All of us, in real life and online, have multiple relationships. Many of those are pseudonymous. Many are ephemeral. But even a relationship that starts pseudonymous and ephemeral can develop into something permanent and better defined over time. Any relationship we have, even those with online services, changes over time. In short, our relationships are fluid and each is different.

Self-sovereign identity excites me because, for the first time, we have a model for online identity that can flexibly support fluid multi-pseudonymity. Decentralized identifiers and verifiable credentials form an identity metasystem capable of being the foundation for any kind of relationship: ephemeral, pseudonymous, ad hoc, permanent, personal, commercial, legal, or anything else. For details on how this all works, see my Frontiers article on the identity metasystem.

An identity metasystem that matches the fluid multi-pseudonymity inherent in how people actually live is vital for personal autonomy and ultimately human rights. Computers are coming to intermediate every aspect of our lives. Our autonomy and freedom as humans depend on how we architect this digital world. Unless we put digital systems under the control of the individuals they serve, without intervening administrative authorities, and make them as flexible as our real lives demand, the internet will undermine the quality of life it is meant to bolster. The identity metasystem is the foundation for doing that.

Photo Credit: Epupa Falls from Travel Trip Journey (none)

Tags: identity relationships ssi pseudonymity

Tuesday, 07. September 2021

Jon Udell

The Postgres REPL

R0ml Lefkowitz’s The Image of Postgres evokes the Smalltalk experience: reach deeply into a running system, make small changes, see immediate results. There isn’t yet a fullblown IDE for the style of Postgres-based development I describe in this series, though I can envision a VSCode extension that would provide one. But there is certainly a … Continue reading The Postgres REPL

R0ml Lefkowitz’s The Image of Postgres evokes the Smalltalk experience: reach deeply into a running system, make small changes, see immediate results. There isn’t yet a fullblown IDE for the style of Postgres-based development I describe in this series, though I can envision a VSCode extension that would provide one. But there is certainly a REPL (read-eval-print loop), it’s called psql, and it delivers the kind of immediacy that all REPLs do. In our case there’s also Metabase; it offers a complementary REPL that enhances its power as a lightweight app server.

In the Clojure docs it says:

The Clojure REPL gives the programmer an interactive development experience. When developing new functionality, it enables her to build programs first by performing small tasks manually, as if she were the computer, then gradually make them more and more automated, until the desired functionality is fully programmed. When debugging, the REPL makes the execution of her programs feel tangible: it enables the programmer to rapidly reproduce the problem, observe its symptoms closely, then improvise experiments to rapidly narrow down the cause of the bug and iterate towards a fix.

I feel the same way about the Python REPL, the browser’s REPL, the Metabase REPL, and now also the Postgres REPL. Every function and every materialized view in the analytics system begins as a snippet of code pasted into the psql console (or Metabase). Iteration yields successive results instantly, and those results reflect live data. In How is a Programmer Like a Pathologist Gilad Bracha wrote:

A live program is dynamic; it changes over time; it is animated. A program is alive when it’s running. When you work on a program in a text editor, it is dead.

Tudor Girba amplified the point in a tweet.

In a database-backed system there’s no more direct way to interact with live data than to do so in the database. The Postgres REPL is, of course, a very sharp tool. Here are some ways to handle it carefully.

Find the right balance for tracking incremental change

In Working in a hybrid Metabase / Postgres code base I described how version-controlled files — for Postgres functions and views, and for Metabase questions — repose in GitHub and drive a concordance of docs. I sometimes write code snippets directly in psql or Metabase, but mainly compose in a “repository” (telling word!) where those snippets are “dead” artifacts in a text editor. They come to life when pasted into psql.

A knock on Smalltalk was that it didn’t play nicely with version control. If you focus on the REPL aspect, you could say the same of Python or JavaScript. In any such case there’s a balance to be struck between iterating at the speed of thought and tracking incremental change. Working solo I’ve been inclined toward a fairly granular commit history. In a team context I’d want to leave a chunkier history but still record the ongoing narrative somewhere.

Make it easy to understand the scope and effects of changes

The doc concordance has been the main way I visualize interdependent Postgres functions, Postgres views, and Metabase questions. In Working with interdependent Postgres functions and materialized views I mentioned Laurenz Albe’s Tracking View Dependencies in Postgres. I’ve adapted the view dependency tracker he develops there, and adapted related work from others to track function dependencies.

This tooling is still a work in progress, though. The concordance doesn’t yet include Postgres types, for example, nor the tables that are upstream from materialized views. My hypothetical VSCode extension would know about all the artifacts and react immediately when things change.

Make it easy to find and discard unwanted artifacts

Given a function or view named foo, I’ll often write and test a foo2 before transplanting changes back into foo. Because foo may often depend on bar and call baz I wind up also with bar2 and baz2. These artifacts hang around in Postgres until you delete them, which I try to do as I go along.

If foo2 is a memoized function (see this episode), it can be necessary to delete the set of views that it’s going to recreate. I find these with a query.

select 'drop materialized view ' || matviewname || ';' as drop_stmt
from pg_matviews
where matviewname ~* {{ pattern }}

That pattern might be question_and_answer_summary_for_group to find all views based on that function, or _6djxg2yk to find all views for a group, or even [^_]{8,8}$ to find all views made by memoized functions.

I haven’t yet automated the discovery or removal of stale artifacts and references to them. That’s another nice-to-have for the hypothetical IDE.
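In the meantime, here is a rough sketch of what that kind of cleanup could look like, wired together with psycopg2. It is not part of the toolset described in this series; the connection string is a placeholder and the regex pattern is supplied by the caller.

# A rough sketch, not part of the actual toolset described in this series. It
# reuses the drop-statement query shown above, parameterized with a
# caller-supplied regex, and executes whatever statements that query generates.
import psycopg2

def drop_matching_matviews(conn, pattern):
    """Generate and run DROP statements for materialized views whose names match pattern."""
    with conn.cursor() as cur:
        cur.execute(
            "select 'drop materialized view ' || matviewname || ';' "
            "from pg_matviews where matviewname ~* %s",
            (pattern,),
        )
        drop_stmts = [row[0] for row in cur.fetchall()]
    with conn.cursor() as cur:
        for stmt in drop_stmts:
            print(stmt)
            cur.execute(stmt)
    conn.commit()

# e.g. drop all the scratch views made by memoized functions:
# conn = psycopg2.connect("dbname=analytics")  # placeholder connection string
# drop_matching_matviews(conn, "[^_]{8,8}$")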

The Image of Postgres

I’ll give R0ml the last word on this topic.

This is the BYTE magazine cover from August of 1981. In the 70s and the 80s, programming languages had this sort of unique perspective that’s completely lost to history. The way it worked: a programming environment was a virtual machine image, it was a complete copy of your entire virtual machine memory and that was called the image. And then you loaded that up and it had all your functions and your data in it, and then you ran that for a while until you were sort of done and then you saved it out. And this wasn’t just Smalltalk, Lisp worked that way, APL worked that way, it was kind of like Docker only it wasn’t a separate thing because everything worked that way and so you didn’t worry very much about persistence because it was implied. If you had a programming environment it saved everything that you were doing in the programming environment, you didn’t have to separate that part out. A programming environment was a place where you kept all your data and business logic forever.

So then Postgres is kind of like Smalltalk only different.

What’s the difference? Well we took the UI out of Smalltalk and put it in the browser. The rest of it is the same, so really Postgres is an application delivery platform, just like we had back in the 80s.


1 https://blog.jonudell.net/2021/07/21/a-virtuous-cycle-for-analytics/
2 https://blog.jonudell.net/2021/07/24/pl-pgsql-versus-pl-python-heres-why-im-using-both-to-write-postgres-functions/
3 https://blog.jonudell.net/2021/07/27/working-with-postgres-types/
4 https://blog.jonudell.net/2021/08/05/the-tao-of-unicode-sparklines/
5 https://blog.jonudell.net/2021/08/13/pl-python-metaprogramming/
6 https://blog.jonudell.net/2021/08/15/postgres-and-json-finding-document-hotspots-part-1/
7 https://blog.jonudell.net/2021/08/19/postgres-set-returning-functions-that-self-memoize-as-materialized-views/
8 https://blog.jonudell.net/2021/08/21/postgres-functional-style/
9 https://blog.jonudell.net/2021/08/26/working-in-a-hybrid-metabase-postgres-code-base/
10 https://blog.jonudell.net/2021/08/28/working-with-interdependent-postgres-functions-and-materialized-views/
11 https://blog.jonudell.net/2021/09/05/metabase-as-a-lightweight-app-server/
12 https://blog.jonudell.net/2021/09/07/the-postgres-repl/


Moxy Tongue

Bitcoin: Founded By Sovereign Source Authority

No Permission Required.  Zero-Trust Infrastructure. A ghost has entered the machine of Society, there is no turning back.

No Permission Required. 

Zero-Trust Infrastructure.

A ghost has entered the machine of Society, there is no turning back.





MyDigitalFootprint

Plotting ROI, and other measures for gauging performance on Peak Paradox

The purpose of this post is to plot where some (there is way too many to do them all) different investment measures align on the Peak Paradox model. It is not to explain in detail what all the measures means and their corresponding strength and weaknesses.  This is a good article if you want the latter for the pure financial ones. Key ROI, Return on Investment. IRR Internal Rate of Re

The purpose of this post is to plot where some (there are way too many to do them all) different investment measures align on the Peak Paradox model. It is not to explain in detail what all the measures mean and their corresponding strengths and weaknesses. This is a good article if you want the latter for the pure financial ones.


Key

– ROI: Return on Investment
– IRR: Internal Rate of Return
– RI: Residual Income
– ROE: Return on Equity
– ROA: Return on Assets
– ROCE: Return on Capital Employed
– ROT: Return on Time
– IR: Impact Return
– SV: Social Value
– AR: Asset Returns
– PER: Portfolio Expected Return
– SROI: Social Return on Investment

The observation is that we have not developed, to anything like the same level of sophistication, the ability to measure or report on anything outside of finance, which we call “hard.” By calling other important aspects of a decision “soft,” we have framed them as less important and harder to agree on.


Monday, 06. September 2021

Hyperonomy Digital Identity Lab

Verifiable Credentials Guide for Developer: Call for Participation

Want to contribute to the World Wide Web Consortium (W3C) Developers Guide for Verifiable Credentials? W3C is an international community that develops open standards to ensure the long-term growth of the Web. A new W3C Community Note Work Item Proposal … Continue reading →

Want to contribute to the World Wide Web Consortium (W3C) Developers Guide for Verifiable Credentials?

W3C is an international community that develops open standards to ensure the long-term growth of the Web.

A new W3C Community Note Work Item Proposal entitled Verifiable Credentials Guide for Developers has been submitted and you can help create it.

I want to invite everyone interested in #DigitalIdentity, #DecentralizedIdentity, #VerifiableCredentials, #TrustOnTheInternet, and/or #SecureInternetStorage to join this key group of people who will be defining and creating the W3C Verifiable Credentials Guide for Developers.

Please contact me directly or post an email to public-credentials@w3.org

Links

– Draft W3C Community Note: https://t.co/veg349grR9
– Work Item: Verifiable Credentials Guide for Developers (VC-GUIDE-DEVELOPERS): https://t.co/LziMaeYskG
– GitHub: https://t.co/ptqaUA6IyC

Damien Bod

Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider

This article shows how to implement authorization in an ASP.NET Core application which uses Azure security groups for the user definitions and Azure B2C to authenticate. Microsoft Graph API is used to access the Azure group definitions for the signed in user. The client credentials flow is used to authorize the Graph API client with […]

This article shows how to implement authorization in an ASP.NET Core application which uses Azure security groups for the user definitions and Azure B2C to authenticate. Microsoft Graph API is used to access the Azure group definitions for the signed in user. The client credentials flow is used to authorize the Graph API client with an application scope definition. This is not optimal; the delegated user flows would be better. By granting the application rights for the defined Graph API scopes, you are implicitly making the application an administrator of the tenant for those scopes.

Code: https://github.com/damienbod/azureb2c-fed-azuread

Blogs in this series

– Securing ASP.NET Core Razor Pages, Web APIs with Azure B2C external and Azure AD internal identities
– Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider
– Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core
– Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

Two Azure AD security groups were created to demonstrate this feature with Azure B2C authentication. The users were added to the admin group and the user group as required. The ASP.NET Core application uses an ASP.NET Core Razor page which should only be used by admin users, i.e. people in the group. To validate this in the application, Microsoft Graph API is used to get groups for the signed in user and an ASP.NET Core handler, requirement and policy uses the group claim created from the Azure group to force the authorization.

The groups are defined in the same tenant as the Azure B2C.

A separate Azure App registration is used to define the application Graph API scopes. The User.Read.All application scope is used. In the demo, a client secret is used, but a certificate can also be used to access the API.

The Microsoft.Graph NuGet package is used as a client for Graph API.

<PackageReference Include="Microsoft.Graph" Version="4.4.0" />

The GraphApiClientService class implements the Microsoft Graph API client. A ClientSecretCredential instance is used as the AuthProvider, and the definitions for the client are read from the application configuration and the user secrets in development, or Azure Key Vault. The user-id from the name identifier claim is used to get the Azure groups for the signed-in user. The claim namespaces get added by the Microsoft client; this can be deactivated if required. I usually use the default claim names, but as this is an Azure IdP, I left the Microsoft defaults, which add the extra stuff to the claims. The Graph API GetMemberGroups method returns the group IDs for the signed-in identity.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace AzureB2CUI.Services
{
    public class GraphApiClientService
    {
        private readonly GraphServiceClient _graphServiceClient;

        public GraphApiClientService(IConfiguration configuration)
        {
            string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
            var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

            // Values from app registration
            var clientId = configuration.GetValue<string>("GraphApi:ClientId");
            var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

            var options = new TokenCredentialOptions
            {
                AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
            };

            // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
            var clientSecretCredential = new ClientSecretCredential(tenantId, clientId, clientSecret, options);

            _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
        }

        public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphApiUserMemberGroups(string userId)
        {
            var securityEnabledOnly = true;

            return await _graphServiceClient.Users[userId]
                .GetMemberGroups(securityEnabledOnly)
                .Request()
                .PostAsync()
                .ConfigureAwait(false);
        }
    }
}

The .default scope is used to access the Graph API using the client credential client.

"GraphApi": { "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "ClientId": "1d171c13-236d-4c2b-ac10-0325be2cbc74", "Scopes": ".default" //"ClientSecret": "--in-user-settings--" },

The user and the application are authenticated using Azure B2C and an Azure App registration. Using Azure B2C, only a certain set of claims can be returned which cannot be adapted easily. Once signed-in, we want to include the Azure security group claims in the claims principal. To do this, the Graph API is used to find the claims for the user and add the claims to the claims principal using the IClaimsTransformation implementation. This is where the GraphApiClientService is used.

using AzureB2CUI.Services;
using Microsoft.AspNetCore.Authentication;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace AzureB2CUI
{
    public class GraphApiClaimsTransformation : IClaimsTransformation
    {
        private GraphApiClientService _graphApiClientService;

        public GraphApiClaimsTransformation(GraphApiClientService graphApiClientService)
        {
            _graphApiClientService = graphApiClientService;
        }

        public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
        {
            ClaimsIdentity claimsIdentity = new ClaimsIdentity();
            var groupClaimType = "group";

            if (!principal.HasClaim(claim => claim.Type == groupClaimType))
            {
                var nameidentifierClaimType = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier";
                var nameidentifier = principal.Claims.FirstOrDefault(t => t.Type == nameidentifierClaimType);
                var groupIds = await _graphApiClientService.GetGraphApiUserMemberGroups(nameidentifier.Value);

                foreach (var groupId in groupIds.ToList())
                {
                    claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
                }
            }

            principal.AddIdentity(claimsIdentity);
            return principal;
        }
    }
}

The startup class adds the services and the authorization definitions for the ASP.NET Core Razor page application. The IsAdminHandlerUsingAzureGroups authorization handler is added and this is used to validate the Azure security group claim.

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<AdminApiService>();
    services.AddTransient<UserApiService>();
    services.AddScoped<GraphApiClientService>();
    services.AddTransient<IClaimsTransformation, GraphApiClaimsTransformation>();

    services.AddHttpClient();
    services.AddOptions();

    string[] initialScopes = Configuration.GetValue<string>("UserApiOne:ScopeForAccessToken")?.Split(' ');

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
        .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
        .AddInMemoryTokenCaches();

    services.AddRazorPages().AddMvcOptions(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).AddMicrosoftIdentityUI();

    services.AddSingleton<IAuthorizationHandler, IsAdminHandlerUsingAzureGroups>();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("IsAdminPolicy", policy =>
        {
            policy.Requirements.Add(new IsAdminRequirement());
        });
    });
}

The IsAdminHandlerUsingAzureGroups implements the AuthorizationHandler class with the IsAdminRequirement requirement. This handler checks for the administrator group definition from the Azure tenant.

using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.Configuration;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace AzureB2CUI.Authz
{
    public class IsAdminHandlerUsingAzureGroups : AuthorizationHandler<IsAdminRequirement>
    {
        private readonly string _adminGroupId;

        public IsAdminHandlerUsingAzureGroups(IConfiguration configuration)
        {
            _adminGroupId = configuration.GetValue<string>("AzureGroups:AdminGroupId");
        }

        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsAdminRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var claimIdentityprovider = context.User.Claims.FirstOrDefault(t =>
                t.Type == "group" && t.Value == _adminGroupId);

            if (claimIdentityprovider != null)
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }
    }
}

The policy for this can be used anywhere in the application.

[Authorize(Policy = "IsAdminPolicy")]
[AuthorizeForScopes(Scopes = new string[] { "https://b2cdamienbod.onmicrosoft.com/5f4e8bb1-3f4e-4fc6-b03c-12169e192cd7/access_as_user" })]
public class CallAdminApiModel : PageModel
{

If a user tries to call the Razor page which was created for admin users, then an Access denied is returned. Of course, in a real application, the menu for this would also be hidden if the user is not an admin and does not fulfil the policy.

If the user is an admin and a member of the Azure security group, the data and the Razor page can be opened and viewed.

By using Azure security groups, it is really easy for IT admins to add or remove users from the admin role. This can be easily managed using PowerShell scripts. It is a pity that Microsoft Graph API is required to use the Azure security groups when authenticating using Azure B2C. This is much simpler when authenticating using Azure AD.

Links

Managing Azure B2C users with Microsoft Graph API

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-local

https://docs.microsoft.com/en-us/azure/active-directory/

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/azure-ad-b2c

https://github.com/azure-ad-b2c/azureadb2ccommunity.io

https://github.com/azure-ad-b2c/samples

Sunday, 05. September 2021

Jon Udell

Metabase as a lightweight app server

In A virtuous cycle for analytics I said this about Metabase: It’s all nicely RESTful. Interactive elements that can parameterize queries, like search boxes and date pickers, map to URLs. Queries can emit URLs in order to compose themselves with other queries. I came to see this system as a kind of lightweight application server … Continue reading Metabase as a lightweight app server

In A virtuous cycle for analytics I said this about Metabase:

It’s all nicely RESTful. Interactive elements that can parameterize queries, like search boxes and date pickers, map to URLs. Queries can emit URLs in order to compose themselves with other queries. I came to see this system as a kind of lightweight application server in which to incubate an analytics capability that could later be expressed more richly.

Let’s explore that idea in more detail. Consider this query that finds groups created in the last week.

with group_create_days as (
  select to_char(created, 'YYYY-MM-DD') as day
  from "group"
  where created > now() - interval '1 week'
)
select day, count(*)
from group_create_days
group by day
order by day desc

A Metabase user can edit the query and change the interval to, say, 1 month, but there’s a nicer way to enable that. Terms in double squigglies are Metabase variables. When you type {{interval}} in the query editor, the Variables pane appears.

Here I’m defining the variable’s type as text and providing the default value 1 week. The query sent to Postgres will be the same as above. Note that this won’t work if you omit ::interval. Postgres complains: “ERROR: operator does not exist: timestamp with time zone – character varying.” That’s because Metabase doesn’t support variables of type interval as required for date subtraction. But if you cast the variable to type interval it’ll work.

That’s an improvement. A user of this Metabase question can now type 2 months or 1 year to vary the interval. But while Postgres’ interval syntax is fairly intuitive, this approach still requires people to make an intuitive leap. So here’s a version that eliminates the guessing.

The variable type is now Field Filter; the filtered field is the created column of the group table; the widget type is Relative Date; the default is Last Month. Choosing other intervals is now a point-and-click operation. It’s less flexible — 3 weeks is no longer an option — but friendlier.

Metabase commendably provides URLs that capture these choices. The default in this case is METABASE_SERVER/question/1060?interval=lastmonth. For the Last Year option it becomes interval=lastyear.

Because all Metabase questions that use variables work this way, the notion of Metabase as rudimentary app server expands to sets of interlinked questions. In Working in a hybrid Metabase / Postgres code base I showed the following example.

A Metabase question, #600, runs a query that selects columns from the view top_20_annotated_domains_last_week. It interpolates one of those columns, domain, into an URL that invokes Metabase question #985 and passes the domain as a parameter to that question. In the results for question #600, each row contains a link to a question that reports details about groups that annotated pages at that row’s domain.

This is really powerful stuff. Even without all the advanced capabilities I’ve been discussing in this series — pl/python functions, materialized views — you can do a lot more with the Metabase / Postgres combo than you might think.

For example, here’s an interesting idiom I’ve discovered. It’s often useful to interpolate a Metabase variable into a WHERE clause.

select * from dashboard_users where email = {{ email }}

You can make that into a fuzzy search using the case-insensitive regex-match operator ~*.

select * from dashboard_users where email ~* {{ email }}

That’ll find a single address regardless of case; you can also find all records matching, say, ucsc.edu. But it requires the user to type some value into the input box. Ideally this query won’t require any input. If none is given, it lists all addresses in the table. If there is input it does a fuzzy match on that input. Here’s a recipe for doing that. Tell Metabase that {{ email }} is a required variable, and set its default to any. Then, in the query, do this:

select * from dashboard_users where email ~* case when {{ email }} = 'any' then '' else {{ email}} end

In the default case the matching operator binds to the empty string, so it matches everything and the query returns all rows. For any other input the operator binds to a value that drives a fuzzy search.

This is all very nice, you may think, but even the simplest app server can write to the database as well as read from it, and Metabase can’t. It’s ultimately just a tool that you point at a data warehouse to SELECT data for display in tables and charts. You can’t INSERT or UPDATE or ALTER or DELETE or CALL anything.

Well, it turns out that you can. Here’s a Metabase question that adds a user to the table.

select add_dashboard_user( {{email}} )

How can this possibly work? If add_dashboard_user were a Postgres procedure you could CALL it from psql, but in this context you can only SELECT.

We’ve seen the solution in Postgres set-returning functions that self-memoize as materialized views. A Postgres function written in pl/python can import and use a Python function from a plpython_helpers module. That helper function can invoke psql to CALL a procedure. So this is possible.
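I won't reproduce the actual plpython_helpers code here, but the general shape of such a helper, hedged because the real details differ, is a Python function that shells out to psql. The database name below is a placeholder, and it assumes psql is on the PATH with ambient credentials.

# A minimal sketch of the idea, not the actual plpython_helpers module. It
# assumes psql is on the PATH and can reach the database with ambient
# credentials; the database and procedure names are placeholders.
import subprocess

def run_via_psql(statement):
    """Run a statement (e.g. a CALL) via psql, outside the SELECT-only context."""
    result = subprocess.run(
        ["psql", "-d", "analytics", "-c", statement],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# e.g. from a pl/python function backing: select add_dashboard_user( {{email}} )
# run_via_psql("call insert_dashboard_user('someone@example.com')")  # procedure name is a placeholder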

We’ve used Metabase for years. It provides a basic, general-purpose UX that’s deeply woven into the fabric of the company. Until recently we thought of it as a read-only system for analytics, so a lot of data management happens in spreadsheets that don’t connect to the data warehouse. It hadn’t occurred to me to leverage that same basic UX for data management too, and that’s going to be a game-changer. I always thought of Metabase as a lightweight app server. With some help from Postgres it turns out to be a more capable one than I thought.


1 https://blog.jonudell.net/2021/07/21/a-virtuous-cycle-for-analytics/
2 https://blog.jonudell.net/2021/07/24/pl-pgsql-versus-pl-python-heres-why-im-using-both-to-write-postgres-functions/
3 https://blog.jonudell.net/2021/07/27/working-with-postgres-types/
4 https://blog.jonudell.net/2021/08/05/the-tao-of-unicode-sparklines/
5 https://blog.jonudell.net/2021/08/13/pl-python-metaprogramming/
6 https://blog.jonudell.net/2021/08/15/postgres-and-json-finding-document-hotspots-part-1/
7 https://blog.jonudell.net/2021/08/19/postgres-set-returning-functions-that-self-memoize-as-materialized-views/
8 https://blog.jonudell.net/2021/08/21/postgres-functional-style/
9 https://blog.jonudell.net/2021/08/26/working-in-a-hybrid-metabase-postgres-code-base/
10 https://blog.jonudell.net/2021/08/28/working-with-interdependent-postgres-functions-and-materialized-views/
11 https://blog.jonudell.net/2021/09/05/metabase-as-a-lightweight-app-server/
12 https://blog.jonudell.net/2021/09/07/the-postgres-repl/

Saturday, 04. September 2021

reb00ted

Today's apps: context-free computing at its finest

Prompted by this exchange on Twitter: Functionality attached to the context-defining data object. Instead of an “app” silo. Now here’s a thought. :-) — Johannes Ernst (@Johannes_Ernst) September 4, 2021 Let me unpack this a bit. Let’s say that I’d like to send a message to some guy, let’s call him Tom for this example. The messaging app vendors go like: “Oh yes, we’ll make it super-easy fo

Prompted by this exchange on Twitter:

Functionality attached to the context-defining data object. Instead of an “app” silo. Now here’s a thought. :-)

— Johannes Ernst (@Johannes_Ernst) September 4, 2021

Let me unpack this a bit. Let’s say that I’d like to send a message to some guy, let’s call him Tom for this example.

The messaging app vendors go like: “Oh yes, we’ll make it super-easy for him (Johannes) to send a message to Tom, so we give him an extra short handle (say @tom) so he doesn’t need to type much, and also add a ‘reply’ button to previous messages so he won’t even have to type that.”

Which is myopic, because it completely misunderstands or ignores the context of the user. The user doesn’t think that way, at least most of the time.

As a user, I think the pattern is this:

“I need to tell (person) about (news) related to (common context), how do I best do that (i.e. which app)?”

Three concepts and a sub-concept:

Who do I want to tell. In Joshua’s example: the person(s) I’m about to meet with.

What’s the news I want to tell them. Here it is “I am running late”.

But this news only makes sense in the common context of “we agreed to meet today”. If the receiver doesn’t have that context, they will receive my message and it will be pointless and bewildering to them.

What’s the best way of conveying that news. It might be a text, a phone call, or pretty much anything. This item is the least important in this entire scenario, as long as the receiver gets the message.

So there are two primary “entry” points for this scenario:

It starts with a person. The user thinks “Tom, I need to tell Tom that I’ll be late”, and then frantically tries to find a way of contacting them while hurrying in the subway or driving really fast. (We have all been there.)

It starts with the shared context object. The user looks at the calendar and thinks “Oh darn, this meeting, it’s now, I’ll be late, I need to tell them”. (They might not even remember who exactly will be in the meeting, but then likely the calendar event has that info.)

The entry point is almost never: “how convenient, I have all people I’m about to meet with in the current window of the messaging app, they all know that whatever I’m going to type into that window next is about the meeting, and I can just simply say that I’ll be late.”

So … if we were to put the user, and their experience, at the center of messaging, messaging wouldn’t be an app. (Well, it might also be an app, but mostly it wouldn’t.)

Instead, the messaging would be a “system service” attached to the context objects in which I’d like to message. In Joshua’s example that is:

the person I’m about to meet with (but this only works if there is a single person; if there are a dozen this does not work).

the shared context about which I am conveying the news: here, the (hopefully shared) calendar event. Joshua’s original point.

Now in my experience, this is just an example for a more general pattern. For example:

meeting notes. In which meeting any meeting notes were taken is of course supremely important. Which is why most meeting minutes start with a title, a date/time and a list of attendees. Instead, they should be attached to the calendar event, just like the message thread. (Yes, including collaborative editing.)

And by “attached” I don’t mean: there’s a URL somewhere in the calendar dialog form that leads to a Google Doc. No, I mean that the calendar can show them, and insert the to-do items from the meeting, and future meetings from that document, and the other way around: that when you look at the meeting notes, you can see the calendar event, and find out it is a biweekly meeting, and that the notes for the other meetings are over there.

projects. Software development is the worst offender, as far as I know. If you and I and half a dozen other people work on a project to implement functionality X, that is our primary shared context. All communication and data creation should occur in that context. But instead we have our code in GitHub, our bugs in Confluence, our APIs … test cases … screen mockups … video calls … chats … in a gazillion different places, and one of these days I’m sure somebody is going to prove that half of software budgets in many places are consumed by context switching between tools that blissfully ignore the fact that the user does not think the way a vendor does. (And the other half by trying to make the so-called “integrations” between services work.)

There are many more examples. (In a previous life, I ran around with the concept of “situational computing” – which takes context-aware computing to something much more dynamic and extreme; needless to say it was decades before its time. But now with AR/VR-stuff coming, it will probably become part of the mainstream soon.)

The 100 dollar question is of course: if this is the right thing for users, why aren’t apps doing that?

Joshua brought up OpenDoc in a subsequent post. Yep, there is a similarity here, and apps don’t do this kind of thing for the same reason OpenDoc failed. (It didn’t fail because it was slow as molasses and Steve Jobs hated it.)

OpenDoc failed because it would have disintermediated big and powerful companies such as Adobe, who would have had to sell 100 little OpenDoc components, instead of one gigantic monolith containing those 100 components as a take-it-or-leave-it package at “enterprise” pricing. With no adoption by Adobe, and with Adobe feeling that Apple had proactively attacked its business model, it would have killed Apple right afterwards.

But having OpenDoc would have been sooo much better for users. We would have gotten much more innovation, and yes, lower prices. But software vendors, like all businesses, primarily do what is right for them, not their customers.

Which is why we get silos everywhere we look.

If we wanted to change that situation, about OpenDoc-like things, or messaging attached, in context, to things like calendar events, we have to change the economic situation in which the important vendors find themselves.

Plus of course the entire technology stack, because if all you know is how to ship an int main(argv, argc) on your operating system, something componentized and pluggable and user-centric and context-centric is never able to emerge, even if you want to.

(*) The title is meant sarcastically, in case you weren’t sure.

Friday, 03. September 2021

Jon Udell

Notes for an annotation SDK

While helping Hypothesis find its way to ed-tech it was my great privilege to explore ways of adapting annotation to other domains including bioscience, journalism, and scholarly publishing. Working across these domains showed me that annotation isn’t just an app you do or don’t adopt. It’s also a service you’d like to be available in … Continue reading Notes for an annotation SDK

While helping Hypothesis find its way to ed-tech it was my great privilege to explore ways of adapting annotation to other domains including bioscience, journalism, and scholarly publishing. Working across these domains showed me that annotation isn’t just an app you do or don’t adopt. It’s also a service you’d like to be available in every document workflow that connects people to selections in documents.

In my talk Weaving the Annotated Web I showcased four such services: Science in the Classroom, The Digital Polarization Project, SciBot, and ClaimChart. Others include tools to evaluate credibility signals, or review claims, in news stories.

As I worked through these and other scenarios, I accreted a set of tools for enabling any annotation-aware interaction in any document-oriented workflow. I’ve wanted to package these as a coherent software development kit; that hasn’t happened yet, but here are some of the ingredients that belong in such an SDK.

Creating an annotation from a selection in a document

Two core operations lie at the heart of any annotation system: creating a note that will bind to a selection in a document, and binding (anchoring) that note to its place in the document. A tool that creates an annotation reacts to a selection in a document by forming one or more selectors that describe the selection.

The most important selector is TextQuoteSelector. If I visit http://www.example.com and select the phrase “illustrative examples” and then use Hypothesis to annotate that selection, the payload sent from the client to the server includes this construct.

{ "type": "TextQuoteSelector", "exact": "illustrative examples", "prefix": "n\n This domain is for use in ", "suffix": " in documents. You may use this\n" }

The Hypothesis client formerly used an NPM module, dom-anchor-text-quote, to derive that info from a selection. It no longer uses that module, and the equivalent code that it does use isn’t separately available. But annotations created using TextQuoteSelectors formed by dom-anchor-text-quote interoperate with those created using the Hypothesis client, and I don’t expect that will change since Hypothesis needs to remain backwards-compatible with itself.

You’ll find something like TextQuoteSelector in any annotation system. It’s formally defined by the W3C here. In the vast majority of cases this is all you need to describe the selection to which an annotation should anchor.

There are, however, cases where TextQuoteSelector won’t suffice. Consider a document that repeats the same passage three times. Given a short selection in the first of those passages, how can a system know that an annotation should anchor to that one, and not the second or third? Another selector, TextPositionSelector (https://www.npmjs.com/package/dom-anchor-text-position), enables a system to know which passage contains the selection.

{
  "type": "TextPositionSelector",
  "start": 51,
  "end": 72
}

It records the start and end of the selection in the visible text of an HTML document. Here’s the HTML source of that web page.

<div>
    <h1>Example Domain</h1>
    <p>This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.</p>
    <p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>

Here is the visible text to which the TextQuoteSelector refers.

\n\n Example Domain\n This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.\n More information…\n\n\n\n

The positions recorded by a TextPositionSelector can change for a couple of reasons. If the document is altered, it’s obvious that an annotation’s start and stop numbers might change. Less obviously that can happen even if the document’s text isn’t altered. A news website, for example, may inject different kinds of advertising-related text content from one page load to the next. In that case the positions for two consecutive Hypothesis annotations made on the same selection can differ. So while TextPositionSelector can resolve ambiguity, and provide hints to an annotation system about where to look for matches, the foundation is ultimately TextQuoteSelector.

If you try the first example in the README at https://github.com/judell/TextQuoteAndPosition, you can form your own TextQuoteSelector and TextPositionSelector from a selection in a web page. That repo exists only as a wrapper around the set of modules — dom-anchor-text-quote, dom-anchor-text-position, and wrap-range-text — needed to create and anchor annotations.

Building on these ingredients, HelloWorldAnnotated illustrates a common pattern.

1. Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
2. Lead a user through an interaction that influences the content of that annotation.
3. Post the annotation.

Here is an example of such an interaction. It’s a content-labeling scenario in which a user rates the emotional affect of a selection. This is the kind of thing that can be done with the stock Hypothesis client, but awkwardly because users must reliably add tags like WeakNegative or StrongPositive to represent their ratings. The app prompts for those tags to ensure consistent use of them.

Although the annotation is created by a standalone app, the Hypothesis client can anchor it, display it, and even edit it.

And the Hypothesis service can search for sets of annotations that match the tags WeakNegative or StrongPositive.

There’s powerful synergy at work here. If your annotation scenario requires controlled tags, or a prescribed workflow, you might want to adapt the Hypothesis client to do those things. But it can be easier to create a standalone app that does exactly what you need, while producing annotations that interoperate with the Hypothesis system.
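For a sense of how small that standalone-app surface can be, here is a hedged sketch of posting such an annotation directly to the Hypothesis API from Python. It is not the hlib library discussed below; the API token, group, tag, and selector values are placeholders that mirror the TextQuoteSelector example above.

# A sketch, not the hlib library used elsewhere in this post. It assumes a
# Hypothesis API token in the HYPOTHESIS_TOKEN environment variable; the uri,
# selector values, group, and tag are illustrative placeholders.
import os
import requests

token = os.environ["HYPOTHESIS_TOKEN"]

payload = {
    "uri": "http://www.example.com/",
    "text": "Rating: weak negative",
    "tags": ["WeakNegative"],
    "group": "__world__",  # or a private group id
    "target": [
        {
            "source": "http://www.example.com/",
            "selector": [
                {
                    "type": "TextQuoteSelector",
                    "exact": "illustrative examples",
                    "prefix": "n\n    This domain is for use in ",
                    "suffix": " in documents. You may use this\n",
                }
            ],
        }
    ],
}

r = requests.post(
    "https://api.hypothes.is/api/annotations",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
r.raise_for_status()
print(r.json()["id"])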

Anchoring an annotation to its place in a document

Using this same set of modules, a tool or system can retrieve an annotation from a web service and anchor it to a document in the place where it belongs. You can try the second example in the README at https://github.com/judell/TextQuoteAndPosition to see how this works.

For a real-world demonstration of this technique, see Science in the Classroom. It’s a project sponsored by The American Association for the Advancement of Science. Graduate students annotate research papers selected from the Science family of journals so that younger students can learn about the terminology, methods, and outcomes of scientific research.

Pre-Hypothesis, annotations on these papers were displayed using Learning Lens, a viewer that color-codes them by category.

Nothing about Learning Lens changed when Hypothesis came into the picture, it just provided a better way to record the annotations. Originally that was done as it’s often done in the absence of a formal way to describe annotation targets, by passing notes like: “highlight the word ‘proximodistal’ in the first paragraph of the abstract, and attach this note to it.” This kind of thing happens a lot, and wherever it does there’s an opportunity to adopt a more rigorous approach. Nowadays at Science in the Classroom the annotators use Hypothesis to describe where notes should anchor, as well as what they should say. When an annotated page loads it searches Hypothesis for annotations that target the page, and inserts them using the same format that’s always been used to drive the Learning Lens. Tags assigned by annotators align with Learning Lens categories. The search looks only for notes from designated annotators, so nothing unwanted will appear.

An annotation-powered survey

The Credibility Coalition is “a research community that fosters collaborative approaches to understanding the veracity, quality and credibility of online information.” We worked with them on a project to test a set of signals that bear on the credibility of news stories. Examples of such signals include:

– Title Representativeness (Does the title of an article accurately reflect its content?)
– Sources (Does the article cite sources?)
– Acknowledgement of uncertainty (Does the author acknowledge uncertainty, or the possibility things might be otherwise?)

Volunteers were asked these questions for each of a set of news stories. Many of the questions were yes/no or multiple choice and could have been handled by any survey tool. But some were different. What does “acknowledgement of uncertainty” look like? You know it when you see it, and you can point to examples. But how can a survey tool solicit answers that refer to selections in documents, and record their locations and contexts?

The answer was to create a survey tool that enabled respondents to answer such questions by highlighting one or more selections. Like the HelloWorldAnnotated example above, this was a bespoke client that guided the user through a prescribed workflow. In this case, that workflow was more complex. And because it was defined in a declarative way, the same app can be used for any survey that requires people to provide answers that refer to selections in web documents.

A JavaScript wrapper for the Hypothesis API

The HelloWorldAnnotated example uses functions from a library, hlib, to post an annotation to the Hypothesis service. That library includes functions for searching and posting annotations using the Hypothesis API. It also includes support for interaction patterns common to annotation apps, most of which occur in facet, a standalone tool that searches, displays, and exports sets of annotations. Supported interactions include:

– Authenticating with an API token

– Creating a picklist of groups accessible to the authenticated user

– Assembling and displaying conversation threads

– Parsing annotations

– Editing annotations

– Editing tags

In addition to facet, other tools based on this library include CopyAnnotations and TagRename.

A Python wrapper for the Hypothesis API

If you’re working in Python, hypothesis-api is an alternative API wrapper that supports searching for, posting, and parsing annotations.

Notifications

If you’re a publisher who embeds Hypothesis on your site, you can use a wildcard search to find annotations. But it would be helpful to be notified when annotations are posted. h_notify is a tool that uses the Hypothesis API to watch for annotations on individual or wildcard URLs, or from particular users, or in a specified group, or with a specified tag.

When an h_notify-based watcher finds notes in any of these ways, it can send alerts to a Slack channel, or to an email address, or add items to an RSS feed.
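h_notify takes care of scheduling, deduplication, and the various alert channels, but the core loop is simple enough to sketch. This is not h_notify's actual code, just the shape of the idea; the tag and the Slack webhook URL are placeholders.

# Not h_notify's actual implementation, just a sketch of the idea. The search
# parameters and the Slack incoming-webhook URL are placeholders.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def check_for_annotations(tag, seen_ids):
    """Search Hypothesis for recent annotations with a tag and alert Slack about new ones."""
    r = requests.get(
        "https://api.hypothes.is/api/search",
        params={"tag": tag, "limit": 50, "sort": "created", "order": "desc"},
    )
    r.raise_for_status()
    for row in r.json()["rows"]:
        if row["id"] not in seen_ids:
            seen_ids.add(row["id"])
            text = f"New annotation by {row['user']} on {row['uri']}"
            requests.post(SLACK_WEBHOOK, json={"text": text})

# e.g. run on a schedule: seen = set(); check_for_annotations("WeakNegative", seen)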

At Hypothesis we mainly rely on the Slack option. In this example, user nirgendheim highlighted the word “interacting” in a page on the Hypothesis website.

The watcher sent this notice to our #website channel in Slack.

A member of the support team (Hypothesis handle mdiroberts) saw it there and responded to nirgendheim as shown above. How did nirgendheim know that mdiroberts had responded? The core Hypothesis system sends you an email when somebody replies to one of your notes. h_notify is for bulk monitoring and alerting.

A tiny Hypothesis server

People sometimes ask about connecting the Hypothesis client to an alternate server in order to retain complete control over their data. It’s doable, you can follow the instructions here to build and run your own server, and some people and organizations do that. Depending on need, though, that can entail more effort, and more moving parts, than may be warranted.

Suppose for example you’re part of a team of investigative journalists annotating web pages for a controversial story, or a team of analysts sharing confidential notes on web-based financial reports. The documents you’re annotating are public, but the notes you’re taking in a Hypothesis private group are so sensitive that you’d rather not keep them in the Hypothesis service. You’d ideally like to spin up a minimal server for that purpose: small, simple, and easy to manage within your own infrastructure.

Here’s a proof of concept. This tiny server clocks in at just 145 lines of Python with very few dependencies. It uses Python’s batteries-included SQLite module for annotation storage. The web framework is Pyramid only because that’s what I’m familiar with, but could as easily be Flask, the ultra-light framework typically used for this sort of thing.

A tiny app wrapped around those ingredients is all you need to receive JSON payloads from a Hypothesis client, and return JSON payloads when the client searches for annotations to anchor to a page.
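To give a flavor of how little is required, here is a hedged sketch of that kind of endpoint pair in Flask; the actual proof of concept uses Pyramid, and this sketch is not its code. It stores raw annotation JSON in SQLite and answers searches with the total/rows shape the client expects, while omitting plenty of details (auth, update, delete, pagination).

# A sketch in Flask of the idea described above; the real proof of concept uses
# Pyramid. Only create and search are shown; auth, update, and delete are omitted.
import json
import sqlite3
import uuid

from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "annotations.db"

def db():
    conn = sqlite3.connect(DB)
    conn.execute("create table if not exists annos (id text primary key, uri text, body text)")
    return conn

@app.post("/api/annotations")
def create():
    anno = request.get_json()
    anno["id"] = str(uuid.uuid4())
    with db() as conn:
        conn.execute(
            "insert into annos values (?, ?, ?)",
            (anno["id"], anno.get("uri", ""), json.dumps(anno)),
        )
    return jsonify(anno)

@app.get("/api/search")
def search():
    uri = request.args.get("uri", "")
    with db() as conn:
        rows = [json.loads(r[0]) for r in conn.execute("select body from annos where uri = ?", (uri,))]
    return jsonify({"total": len(rows), "rows": rows})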

The service is dockerized and easy to deploy. To test it I used the fly.io speedrun to create an instance at https://summer-feather-9970.fly.dev. Then I made the handful of small tweaks to the Hypothesis client shown in client-patches.txt. My method for doing that, typical for quick proofs of concept that vary the Hypothesis client in some small way, goes like this:

1. Clone the Hypothesis client.
2. Edit gulpfile.js to say const IS_PRODUCTION_BUILD = false. This turns off minification so it’s possible to read and debug the client code.
3. Follow the instructions to run the client from a browser extension.
4. After establishing a link between the client repo and browser-extension repo, as per those instructions, use this build command — make build SETTINGS_FILE=settings/chrome-prod.json — to create a browser extension that authenticates to the Hypothesis production service.
5. In a Chromium browser (e.g. Chrome or Edge or Brave) use chrome://extensions, click Load unpacked, and point to the browser-extension/build directory where you built the extension.

This is the easiest way to create a Hypothesis client in which to try quick experiments. There are tons of source files in the repos, but just a handful of bundles and loose files in the built extension. You can run the extension, search and poke around in those bundles, set breakpoints, make changes, and see immediate results.

In this case I only made the changes shown in client-patches.txt:

• In options/index.html I added an input box to name an alternate server.
• In options/options.js I sync that value to the cloud and also to the browser’s localStorage.
• In the extension bundle I check localStorage for an alternate server and, if present, modify the API request used by the extension to show the number of notes found for a page.
• In the sidebar bundle I check localStorage for an alternate server and, if present, modify the API requests used to search for, create, update, and delete annotations.

I don’t recommend this cowboy approach for anything real. If I actually wanted to use this tweaked client I’d create branches of the client and the browser-extension, and transfer the changes into the source files where they belong. If I wanted to share it with a close-knit team I’d zip up the extension so colleagues could unzip and sideload it. If I wanted to share more broadly I could upload the extension to the Chrome web store. I’ve done all these things, and have found that it’s feasible, without forking Hypothesis, to maintain branches that carry small but strategic changes like this one. But when I’m aiming for a quick proof of concept, I’m happy to be a cowboy.

In any event, here’s the proof. With the tiny server deployed to summer-feather-9970.fly.dev, I poked that address into the tweaked client.

And sure enough, I could search for, create, reply to, update, and delete annotations using that 145-line SQLite-backed server.

The client still authenticates to Hypothesis in the usual way, and behaves normally unless you specify an alternate server. In that case, the server knows nothing about Hypothesis private groups. The client sees it as the Hypothesis public layer, but it’s really the moral equivalent of a private group. Others will see it only if they’re running the same tweaked extension and pointing to the same server. You could probably go quite far with SQLite but, of course, it’s easy to see how you’d swap it out for a more robust database like Postgres.

Signposts

I think of these examples as signposts pointing to a coherent SDK for weaving annotation into any document workflow. They show that it’s feasible to decouple and recombine the core operations: creating an annotation based on a selection in a document, and anchoring an annotation to its place in a document. Why decouple? Reasons are as diverse as the document workflows we engage in. The stock Hypothesis system beautifully supports a wide range of scenarios. Sometimes it’s helpful to replace or augment Hypothesis with a custom app that provides a guided experience for annotators and/or an alternative display for readers. The annotation SDK I envision will make it straightforward for developers to build solutions that leverage the full spectrum of possibility.

End notes

https://www.infoworld.com/article/3263344/how-web-annotation-will-transform-content-management.html

Weaving the annotated web


@_Nat Zone

I will give a keynote speech at EIC in Munich

The week after next, on September 13, near Munich, Germany, in Unter… The post "I will give a keynote speech at EIC in Munich" first appeared on @_Nat Zone.

The week after next, on September 13, I will be traveling to give a keynote speech at the European Identity and Cloud Conference 2021, held in Unterschleissheim near Munich, Germany. It will be my first trip abroad in a year and a half.

This time, I will jointly announce the Global Assured Identity Network (GAIN), which I have been working on recently, together with Gottfried Leibbrandt, the former CEO of SWIFT.

As for what exactly GAIN is, please look forward to the day of the talk.

Introducing The Global Assured Identity Network (GAIN)
Keynote
Monday, September 13, 2021 19:20—19:40 (CET)

On a side note: by the way, could we please do away with the two-week quarantine already? That country is seeing only about a third as many new COVID-19 cases as Tokyo...

The post "I will give a keynote speech at EIC in Munich" first appeared on @_Nat Zone.

Thursday, 02. September 2021

MyDigitalFootprint

Is it better to prevent or correct?

This link gives you access to all the articles and archives for </Hello CDO> “Prevention or Correction” is something society has wrestled with for a very long time. This article focuses on why our experience of “prevention or correction” ideas frames the CDO’s responsibilities and explores a linkage to a company’s approach to data. Almost irrespective of where you reside, we live with
This link gives you access to all the articles and archives for </Hello CDO>

“Prevention or Correction” is something society has wrestled with for a very long time. This article focuses on why our experience of “prevention or correction” ideas frames the CDO’s responsibilities and explores a linkage to a company’s approach to data.


Almost irrespective of where you reside, we live with police, penal, and political systems that cannot fully agree on preventing or correcting. It is not that we disagree on the "why"; it is the "how" that divides us. I am a child of an age when left-handedness was still seen as something to correct, so we have made some progress.
A fundamental issue is that prevention is the better approach in principle, but if you prevent something, it never occurs. We are then left with the dilemma, "did you prevent it, or was it not going to occur anyway?" The finance team then asks, "Have we wasted resources on something that would not have happened?"

When something we don’t like does occur, we jump into action because we can correct it. We prioritise, allocate resources and measure them. We (humanity) appear to prefer a problem we can solve (touch, feel, see) and can measure, rather than one we can prevent. A proportion of crime is driven by the poor economic situation of a population with no choice. Yet we keep that same population in the same economic conditions because our limited resources are committed to correction (control). It is far messier and more complex than that, and we need both, but prevention is not an easy route. Just think about our current global pandemic, climate change or sustainability. Prevention was a possibility in each case, but we only kick into action now that correction is needed.

In our new digital and data world, we are facing the same issues of prevention vs correction. Should we prevent data issues or correct data issues, and who owns the problem?

In the data industry right now, we correct. Just like the criminal justice system, we have allocated all our budget to correction, so we don’t have time or resources for prevention. I would be rich if I had a dollar for every time I hear, "We need results, Fish, not an x*x*x*x data philosophy."

For anyone who follows my views and beliefs, I advocate data quality founded on lineage and provenance (prevention). I am not a fan of the massive industry that has been built up to correct data (correction). I see it as a waste on many levels, but FUD sells to a naive buyer. I am a supporter of having a presence at the source of data to guarantee attestation and assign rights. I cannot get my head around the massive budgets set aside to correct "wrong" data. We believe that data quality means knowing the data is wrong and thinking we can correct it! We have no idea if the correction is correct, and the updated data still lacks attestation. We measure this madness and assign KPIs to improve our corrective powers. In this case, because it is data and not humans, prevention is the preferred solution, as neither a cure nor a correction works at any economic level that can be justified, unless you are a provider of the service or one of the capital providers. But as already mentioned, prevention is too hard to justify. The same analogy is a recurring theme in the buy (outsource) vs build debate for the new platform, but that is another post.
Unpacking this exposes and explores some motivations. We know that the CDO job emerged in 2002 and, some 20 years on, it is still in its infancy. Tenures are still short (less than three years), and there is a lack of skills and experience because this data and digital thing is all new compared to marketing, finance, or accounting. As I have written before, our job descriptions for the CDO remain poorly defined, with mandates too broad and allocated resources too limited. In many articles at </Hello CDO> I focus on the conflicts CDOs have because of the mandates we are given.

Whilst mandates are too broad, thankfully security and privacy are becoming new roles with their own mandates. The CDO is still being allocated a data transformation mandate but being asked to correct the data whilst doing it. Not surprisingly, most data projects fail because the board remains fixated on the idea that the data we have is of value and that we should allocate all our resources to correcting and securing it. A bit like the house built on sand, we endlessly commit resources to underpin a house without foundations, because it is there, rather than moving to more secure ground and building the right house. Prevention or correction?

All CDOs face the classic problem of being asked to solve the world’s debt and famine crises with no budget, no resources, and a deadline of yesterday. Solve the data quality problem by correcting the waste at the end. Because of this commitment to correct data, we find we are trapped. "We should just finish the job as we have come so far," says the finance team; "prevention is like starting all over again. It will be too expensive, wreck budgets, and we will miss our bonus hurdle because it means a total restart." The budget is too big, so let’s just pour some more foundations into the house built on sand.

One of my favourite sayings is "the reason it is called a shortcut is because it is a short cut, and it misses something out." I often use it when we boil a decision down to an ROI number, believing that all the information needed can be presented in a single, one-dimensional number. The correction ideal will always win when we use shortcuts, especially ROI.

Prevention is a hard sell. Correction is a simple sell. Who benefits from an endless correction policy? Probably the CDO! We get to look super busy, it is easy to get the commitment, and no one at the board or senior leadership team will argue to do anything else. It might take three years to see that the transformation has not happened, and there is plenty of demand for CDOs. So why would a CDO want to support prevention? And why is the CDO role so in question?

Prevention takes time, is complex, and you may not see results during your tenure. I often reflect on what my CDO legacy will be. Correction is instant, looks busy and gets results that you can be measured on. Correction means there is always work to be done. Prevention means you will eventually be out of a job. Since prevention cannot be measured and is hard to justify with an ROI calculation, maybe the CEO needs to focus on measuring the success of analytics?
Note to the CEO: Data is complex, and like all expert discipline areas, you will seek advice, opinion and counsel from various sources to help form a balanced view. Data quality is a thorny one, as most of those around you will inform you of the benefits of correction over prevention. Correction wins, and it is unlikely you will hear a balanced view arguing for prevention.

Perhaps it is worth looking at the CDO job description, focusing on what the KPIs are for and how you shift the focus to the outcomes of the analysis. Improving analysis and outcomes demands better data quality, and correction can only get you so far. You get prevention by the back door.

The Dingle Group

Kilt and Social KYC

In July Vienna Digital Identity hosted a fireside chat with Ingo Ruebe and Mark Cachia on KILT Protocol's role in providing digital identity in the Polkadot ecosystem.

Ingo and Mark discussed some of the history of Kilt and how it decided to work with the Polkadot ecosystem, and shared Kilt’s new Social KYC offering.

To watch a recording of the event please check out the link: https://vimeo.com/580422623

My apologies to Ingo and Mark for being so tardy in getting this video up and available. (Michael Shea - Sept 2021)

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology audiences on the new opportunities that arise with a high assurance digital identity created by the reduction of risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Identity Praxis, Inc.

ICO’s Child Protection Rules Take Effect Sept. 2, 2021. Are You Ready?

The UK Information Commission’s (ICO) Children’s Code, officially known as the“Age Appropriate Design Code: a code of practice for online services,” after a year grace period, goes into effect Thursday, Sept. 2, 2021. The code, which falls under section 125(1)(b) of the UK Data Protection Act 2018 (the Act), looks to protect UK children, i.e., people […] The post ICO’s Child Protection Rule

The UK Information Commissioner’s Office (ICO) Children’s Code, officially known as the “Age Appropriate Design Code: a code of practice for online services,” goes into effect Thursday, Sept. 2, 2021, after a one-year grace period. The code, which falls under section 125(1)(b) of the UK Data Protection Act 2018 (the Act), looks to protect UK children, i.e., people under the age of eighteen (18).

Are you ready? I hope so; this code applies to any business, in and out of the UK, that provides digital services to UK children. And, it is not the only one of its kind around the world.

What You Need to Know

In the digital age, a person’s digital footprint is first established months before they’re born, grows exponentially throughout their life, and carries on after their death. Kids are especially vulnerable to being influenced by digital services; moreover, they are more likely to fall victim to cybercrime. For instance, a 2011 Carnegie Mellon CyLab study found that children were 51 times more likely than adults to have their Social Security number used by someone else.

What is it for?

The code is designed to ensure that service providers of apps, programs, toys, or any other devices that collect and process children’s data “are appropriate for use by, and meet the development needs of, children.” It calls for 15 independent and interdependent legal, service design, and data processing principles and standards to be followed. Specifically, as called out in the code, these are:

• Best interests of the child
• Data protection impact assessments
• Age appropriate application
• Transparency
• Detrimental use of data
• Policies and community standards
• Default settings
• Data minimization
• Data sharing
• Geolocation
• Parental controls
• Profiling
• Nudge techniques
• Connected toys and devices
• Online tools

Who must comply? Risk of non-compliance

Information society services (ISS) that cater to UK children under the age of 18 or whose services are likely to be accessed by children must adhere to the code or risk ICO public assessment and enforcement notices, warnings, reprimands, and penalty notices (aka administrative fines). Serious code breaches may lead to fines of up to €20 million (or £17.5 million when the UK GDPR comes into effect) or 4% of the provider’s annual worldwide turnover, whichever is higher.

What you need to do

Ensuring the digital future and safety of children and vulnerable adults (people over 18 who cannot meet their own needs or seek help without assistance) is a founding principle of a healthy society and of running a sustainable business. There are several recommended steps you can take to get into and maintain alignment with the code:

1. Consider the likelihood that a child (or vulnerable adult) might use your service. The liability is on the service provider to determine the likelihood that a child will use its service. You should conduct user testing, surveys, market research (inc. competitive analysis), and professional and academic literature reviews.

2. Document your data flows, i.e. conduct a data protection impact assessment (DPIA). Your DPIA should consist of data and systems flow diagrams and detailed descriptions of those systems (inc. services, processes, and interfaces), all the data flowing through them, and how that data is handled. The flow diagram should illustrate each engagement swim lane (e.g., individual, client, company, third-party) and the direction of data flow. The supporting documentation should define each data element and clearly spell out how it will be collected, used and managed. Take it from me and my firsthand experience: the DPIA lens is an invaluable tool for strategic product development. I highly recommend that you not look at the DPIA process as merely a legal necessity but rather as a valuable framework for learning about and assessing how your products and services work, exactly what they do, and why. Performed with the right lens, the DPIA can be fertile ground for creative inspiration and innovation.

3. Make it a team sport: effective data management is a company-wide, multi-disciplinary activity. It is important to ensure that all key stakeholders, not just legal, IT, security, and compliance, but also marketing, user experience, customer experience, design, support, product, sales, and third-party compliance partners (experts that can help you and your team succeed), play a role in ensuring your products and services do not just meet legal requirements but exceed people’s expectations.

But There Is More

The above steps are extremely useful; however, there are two more considerations that should not be neglected: 1) things are just getting started, and 2) people’s sentiment, i.e. the opportunity to differentiate.

First, Gartner estimates that 10% of the world’s population is currently protected under people-centric regulation, i.e. regulations like Europe’s GDPR, California’s CCPA, Brazil’s LGPD, and China’s data protection law, which takes effect Nov. 1, 2021. By 2023, Gartner estimates this number will rise to 65%. Moreover, keep in mind that it will not just be omnibus rules that take effect; sector-specific rules will apply as well. For example, in the United States, the Federal Trade Commission is reevaluating its child protection laws (COPPA) and the U.S. Department of Education will more than likely be updating the Family Educational Rights and Privacy Act (FERPA). Moreover, there are state-specific regulations similar to the CCPA being enacted. For instance, in July 2021 Colorado enacted its own people-centric regulation, the Colorado Privacy Act.

Second, globally, as evidenced by the last seven years of the MEF Global Consumer Trust studies, people are waking up to industry data practices. To say they’re unhappy about them is an understatement. People are connected, they’re concerned, and they want control of their data. The problem is, they’re not exactly sure how to go about it.

There is more to win than just staying on the right side of the law. There is an opportunity for companies to go beyond the law, to recognize that digital privacy, the controls and flows of one’s personal data, should be treated as sacred as one’s physical privacy. Privacy should not be a luxury good obtained only by a select minority; it’s a human right. All three major societal constituents (individuals, private sector organizations, and public sector institutions) need to play their part. There is an opportunity for each to weigh in on this debate, especially public sector players who can differentiate themselves by actively and publicly providing people not just with the utility of the company’s service but with tools and education that help people proactively enact their data rights and to secure and gain agency over their digital footprint, not just now but throughout their entire life. Like global warming, if we each do our part, we can get data back under control and achieve a healthy equilibrium throughout the world’s markets.

Useful Resources & Tools

• ICO’s Children’s Code of 2020, legal, design, and service principles and standards for protecting children’s data.
• ICO DPIA Template, note: does not include flow diagram examples, which is a miss.
• UNICEF Better Business for Children, industry-specific guidance and tools to protect children’s rights.
• COPPA Safe Harbor Program, U.S. self-regulatory program for the protection of children’s data (6 companies have been certified).
• Ada for Trust, an art exhibit presented at MyData 2019 detailing 6 key “digital” life moments.

The MEF Personal Data & Identity (PD&I) Working Group will be holding its next meeting on Sept. 20, 2021 at 7:00 AM PDT/2:00 PM GMT. The MEF welcomes marketing leaders looking to gather insight, interact with fellow leaders, and make an impact on the industry to join the MEF and the PD&I working group’s efforts.

The post ICO’s Child Protection Rules Take Effect Sept. 2, 2021. Are You Ready? appeared first on Identity Praxis, Inc..

Wednesday, 01. September 2021

The Dingle Group

Principles or Cult - An Irreverent Discussion on the Principles of SSI

At the end of 2020, the Sovrin Foundation published the 12 Principles of SSI, https://sovrin.org/principles-of-ssi/ . Building on earlier work from Kim Cameron and Christopher Allen, these principles lay out what must be the foundational principles of any self-sovereign identity system. The evolution of the Principles of SSI came about through the need to differentiate what is ‘true’ SSI

At the end of 2020, the Sovrin Foundation published the 12 Principles of SSI, https://sovrin.org/principles-of-ssi/ . Building on earlier work from Kim Cameron and Christopher Allen, these principles lay out what must be the foundational principles of any self-sovereign identity system. The evolution of the Principles of SSI came about through the need to differentiate what is ‘true’ SSI from marketing forces twisting the concept. This market-driven motivator can bring cultish overtones to the process.

With this event we had a bit of fun in what is otherwise 'serious' business.

A few themes that come out of the discussion:

- The position of the holder of the credential: when does the holder become the source of truth and the key to value creation? In most current business value conversations the holder appears to be incidental; the focus is on how to incent the verifiers and issuers, with little regard to the position and influence of the holder. While the SSI ‘triangle’ puts the holder at the ‘top’ of the triangle, the lack of consideration of the trust and value equation there implies the triangle should be inverted, elevating the issuers and verifiers and putting the holder on the bottom.

- When principles become a means of “locking in” a definition, they become a means of “locking out” those that do not meet the “bar”. A concern voiced was that with this mental model, and the implication of the moral ‘righteousness’ of this position, the real commercial values of SSI are lost and those not in the ‘cult’ turn away.

The panelists are:

Simone Ravaioli, Director Digitary

Rob van Kranenburg, IoT Council and NGI Forward

Nicky Hickman, Chair of the ID4All WG, Sovrin Foundation

Michael Shea, Moderator of Vienna Digital Identity Meetup

To listen to a recording of the event please check out the link: https://vimeo.com/manage/videos/580329620

Time markers:

0:05:00 - Introduction

0:10:07 - Simone Ravaioli

0:17:09 - Nicky Hickman

0:25:44 - Rob van Kranenburg

0:35:00 - General Discussion

1:15:07 - Rob van Kranenburg

1:23:45 - Wrap up


Resources

Slide decks:

Simone Ravaioli : Principles vs Cult (Simone)

Nicky Hickman : Principles vs Cult (Nicky)

Rob van Kranenburg : Slide Deck 1 Slide Deck 2

Blog Posts:

https://www.linkedin.com/pulse/stoic-sovereign-identity-ssi-simone-ravaioli/

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology audiences on the new opportunities that arise with a high assurance digital identity created by the reduction of risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.

Saturday, 28. August 2021

Jon Udell

Working with interdependent Postgres functions and materialized views

In Working with Postgres types I showed an example of a materialized view that depends on a typed set-returning function. Because Postgres knows about that dependency, it won’t allow DROP FUNCTION foo. Instead it requires DROP FUNCTION foo CASCADE. A similar thing happens with materialized views that depend on tables or other materialized views. Let’s … Continue reading Working with interdependent

In Working with Postgres types I showed an example of a materialized view that depends on a typed set-returning function. Because Postgres knows about that dependency, it won’t allow DROP FUNCTION foo. Instead it requires DROP FUNCTION foo CASCADE.

A similar thing happens with materialized views that depend on tables or other materialized views. Let’s build a cascade of views and consider the implications.

create materialized view v1 as (
  select
    1 as number,
    'note_count' as label
);
SELECT 1

select * from v1;

 number |   label
--------+------------
      1 | note_count

Actually, before continuing the cascade, let’s linger here for a moment. This is a table-like object created without using CREATE TABLE and without explicitly specifying types. But Postgres knows the types.

\d v1;
Materialized view "public.v1"
 Column |  Type
--------+---------
 number | integer
 label  | text

The read-only view can become a read-write table like so.

create table t1 as (select * from v1);
SELECT 1

select * from t1;

 number |   label
--------+------------
      1 | note_count

\d t1
Table "public.t1"
 Column |  Type
--------+---------
 number | integer
 label  | text

This ability to derive a table from a materialized view will come in handy later. It’s also just interesting to see how the view’s implicit types become explicit in the table.

OK, let’s continue the cascade.

create materialized view v2 as (
  select
    number + 1 as number,
    label
  from v1
);
SELECT 1

select * from v2;

 number |   label
--------+------------
      2 | note_count

create materialized view v3 as (
  select
    number + 1 as number,
    label
  from v2
);
SELECT 1

select * from v3;

 number |   label
--------+------------
      3 | note_count

Why do this? Arguably you shouldn’t. Laurenz Albe makes that case in Tracking view dependencies in PostgreSQL. Recognizing that it’s sometimes useful, though, he goes on to provide code that can track recursive view dependencies.

I use cascading views advisedly to augment the use of CTEs and functions described in Postgres functional style. Views that refine views can provide a complementary form of the chunking that aids reasoning in an analytics system. But that’s a topic for another episode. In this episode I’ll describe a problem that arose in a case where there’s only a single level of dependency from a table to a set of dependent materialized views, and discuss my solution to that problem.

Here’s the setup. We have an annotation table that’s reloaded nightly. On an internal dashboard we have a chart based on the materialized view annos_at_month_ends_for_one_year which is derived from the annotation table and, as its name suggests, reports annotation counts on a monthly cycle. At the beginning of the nightly load, this happens: DROP TABLE annotation CASCADE. So the derived view gets dropped and needs to be recreated as part of the nightly process. But that’s a lot of unnecessary work for a chart that only changes monthly.

Here are two ways to protect a view from a cascading drop of the table it depends on. Both reside in a SQL script, monthly.sql, that only runs on the first of every month. First, annos_at_month_ends_for_one_year.

drop materialized view annos_at_month_ends_for_one_year;

create materialized view annos_at_month_ends_for_one_year as (
  with last_days as (
    select
      last_days_of_prior_months(
        date(last_month_date() - interval '6 year')
      ) as last_day
  ),
  monthly_counts as (
    select
      to_char(last_day, '_YYYY-MM') as end_of_month,
      anno_count_between(
        date(last_day - interval '1 month'), last_day
      ) as monthly_annos
    from last_days
  )
  select
    end_of_month,
    monthly_annos,
    sum(monthly_annos) over (
      order by end_of_month asc
      rows between unbounded preceding and current row
    ) as cumulative_annos
  from monthly_counts
) with data;

Because this view depends indirectly on the annotation table — by way of the function anno_count_between — Postgres doesn’t see the dependency. So the view isn’t affected by the cascading drop of the annotation table. It persists until, once a month, it gets dropped and recreated.

What if you want Postgres to know about such a dependency, so that the view will participate in a cascading drop? You can do this.

create materialized view annos_at_month_ends_for_one_year as (
  with depends as (
    select * from annotation limit 1
  ),
  last_days as (
    ...
  ),
  monthly_counts as (
    ...
  )
  select * from monthly_counts
);

The depends CTE doesn’t do anything relevant to the query; it just tells Postgres that this view depends on the annotation table.

Here’s another way to protect a view from a cascading drop. This expensive-to-build view depends directly on the annotation table but only needs to be updated monthly. So in this case, cumulative_annotations is a table derived from a temporary materialized view.

create materialized view _cumulative_annotations as (
  with data as (
    select
      to_char(created, 'YYYY-MM') as created,
      count(*) as count
    from annotation
    group by created
  )
  select
    data.created,
    sum(data.count) over (
      order by data.created asc
      rows between unbounded preceding and current row
    )
  from data
  order by data.created
);

drop table cumulative_annotations;

create table cumulative_annotations as (
  select * from _cumulative_annotations
);

drop materialized view _cumulative_annotations;

The table cumulative_annotations is only rebuilt once a month. It depends indirectly on the annotation table but Postgres doesn’t see that, so doesn’t include it in the cascading drop.

Here’s the proof.

-- create a table
create table t1 (number int);

insert into t1 (number) values (1);
INSERT 0 1

select * from t1;
 number
--------
      1

-- derive a view from t1
create materialized view v1 as (selec