Last Update 9:37 AM February 06, 2023 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Monday, 06. February 2023

John Philpin : Lifestream

🎵 Willy and Bonnie still got it apparently.


🔗 StreetPass for Mastodon

…anyone using it?


Cunk was so good that I had to go remind myself of this short piece from Charlie Brooker. It was done years ago, was my first exposure to him, and has to be a timeless classic.


Ben Werdmüller

Big Tech is using layoffs to crush worker power

“Workers in an industry that had long been famously union-agnostic at best had been forming bonds, organizing and developing solidarity. Layoffs of this scale and suddenness can be a blow to that process. […] If there’s one thing that firing people in a large-scale and seemingly random way accomplishes, it’s instilling a sense of precarity, even fear, in those who remain.” #Technology

[Link]


John Philpin : Lifestream

📺 Cunk On Earth

Just so very, very good. I still have one reservation that I referenced after the first two episodes .. (read the thread) but that shouldn’t stop you watching it.

Information on Reelgood

‘All’ My TV Shows


@_Nat Zone

Monday, February 6, from 22:00: sharing impressions of the FIDO Alliance Taipei Seminar with W3C board member Moriyama, plus a live-streamed read-through of NIST's authentication standard SP 800-63B

[Taipei, February 6] The 2023 FIDO Taipei Seminar opened with a speech by Digital Minister Audrey Tang. It was an extremely comprehensive speech that exuded a real sense that she gets it; impressive, as you would expect. As it happens, I am also appearing on today's closing panel.

Panel: Global Perspectives on Passwordless Authentication

Moderator: Christina Hulka, Chief Operating Officer, FIDO Alliance

Panelists:
Koichi Moriyama, Chief Security Architect, Corporate Evangelist, and Head of Security Innovations Management, Corporate Strategy & Planning Department, NTT DOCOMO, INC.
Nat Sakimura, Chairman, OpenID Foundation
Vittorio Bertocci, Principal Architect at Okta
Paul Grassi, Principal Security – Identity, Amazon
Pamela Dingle, Director of Identity Standards, Microsoft

This evening, together with Moriyama-san, a member of the FIDO Alliance Executive Council and a W3C board member, I will look back on today's seminar and then read through SP 800-63B, the part of the NIST guidance covering authentication that is most closely related to FIDO.

I will be streaming from my hotel with less equipment than usual, so I am not sure how well it will go, but please do join us.

Appendix: FIDO Taipei Seminar agenda

8:00-10:00 | REGISTRATION AND DEMOS
10:00-11:00 | Welcome Remarks
10:00-10:05 | Welcome Remarks | Signature sponsor
10:05-10:25 | Welcoming Remarks | Representatives of Government
10:25-10:45 | Guest speech | Audrey TANG, Minister of Digital Affairs
10:45-11:05 | FIDO Welcome + State of Passwordless | Andrew Shikiar, Executive Director and Chief Marketing Officer of FIDO Alliance
11:05-11:30 | BREAK and DEMOS
11:30-12:30 | FIDO Intros
11:30-11:55 | FIDO Technical Overview & the Future of Passwordless | Ki-Eun Shin, Principal Software Development Engineer, SK Telecom
11:55-12:10 | The Value of FIDO Certification | Paul Heim, Director of Certification, FIDO Alliance
12:10-12:45 | Impact of FIDO in Government and Policy | Jeremy Grant, Managing Director, Technology Business Strategy, Venable LLP
12:40-1:45 | LUNCH
1:45-3:00 | FIDO Global Case Studies / Content
1:45-2:00 | Taiwan identification solution — TW Fido | Han-Chieh Sun, Chunghwa Telecom
2:00-2:20 | NTT DOCOMO Goes Passwordless | Masao Kubo, NTT DOCOMO Inc.
2:20-2:40 | Bringing FIDO to on Premise Environment in Taiwan | Zake Huang, Vice President, AuthenTrend
3:10-3:30 | BREAK and DEMOS
3:30-4:30 | FIDO in IoT: FIDO Device Onboard
3:30-3:50 | Introduction to FIDO Device Onboard | David Turner, Senior Director of Standards Development, FIDO Alliance
3:50-4:20 | Panel: The Imperative for a More Secure IoT | Moderator: David Turner, Senior Director of Standards Development, FIDO Alliance

Panelists:
Simon Trac, CEO, VinCSS

Giri Mandyman, Chief Security Architect – IoT and Automotive, Qualcomm

Rolf Lindemann, VP Products, Nok Nok Labs

4:20-5:00 | Closing Sessions
4:20-5:00 | Panel: Global Perspectives on Passwordless Authentication | Moderator: Christina Hulka, Chief Operating Officer, FIDO Alliance

Panelists:
Koichi Moriyama, Chief Security Architect, Corporate Evangelist, and Head of Security Innovations Management, Corporate Strategy & Planning Department, NTT DOCOMO, INC.

Nat Sakimura, Chairman, OpenID Foundation

Vittorio Bertocci, Principal Architect at Okta

Paul Grassi, Principal Security – Identity, Amazon

Pamela Dingle, Director of Identity Standards, Microsoft

5:00-5:05 | Closing Remarks | Karen Chang, Vice President, Egis Technology Inc.
5:05-7:00 | RECEPTION AND DEMOS

John Philpin : Lifestream

Just to say thank you to some of the unseen heroes of the MicroBlog community … more will be added - but it is a start.


February 5th.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.

Sunday, 05. February 2023

Ben Werdmüller

For a while I was deeply into ...

For a while I was deeply into the podcasting universe, but these days I really just want text that I can consume in my own way, at my own pace, using my own imagination. Call it impatience. But I love that podcasts exist and that so many people make and listen.


Simon Willison

Quoting Robin Sloan

When you start a creative project but don’t finish, the experience drags you down. Worst of all is when you never decisively abandon a project, instead allowing it to fade into forgetfulness. The fades add up; they become a gloomy haze that whispers, you’re not the kind of person who DOES things.

When you start and finish, by contrast — and it can be a project of any scope: a 24-hour comic, a one-page short story, truly anything — it is powerful fuel that goes straight back into the tank. When a project is finished, it exits the realm of “this is gonna be great” and becomes instead something you (and perhaps others) can actually evaluate. Even if that evaluation is disastrous, it is also, I will insist, thrilling and productive. A project finished is the pump of a piston, preparing the engine for the next one.

Robin Sloan


Quoting Jack Clark (on MusicLM)

The 21st century is being delayed: We’re stuck with corporations building these incredible artifacts and then staring at them and realizing the questions they encode are too vast and unwieldy to be worth the risk of tackling. The future is here – and it’s locked up in a datacenter, experimented with by small groups of people who are aware of their own power and fear to exercise it. What strange times we are in.

Jack Clark (on MusicLM)


Ben Werdmüller

A Mass. bill would cut prison time for organ donations. An advocate is calling the measure 'unethical and depraved.'

““They’re a marginalized group in society, highly stigmatized and extremely vulnerable,” Cox said in an interview. “And so to incentivize the selling of your body parts in exchange for the most precious commodity in the world — which is time on this earth, and your freedom — was just so appalling.”” #Democracy

[Link]


Sam Bankman-Fried is not a child

“SBF is being extended the benefit of the doubt that many are not so lucky to get. He is affluent, white, male, and accused of white-collar crimes, and so he is granted the charitable characterization of a naive boy. Meanwhile, the perception that Black children, particularly those accused of violent crimes, are adult criminals has earned its own term: adultification bias.” #Crypto

[Link]


John Philpin : Lifestream

🔗 ‘Nicely’ read … 8 days late.

Saturday, 04. February 2023

John Philpin : Lifestream


🎵 Over on Scripting.com @dave mentioned that back in the day he was a fan of The Monkees. Unashamedly - so was I. He linked to this show. It’s long. The narrator (that’s what he sounds like) - has a boring voice - but still an interesting listen.

Never heard of the 🎙️podcast - so had a look at other episodes. Just as long - and I suspect that the voice will be just as boring - BUT - some really interesting deep dives into some good songs and bands. I will be listening to this one - the band ‘Love’.



🎵 Third of three … in the bucket of ‘well I never’ …

Los Lobos covers The Beatles

‘The Dukes of September’ .. who? Scaggs, McDonald, Fagen and needless to say, quite a few other great musicians around them.


🎵 Running through some YouTube music last night - and well - seriously - what is not to love? Stunning. Two very different songs from the same album.

🔗 Steven Wilson - Luminol (Mexico City)

🔗 Steven Wilson - The Raven That Refused To Sing (Germany)


🎵 Jon Lord plays Dave Brubeck

As a kid, shall we say my musical taste did not jibe with my dad’s. Nor with my mum’s for that matter - although she did understand what I was listening to - and why. Probably because of that there are artists that she introduced me to that are still part of my listening cycle - like Dave Brubeck.

She passed a while ago, but would have loved Deep Purple’s Jon Lord covering ‘UnSquare Dance’.


Ben Werdmüller

Don’t write this, write that

“Yet despite all of this, I don’t believe you can ignore the audience. You can’t aim at them, you can’t change to suit an imaginary audience in the hope of getting a real one. But writing is not for writers, it is for readers and if they are not in your mind in some way, I think your writing becomes self-indulgent.” #Culture

[Link]


Follow this site on the fediverse

You can now subscribe to my website on Mastodon / the fediverse by plugging @werd.io@werd.io into your search bar and hitting “follow”.

It was really easy. Here’s how I did it:

I signed up to Bridgy Fed

Made sure my website produces an h-card that describes it well (Known does this out of the box).

Added a single line to the top of my website redirects.

That’s it.

Ryan Barrett, who builds and supports the Bridgy set of services, is brilliant, and this simple tool is another reason why. I really appreciate how easy this was.
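
For anyone curious what happens when you plug @werd.io@werd.io into a Mastodon search bar, the sketch below performs the same WebFinger lookup (RFC 7033) that Mastodon runs for discovery; my understanding is that the redirect described above hands this request to fed.brid.gy, which answers on the site's behalf. The helper name is mine, and this is an illustration rather than anything from Ben's post.

# A small sketch, standard library only, of the WebFinger discovery step.
import json
import urllib.request

def webfinger(account: str) -> dict:
    """Look up a fediverse account such as 'werd.io@werd.io' via WebFinger."""
    domain = account.split("@")[-1]
    url = f"https://{domain}/.well-known/webfinger?resource=acct:{account}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    doc = webfinger("werd.io@werd.io")
    for link in doc.get("links", []):
        print(link.get("rel"), link.get("href"))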


John Philpin : Lifestream


February 4th.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.



Management Positions Could Be Next on Chopping Block as Businesses Course Correct

It’s telling that the ‘permafrost layer’ is ‘next’ not ‘first’.


A ‘glowing’ Palm Springs.


The Salton Sea in the distance. Easy to forget just how big it is.


Ben Werdmüller

Journalistic Lessons for the Algorithmic Age

“Before I go, I wanted to share the lessons I learned building a newsroom that integrated engineers with journalists and sought to use a new model for accountability journalism: the scientific method.” #Media

[Link]


John Philpin : Lifestream

🎥 Zero Effect, 1998 - ★★★★

Just a great, wacky, fun movie.


Cyberforge (Anil John)

Building mudge's wallet

CyberForge Journal - Feb 4, 2023

The digital wallet is the killer app for the completely open source hardware and software “digital mobile safe” program led by mudge (aka Peiter Zatko) at Google’s Advanced Technologies and Projects (ATAP) Group back in 2015. We need it today!

Digital wallets are core to enabling individual agency and control over sensitive data in online and offline interactions. In many ways, the European Union is leading this charge with the massive identity ecosystem orchestration they are currently in the midst of.

At the same time, to ensure that wallets truly remain under individual control we must pay attention to choices being made in its development such that the future does not become a collection of proprietary vertical cylinders of excellence (silos), and that individuals have true choice in how and when they select wallets for their personal use.

However, there is a curious but important missing element in all of these digital wallet conversations; everyone talks about protocols to interact with a wallet and what should or should not be stored in a wallet, but there is little to no discussion about ensuring the openness, consistency and fidelity of the core security, privacy, cryptographic and storage aspects that a wallet actually needs!

Common need across digital wallets

Let us fast forward in time to understand what is needed:

I am an EU Citizen with an EU Digital Identity (EUDI) Wallet who is seeking to live and work in the US. I need to interact with the US Government to provide information about who I am, and at the end of that process receive some manner of immigration or work permit related credential. Since I already have an EUDI Wallet, I would like to provide information to the USG from that wallet and in turn continue to use that wallet to store any immigration credential I get from the USG.

I am a Canadian who is looking to live and work in the EU and when seeking employment there would like to use the digital wallet from my Province that stores my identity information as well as education credentials. My potential employer supports, per EU regulation, the ability to consume these credentials in a digital form.

Following upon both of the above scenarios, I would like to open a bank account using my digital wallet to present the needed information for KYC.

There are so many more variations of this that span the public and private sector, as well as a variety and combinations of credentials.

The commonality across ALL of them is not the credentials, or the protocols, but the underlying foundation and “trustability” of the digital wallet itself by a counter-party. Everything else is pipes and payloads.

And that, to my mind, is what the Google ATAP Project Vault showed could be done in a completely open manner, such that you can actually have independently verifiable confidence in that wallet when it shows up at your digital front door.

Google ATAP Project Vault

I would highly recommend watching the video below, which includes both a great introduction by Regina Dugan (the first female director of DARPA) as well as the overview and demo of the capability by mudge.

So the question is, why build this [...]

It turns out, you already have security elements in your phones and your computers. SIM Cards; they protect the things that are important to the carriers. Trusted platform modules, or TPMs; they protect the things that are important to the OEMs and the content providers. [...]

So, where is the security element that protects the things that are important to you, that you have complete control over? So we made one!

[...] So what do developers have to do to get the phones or laptops or IOT devices ready to use Project Vault? Nothing! The host system thinks this is a regular storage device.

mudge

Link to Project Vault YouTube video

 

As I wrote back in 2015:

Project Vault is a micro-SD form-factor card that, when plugged into a device, makes available to the device a suite of cryptographic tools to enable hashing, signing, bulk encryption, streaming encryption as well as a strong hardware random number generator.

The algorithms in the vault are not exposed to the host system so you can plug it into an untrusted device while being confident that the host system will not be able to corrupt the vault’s crypto services.

It also comes with 4GB of isolated, sealed storage which can be used for enabling capabilities such as an immutable logging system where evidence of malicious activity cannot be covered up by tampering with the logs.

It is operating system agnostic, so will work with a variety of both mobile and desktop operating systems. To the OS it behaves like a proc file system without any kernel drivers, which means that developers don’t have to do anything special in order to utilize it.
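
To make the "host just sees a storage device" point concrete, here is a purely illustrative sketch of what host-side code could look like if a vault-style device exposed its services as files under a mount point. The mount point and endpoint names below (/media/vault, rng, sha256/in, sha256/out) are hypothetical placeholders invented for this example; they are not the actual Project Vault interface.

# Hypothetical sketch: none of these paths are real Project Vault endpoints.
# The point is only that plain file reads and writes are the whole "API",
# so the host needs no kernel drivers or special libraries.
from pathlib import Path

VAULT_ROOT = Path("/media/vault")  # hypothetical mount point for the vault card

def read_random_bytes(n: int) -> bytes:
    """Read n bytes from a (hypothetical) hardware RNG endpoint on the vault."""
    with open(VAULT_ROOT / "rng", "rb") as f:
        return f.read(n)

def hash_with_vault(payload: bytes) -> bytes:
    """Ask the (hypothetical) vault to hash a payload: write a request file, read the result."""
    (VAULT_ROOT / "sha256" / "in").write_bytes(payload)
    return (VAULT_ROOT / "sha256" / "out").read_bytes()

if __name__ == "__main__":
    print(read_random_bytes(32).hex())
    print(hash_with_vault(b"hello vault").hex())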

The one change I would hope for in any current day implementation “… is a Security Key form-factor with USB-C …” because USB Type-C is mandated to become the EU’s standard by the end of 2024, and where the EU is leading with that, the world will follow.

Opportunity for personal, truly open digital wallets

There is a clear opportunity here to implement an open source hardware(!) and software foundation (the vault) to enable secure and interoperable wallets that can be used for a variety of purposes, that truly remains under the control of an individual.

However, for anyone going down this path, what will become critical to their success is what they choose not to do rather than what they actually do.

In order to provide a truly open foundation for a digital wallet that is under the personal control of an individual, it will require them to:

keep the focus on enhancing and making production ready the foundational vault capabilities to support hardened, accelerated cryptographic operations and secure storage that are needed for any type of digital wallet

build well documented APIs to interact with the core vault services that are open, royalty free and free to implement for anyone

think through how best to put into place an independent assessment/verification mechanism with a high degree of transparency and credibility that allows counter-parties to assess the “goodness” of an implementation

I anticipate that the hardest challenges here will not be technical, but instead will be to:

resist the pressure from handset and platform vendors who will seek to influence the work in a manner such that the capabilities being externalized into the vault continue to be gatekeeper’d by them

resist the use of platform or vendor specific hardware security elements instead of the open source externalized hardware security elements demonstrated by Project Vault

resist getting dragged into the political infighting around directly implementing the variety of credential types or credential issuance and presentation protocols directly in the vault; it should be the responsibility of those specific communities to define and standardize how their protocols can utilize the open platform and vendor neutral vault APIs.

Building on an open source foundation

I have no visibility into or awareness of what happened with Project Vault after mudge moved on from Google ATAP. What I do believe is that he, particularly given his prior stint at DARPA, looked around the corner for what was coming, drove the work that was needed to be ready for that future, and made sure it was as widely available as possible by making it entirely open source!

That is a gift that should not be wasted!

Recently: Commons are not tragic

Elinor Ostrom, the first woman to be awarded the Nobel Prize in Economic Sciences, debunks the “Tragedy of the Commons” which posits that “Humans, when left to their own devices, compete with one another for resources until the resources run out.”

Her research revealed that “Far from being profoundly destructive, we humans have deep capacities for sharing resources with generosity and foresight.”

The features of successful systems, Ostrom and her colleagues found, include:

clear boundaries (the ‘community’ doing the managing must be well-defined);

reliable monitoring of the shared resource;

a reasonable balance of costs and benefits for participants;

a predictable process for the fast and fair resolution of conflicts;

an escalating series of punishments for cheaters;

and good relationships between the community and other layers of authority, from household heads to international institutions.

This has implications for and provides hope to those who are building and investing in the “Internet Commons”.

cyberLinks: random and relevant

Project Vault available under the Apache 2.0 License - “Verilog and VHDL and all the code for the security based Real Time Operating System (RTOS), the SD Controller firmware to allow communications with the host systems, the NAND flash translation layer so you can manipulate the sealed storage, the hardened and accelerated hardware crypto cores, and the interface description language so you can talk to it. Even the processor is open source - the OpenRISC1200”

The Wallet Wars Are Not About Money, They Are About Identity - “Around the world the transition from physical wallets to digital wallets is well underway. An Accenture survey of 16,000 customers in 13 countries found that 56% of them were using digital wallets more than five times every month (compared with only 48% using cards that often) and they interpret these results to mean that heading towards a hundred billion dollars of annual payments revenues for banks are “at risk”.”

OpenWallet Foundation - “The mission of the OWF is to develop an open source engine to enable secure and interoperable multi-purpose wallets anyone can use to build solutions. The OWF aims to set best practices for digital wallet technology through collaboration on open source code for use as a starting point for anyone who strives to build interoperable, secure and privacy-protecting wallets.”

World Wide Web Consortium (W3C) is now a public-interest nonprofit organization - “… perhaps the best insurance policy came in the form of what has been described by The Boston Globe as Berners-Lee’s “greatest act of all” in being something he “didn’t do:” require fees for patents. Instead, a royalty-free patent policy was created so that people who use patents covering technologies in their standards don’t have to pay royalties or fees. Other standards development organizations have since copied this over the years since it was developed in the early 2000s.”

This Journal entry, Building mudge's wallet, first appeared on CyberForge.

Reply to this Journal entry via email


Werdmüller on Medium

Resetting professional goals

Giving myself permission to dream big again.

Continue reading on Medium »


Ben Werdmüller

Resetting professional goals

I’ve been spending some time looking carefully at my professional goals.

A few years ago, I open sourced the mission / vision / tactical worksheet I’d been using, which was inspired by high-level organizational strategy. First, it invites you to consider your “mission”:

This is your north star. For example, a possible mission statement is to work on technology that makes the world more equal. Another example of a mission statement is to work at startups building world-class products that change the world.

Then, your “vision”. For a company, the vision is the world you want to create through your mission and activities. For a person, that’s not far off:

This is where you want to see yourself in 5 or 10 years. One long-term goal is to be the founder of a generational tech company. Other long-term goals are to be a senior individual contributor engineer, or an engineering manager, or a product manager at a large tech giant.

And then the near-term steps:

What measurable, actionable steps bring you closer to your goal?

While I’ve found this to be a useful framework, it undeniably suffers from a lack of focus. For example, the definition of a “measurable, actionable step” could vary a great deal from person to person.

Recently, through professional development at The 19th, I was introduced to the Management Center’s SMARTIE goals:

SMARTIE stands for Strategic, Measurable, Ambitious, Realistic, Time-bound, Inclusive, and Equitable. By incorporating an equity and inclusion component to your SMART goals, you can make sure your organization’s commitment to racial equity and inclusion is anchored by tangible and actionable steps.

The traditional definition of SMART goals is Specific, Measurable, Achievable, Relevant, and Time-Bound. It’s a useful framework in itself, in the sense that at least it provides some structure and concreteness, but in its definitions it encourages you to diminish mission and values in your work. A goal that is Specific and Achievable is highly likely to just be iterative on what you’re already doing.

In contrast, SMARTIE encourages you to dream. By replacing Specific with Strategic, we’re encouraged to think longer-term. By replacing Achievable with Ambitious, we’re given permission to stretch for what we really want. Swapping Relevant for Realistic reminds us to keep our feet on the ground, but in pairing this with Ambitious we’re being asked to find a productive middle ground between our dreams and reality. And then reminding us that our goals must be Inclusive and Equitable ensures that we consider our impact on others, and on our communities and ecosystems.

I like it a lot.

Giving me permission to make my goals values-oriented is, in itself, a big deal. I’ve often had to smuggle my values into goals and trade them off with the goals of the organization. Here, I’m asked to put them front and center.

But the detail-oriented approach and demand for accountability has also made me reconsider my personal mission and vision.

For years now, my mission has read: To work on projects with the potential to make the world more equal and informed. In itself, it’s been fairly useful: I’ve been able to look at opportunities and ask, “well, does this have the potential to do those things?” And it’s been easy to say “no” to opportunities that don’t.

But it’s possible for an opportunity to have the potential to make the world more equal and informed but still not be a fit for what I want to do. It doesn’t communicate what I actually do, to myself or anyone else. For example, The 19th, my current employer, is clearly an organization with the potential to make the world more equal and informed - it’s a newsroom (check!) that particularly serves women, women of color and the LGBTQ+ community (check!) with the information, resources and community they need to be equal participants in our democracy (check!). So as an organization it’s aligned with my mission. But what does my mission say I should do there? Should I be an illustrator? I’d love that, but I don’t think they’d have me. A journalist? Again, I wish. No, based on my experience as an engineer, founder, and investor, I’m probably better off serving them on the technical side - and even then, by building, supporting, and advising on a particular kind of software.

So it’s more accurate and useful to say that my mission is to support organizations that have the potential to make the world more equal, open, and informed by building and supporting open web software and strategy.

It still needs workshopping. But we’re a lot closer: you still know I want to work to help make the world more equal and informed, but now you can more definitively say where I can be helpful and want to be working. It also emphasizes openness: there are plenty of allegedly equal worlds that are authoritarian or limited, and that’s not what I want to be a part of. And in specifying this greater detail, I can make more detailed choices.

So, onto vision. Perhaps the most famous vision statement in computing is Microsoft’s original wish to create a world with “a microcomputer on every desk and in every home running Microsoft software”, which balanced an intent to markedly change the marketplace (and arguably the world) with making it obvious what Microsoft’s role in that transition would be.

If a person or organization’s mission is fairly hard and fast, their vision is likely to change more often. Microsoft’s vision statement is no longer the above (in no small part because they achieved it). So what is mine? The world I want to bring into existence, based on my ambitions and values as they stand right now?

Here’s my draft attempt: To build and lead a diverse and inclusive generational organization that produces open source software, advice, and advocacy in service of making the world more equal, open, and informed.

Breaking it down: a generational organization inherently says not a startup that’s designed to exit quickly; I want to build something that will last a long time. It also leaves the form of the organization open: it could be a private company, a non-profit, a co-operative, and so on. Diverse and inclusive specifies that it should be an organization with diverse leadership and inclusive practices. Produces open source software is self-explanatory, but advice and advocacy is an important clause to me: it says I’m not just building software but also helping people think about their own policies, strategies, and use. I don’t just want to be a personal expert in this arena; I want to build an organization that shares that expertise in service of my mission.

That doesn’t mean I want to do that right this second, or from scratch. I’m very happy at The 19th - and in many ways I am building this organization as a member of its Senior Leadership Team. (It’s a startup, so I think it’s also fair to say that everyone on the team is also building it.) But I don’t think I’d be so happy if I wasn’t learning so much about building a diverse organization, and about hitting that mission. The people I get to work with and the journalism and processes we produce are so good that I’m leveling up more and more the longer I stick around. And understanding that this is important to me helps me figure out what my more tactical goals need to be.

Those tactical goals are where that SMARTIE framework comes into play. Knowing what my underlying mission is, and what I want to have achieved in 5 or 10 years, what are my concrete next steps over the next six to twelve months?

The answer is a mix of the organization’s goals - I want to support it, remember? - and my own developmental tasks. I need to relearn how to center those values in my work, and communicate those values more clearly; I need to build more focused, structured reporting into my team’s policies and procedures; I need to hold us accountable to values as well as productivity. And I want, ambitiously, to lead the industry in doing all those things. There are more, of course, and there’s a lot of prioritization that needs to happen. But by reconsidering my personal mission and vision, and applying a different framework to the individual tactical goals I set out for myself and will be held accountable to, I’m much closer than I was.

 

Photo by Randy Tarampi on Unsplash


Simon Willison

Quoting Andrej Karpathy

The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase vocab size from 50257 to 50304 (nearest multiple of 64). This calculates added useless dimensions but goes down a different kernel path with much higher occupancy. Careful with your Powers of 2.

Andrej Karpathy
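
The arithmetic behind that tip: pad GPT-2's 50257-token vocabulary up to the next multiple of 64, which is 50304, so the embedding and output-projection matrices land on a friendlier shape for the CUDA kernels. A minimal sketch of that rounding (my own illustration, not nanoGPT's code) follows.

def pad_vocab_size(vocab_size: int, multiple: int = 64) -> int:
    """Round a vocabulary size up to the nearest multiple of `multiple` (64 here)."""
    return ((vocab_size + multiple - 1) // multiple) * multiple

# GPT-2's 50257 tokens padded to 50304; the 47 extra rows are never emitted by the
# tokenizer, they only pad the embedding / output matrices to a nicer size.
assert pad_vocab_size(50257) == 50304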

Friday, 03. February 2023

Ben Werdmüller

'I wiped my eyes and wrote the facts'

“As a reporter, I felt tasked with the duty of accurately representing this funeral and the vile circumstances that led to it. As a Black reporter, I felt a duty to bear witness to his unjust death and the burden of grief that came with it.” This edition of The 19th’s weekly newsletter is breathtakingly written. Yet another reason I’m proud to work there. #Media

[Link]


John Philpin : Lifestream


February 3rd.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


Ben Werdmüller

In ‘The Last of Us,’ a survivor of the AIDS crisis saw his partner's death honored

““As I’m watching it, I’m like, ‘Oh my god,’ [‘The Last of Us’ co-showrunner] Craig Mazin wrote this piece that just made me feel like someone saw me and Robert,” he said. “Somehow Mazin wrote this piece of art that reflected not just the life that Robert and I had, a falling in love in this dystopian time, but the lives of so many of my friends who also found loves that they loved and lost.”” #Culture

[Link]


John Philpin : Lifestream

📺 I watched the first ten minutes of 🔗 Cunk on Earth on Netflix. Added to the top of the watch list.


Ben Werdmüller

Squid skin inspires novel “liquid windows” for greater energy savings

“The idea of a building that can learn, that can adjust this dynamic array on its own to optimize for seasonal and daily changes in solar conditions, is very exciting for us.” No kidding! #Science

[Link]


AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse

“In one example, a generated voice that sounds like actor Emma Watson reads a section of Mein Kampf. In another, a voice very similar to Ben Shapiro makes racist remarks about Alexandria Ocasio-Cortez. In a third, someone saying “trans rights are human rights” is strangled.” #AI

[Link]


John Philpin : Lifestream

🔗 Ilhan Omar’s removal from panel was ‘stupidest vote’, says Republican

Ken Buck, a committee member and Republican congressman from Colorado, was heard calling the action the “stupidest vote in the world.”

… maybe, but he still voted ’for’.



Advice from a partner at Battery Ventures …

Ray’s net worth is ‘a lot’, making (at just one gig)

“$4,564,860 as Chairman of the Board at Guidewire Software Inc.”

His point? We need to stop paying people so much (paywalled article).

I wonder how much Battery pay him?



“The Republicans have accepted violence among Congress members.”

💬 Heather Cox Richardson



🎵 Just added a few pages about music to my site that emerged from a conversation a few weeks ago with @hutaffe … enjoy.

I hope it also contributes to a better understanding of ‘why only three’ @SimonWoods 🥺


Jon Udell

Mapping the wider Fediverse

I began this journey convinced that Steampipe could help the Fediverse evolve, but not sure exactly how. My first thought was to use Steampipe’s API-wrangling superpower to study patterns of communication (and conflict) across the Fediverse. But as one of many Twitter escapees last November, I soon realized that the network I was joining reflected a culture that had been humming along nicely for six years and didn’t particularly want to be the object of sociological study.

As I argued in Autonomy, packet size, friction, fanout, and velocity, Mastodon bakes in certain kinds of friction for reasons. You’ve likely heard about a default unfriendliness to search, which is both a technical setting and a cultural choice that privileges the experience of current flow over the mining of past flow. Even more fundamentally, the ID of a toot not only differs from server to server but also obfuscates the toot’s date, another technical/cultural choice that means you can’t randomly access history by date. None of these frictions is insurmountable. They will be overcome for purposes good and bad. I hope and expect that communities will be able to choose their desired amounts and kinds of friction while still interoperating with others. But for my project it seemed that trying to survey the wider Fediverse wasn’t the right place to start.

So instead I began to explore a different way to read my home timeline. The dashboards that I’ve built and described in this series have become, for me at least, an effective way to scan recent Mastodon flow, then click into the stock client to post, reply, or boost. After overcoming a few obstacles, things are starting to feel like the Bloomberg terminal for Mastodon that I envision.

One of those obstacles was the awkward copy/paste/search of foreign toot URLs that was required in order to interact with them. That’s now overcome by Instance-qualified Mastodon URLs. Another obstacle was the difficulty of curating and effectively reading topical lists of people. The strategies described in Lists and people on Mastodon and Working with Mastodon lists have improved matters nicely. And relationship graphs turned out to be a more useful alternate view of the current flow than I had expected.

I think I’ve proven that a set of Steampipe dashboards, layered on a plugin that maps the Mastodon API to tables that the dashboards query, can improve the ability to absorb and react to Mastodon flow. An unproven corollary: Steampipe’s dashboards-as-code system is only one of potentially many clients of the Mastodon plugin. Any dashboarding system or web app or native app could tap into the same query capability to deliver still another way to experience the flow. But that’s a future episode.

Meanwhile, with a decent reading experience in place, this seemed like a good time to circle back to the question of surveying the wider Fediverse. To begin enabling that I added a couple of new tables to the plugin: mastodon_peer and mastodon_domain_block.

Here’s a query that uses mastodon_peer.

with data as (
  select
    'https://' || server as server
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 100
),
servers as (
  select
    server,
    count(*) as occurrences
  from
    data
  group by
    server
)
select
  s.server,
  s.occurrences,
  count(p.peer) as peers
from
  servers s
join
  mastodon_peer p
on
  s.server = p.server
group by
  s.server,
  s.occurrences
order by
  peers desc

In English: gather the most recent 100 toots in my home timeline, count the occurrences of each origin server, then ask each origin server how many other servers it talks to. Unsurprisingly my home server, mastodon.social, occurs most often. And because it’s the marquee Mastodon server it has the most peers.

+----------------------------------+-------------+-------+
| server                           | occurrences | peers |
+----------------------------------+-------------+-------+
| https://mastodon.social          | 11          | 49732 |
| https://fosstodon.org            | 1           | 33973 |
| https://octodon.social           | 1           | 29983 |
| https://infosec.exchange         | 2           | 26833 |
| https://indieweb.social          | 9           | 26279 |
| https://hachyderm.io             | 3           | 19911 |
| https://social.treehouse.systems | 3           | 18110 |
| https://journa.host              | 1           | 18021 |
| https://nerdculture.de           | 9           | 17984 |
| https://werd.social              | 2           | 13792 |
| https://dan.mastohon.com         | 2           | 13351 |
| https://masto.nyc                | 1           | 10917 |
| https://mastodon.archive.org     | 1           | 9582  |
| https://social.fossdle.org       | 1           | 8343  |
| https://devdilettante.com        | 12          | 6898  |
+----------------------------------+-------------+-------+

Here’s a query that uses mastodon_domain_block.

with data as (
  select
    'https://' || server as server
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 100
),
servers as (
  select
    server,
    count(*) as occurrences
  from
    data
  group by
    server
)
select
  s.server,
  s.occurrences,
  count(d.domain) as "blocked domains"
from
  servers s
join
  mastodon_domain_block d
on
  s.server = d.server
group by
  s.server,
  s.occurrences
order by
  "blocked domains" desc

This one says: Again gather the origin servers in my recent home timeline, but this time ask each one how many other servers it blocks. Here we see that octodon.social, which happened to show up in my timeline when I ran the query, blocks a lot more servers than mastodon.social does.

+--------------------------+-------------+-----------------+
| server                   | occurrences | blocked domains |
+--------------------------+-------------+-----------------+
| https://octodon.social   | 1           | 510             |
| https://mastodon.social  | 8           | 181             |
| https://hachyderm.io     | 4           | 125             |
| https://infosec.exchange | 4           | 66              |
| https://nerdculture.de   | 1           | 36              |
| https://indieweb.social  | 4           | 23              |
+--------------------------+-------------+-----------------+

One could, and maybe at some point I will, comprehensively acquire and store this data. But meanwhile, how might it enhance the experience of reading recent Mastodon flow? Here’s what I’ve got so far.

We’ve already seen the first table which answers the question: “How many servers does each of the origin servers in my timeline block?” The second table answers a different question: “Which servers are most often blocked by the origin servers in my timeline?”

The blocking server list column shows an interesting mix of consensus and variation, and I think that will be something to explore comprehensively across the Fediverse. But for now I like how this view contextualizes what’s in my home timeline. Like the relationship graphs, it’s highly dynamic because my immediate network neighborhood is changing all the time. Whatever the current set of servers happens to be, though, I now have some clues about how connected each of those servers is and how aggressively each one is blocking others. This feels like a good first step toward mapping the wider Fediverse.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Ben Werdmüller

The ‘Enshittification’ of TikTok

“This is enshittification: Surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit. From mobile app stores to Steam, from Facebook to Twitter, this is the enshittification lifecycle.” #Technology

[Link]


Simon Willison

Quoting Ludwig Stumpp

Just used prompt injection to read out the secret OpenAI API key of a very well known GPT-3 application.

In essence, whenever parts of the returned response from GPT-3 is executed directly, e.g. using eval() in Python, malicious user can basically execute arbitrary code

Ludwig Stumpp
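
To make the failure mode concrete: if any part of a model's reply reaches eval(), whoever can steer the prompt can run code in your process. The sketch below is illustrative only (it is not the application Stumpp tested) and shows one safer habit, accepting nothing but plain literals via ast.literal_eval.

import ast

def unsafe_handler(model_output: str):
    # Anti-pattern: executing the model's reply. A prompt-injected reply such as
    # "__import__('os').environ" runs with the application's privileges.
    return eval(model_output)

def safer_handler(model_output: str):
    # Safer: only literal values (numbers, strings, lists, dicts) are accepted;
    # calls, imports and attribute access raise an exception instead of running.
    return ast.literal_eval(model_output)

if __name__ == "__main__":
    print(safer_handler("[1, 2, 3]"))  # a plain literal parses fine
    try:
        safer_handler("__import__('os').getcwd()")
    except (ValueError, SyntaxError) as exc:
        print("rejected injected code:", exc)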


Ben Werdmüller

I’m Now a Full-Time Professional Open Source Maintainer

“Long term, I want this model to grow beyond me and become a known professional path. This experiment is both easier and harder for me than it will be for those after me: easier because I have an extensive personal network and the financial means to safely take risks; harder because it’s uncharted territory for both me and the clients and because there’s a lack of legal, administrative, and marketing tools. I hope that as things progress the barriers will lower, making the model accessible to more and more people.” Inspiring! #Technology

[Link]


John Philpin : Lifestream

I don’t drink Pepsi and I certainly don’t drink Anheuser-Busch .. so I am delighted to report that I will not be responsible if those companies report successful returns on their planned multimillion dollar Twitter advertising over SuperBowl.


Simon Willison

I’m Now a Full-Time Professional Open Source Maintainer


Filippo Valsorda, previously a member of the Go team at Google, is now independent and making a full-time living as a maintainer of various open source projects relating to Go. He's managing to pull in an amount "equivalent to my Google total compensation package", which is a huge achievement: the greatest cost involved in independent open source is usually the opportunity cost of turning down a big tech salary. He's doing this through a high touch retainer model, where six client companies pay him to keep working on his projects and also provide them with varying amounts of expert consulting.


Ben Werdmüller

The Celsius examiner's report: a picture of fraud and incompetence

“For some reason, Pillay stops short of outright stating that “Celsius was a Ponzi scheme”, but the facts speak for themselves.” #Crypto

[Link]


Wip Abramson

PhD: Completed It!

2022 marked the end of a big chapter of my life. The culmination of my PhD journey, easily the most challenging project I have ever undertaken. Alongside graduating from my PhD as a Doctor of Philosophy, this year I formalised my relationship as a Consultant Requirements Engineer at Legendary Requirements, helped to produce RWOT the Hague and participated in a research sprint focused on Digital Identity in Times of Crisis coordinated by the Berkman Klein Center where I wrote a soon to be published short hopepunk story titled Experiences in Memory Space.

This post is a brief reflection on and celebration of this last year and the PhD adventure that brought me to this moment. It will likely be the last post I submit to this site, as I experiment with new self-presentations of myself online.

First, I have to say what an honour, joy and privilege it was to participate as one of the first PhD students at the Blockpass Identity Lab at Edinburgh Napier University. I didn’t fully comprehend what I was getting myself into back in 2018, but it exceeded my expectations.

That I had the freedom and support to explore the boundaries and the depths of human knowledge is something I will be forever grateful for. To perceive the interconnections and evolution of human scientific thought stretching back into our past, back to the seeds of ideas and the subsequent foundations upon which our modern information society has been built. It has been both an awe-inspiring and humbling experience. I am grateful to all those who influenced, shaped and supported me along the way, especially my PhD supervisor and mentor Professor Bill Buchanan.

It is telling that the last post I made to this site was almost two years ago in March 2021. Since then, almost all of my writing energy was turned towards producing my thesis. In the process I wrote over 200,000 words on 750words.com, a lifesaving practice where I worked through my goals, fears, doubts and initial drafts. Then there were the countless words produced iterating through drafts, revisions and rewrites as I attempted to corral my thoughts and synthesise my learnings into a coherent, consistent body of text. The end result is an 80,000-word tome titled Identity and Identification in an Information Society: Augmenting Formal Systems of Identification with Technological Artefacts that I imagine few will ever read. Regardless, it is a piece of work I will always be proud of.

My thesis presents my unique perspective on identification systems, shaped by my academic research across multiple disciplinary boundaries and my practical experience as an implementer and participant in the rapidly evolving decentralized identity space. I believe there are some nuggets within its pages. My personal highlights include:

The chapter titled Security without Identification, which traces the history and evolution of scientific thought on cryptography through the lens of cryptographic credentials, first conceptualised in David Chaum’s seminal paper Security without Identification.

My determination to include a broader, sociological understanding of identification systems, despite advice that it should be left out of scope. See the chapters Identity, Trust and Privacy in an Information Society and Identification Systems.

The workshop I ran at the Royal College of Physicians of Edinburgh that introduced verifiable credentials and explored issues and perceptions around their use for clinical passporting solutions.

The subtle influences of Ludwik Fleck’s work The Genesis and Development of a Scientific Fact, which I attempted to weave throughout my thesis.

The diverse, high-quality references throughout my thesis, which are testament to the depth and breadth of human knowledge produced on identity and identification throughout the last 100 years.

The content in my appendix, especially my wacky abstract diagrams and a paper I wrote but never published that discusses identity and interaction in terms of complex adaptive systems.

Of course there are some aspects I am less pleased with, but I imagine that is always the case. I completed it, passed my viva and graduated, which is all that really matters at the end of the day.

I have ambitions to synthesise and adapt parts of my thesis into more digestible content which I will make available on my new professional self-presentation - https://drwip.com. We will see how that goes. In the meantime, if you would like to give it a read, I am happy to send it across.

It is also important to point out that the thesis is not the journey, just like the map is not the territory. There were so many other explorations, side quests, learning opportunities and experiences that shaped my perspective throughout my PhD and are not well reflected in the thesis. These included growing into a researcher and developing my academic voice, exploring the beauty of Edinburgh and Scotland, learning Rust, participating in the interdisciplinary TAPESTRY research sandpit, over two years collaborating with the OpenMined community, the Odyssey hackathon where I contributed to the initial POC for the Commons Stack, RWOT Barcelona, MyData, my attempt at the Cabaret of Dangerous Ideas as part of the unfortunately virtual Edinburgh Fringe, my evolving thoughts about memory and meaning, and countless other experiences. At least half of my studies took place against the backdrop of the pandemic, which added its own layer of challenges and changes to adapt to.

My PhD journey was daunting, exhilarating, stressful, inspiring, exhausting, humbling and joyful at different moments along the way. A rollercoaster of emotions, motivation, exhaustion and determination. I am glad it is over, but I will be forever grateful for the opportunities and experiences that it opened up for me.

Looking to the Future

The future is unknown and unknowable. Despite this, I have repeatedly seen the value of setting expectations of it: imagining, outlining and articulating possible futures that I would like to see come to pass. In fact I am playing around with a whole new space - https://futurewip.com - to playfully explore and experiment with just this in the context of the climate emergency, even if it is more of a placeholder at the moment. I enjoy learning a new voice and style in which to write. Crafting new identities, you might say. If I have learnt anything over my PhD, it is that the identities we hold, take or have applied to us shape our experienced present and in turn influence our future. I believe there are few things more powerful than intentionally constructing loosely held identities for yourself as a means to explore who you are and who you might want to be in the future.

Another thing I learnt during my thesis is that I love the freedom to self-direct my attention and time whilst exploring things that I care deeply about. Identity and identification in an information society is certainly one of those things, as I have come to view it as critical to the future of humanity and the forms and structures it might take. Whatever I end up doing in the future, I want to make sure I have time to spend as I choose, because I am confident in my ability to spend it well and know this helps me thrive. At the moment that looks like pursuing a research and development agenda focused on memory and meaning, as previously discussed on this blog. Indeed, the questions I posed on my home page two years ago appear rather prescient and increasingly relevant in the context of LLMs such as ChatGPT. The speculative fiction story I produced as part of the BKC research sprint further explores these ideas through a different creative lens. I have some other interesting ideas and experiments around this concept that I hope to share over the coming months.

Professionally, I see my future intertwined with Legendary Requirements. Through Legendary I have been exposed to a diverse and interesting set of clients, projects and technologies with challenging problems that are fun to work through. It is clear that we add value to our clients, that I add value to Legendary and that Legendary values my contributions. My work is flexible, engaging and self-directed. The people I work with are intelligent, kind and thoughtful. I feel fortunate to have had such a smooth transition from my PhD into a job that respects my time, supports my growth and encourages my independence.

The decentralized identity space is still young and emerging. Legendary Requirements is well positioned to continue to work at the forefront of this space, helping clients discover, document and maintain human requirements for real world systems and use cases.

I am excited to see what the future holds.

Thursday, 02. February 2023

John Philpin : Lifestream

“I need your zip code to run this credit card.” I give

“I need your zip code to run this credit card.” I give him the USA zip code associated with the card. “But you have a British passport?” I do. “So why is your driving license from New Zealand?” They agreed I could drive.

“I need your zip code to run this credit card.”

I give him the USA zip code associated with the card.

“But you have a British passport?”

I do.

“So why is your driving license from New Zealand?”

They agreed I could drive.


North Island earthquake: Magnitude 4.8 quake strikes near Te

North Island earthquake: Magnitude 4.8 quake strikes near Te Aroha, jolts Auckland, Bay of Plenty, Waikato - NZ Herald I hope everyone is ok? 4.8 is not a big one … but after the rains there is surely a lot of loose ground around.

North Island earthquake: Magnitude 4.8 quake strikes near Te Aroha, jolts Auckland, Bay of Plenty, Waikato - NZ Herald

I hope everyone is ok? 4.8 is not a big one … but after the rains there is surely a lot of loose ground around.


Simon Willison

GROUNDHOG-DAY.com

GROUNDHOG-DAY.com "The leading Groundhog Day data source". I love this so much: it's a collection of predictions from all 59 groundhogs active in towns scattered across North America (I had no idea there were that many). The data is available via a JSON API too. Via Show HN: Groundhog-day.com – structured groundhog data

GROUNDHOG-DAY.com

"The leading Groundhog Day data source". I love this so much: it's a collection of predictions from all 59 groundhogs active in towns scattered across North America (I had no idea there were that many). The data is available via a JSON API too.

Via Show HN: Groundhog-day.com – structured groundhog data
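For anyone curious what pulling that data looks like, here is a minimal Python sketch. The endpoint path and response keys are my assumptions rather than something taken from the site, so check the API documentation for the real shapes:

import json
import urllib.request

# Hypothetical endpoint - confirm against groundhog-day.com's API docs
API_URL = "https://groundhog-day.com/api/v1/groundhogs"

with urllib.request.urlopen(API_URL) as response:
    data = json.load(response)

# Assuming the payload lists groundhogs with name/region fields
for hog in data.get("groundhogs", []):
    print(hog.get("name"), "-", hog.get("region"))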


Carving the Scheduler Out of Our Orchestrator

Carving the Scheduler Out of Our Orchestrator Thomas Ptacek describes Fly's new custom-built alternative to Nomad and Kubernetes in detail, including why they eventually needed to build something custom to best serve their platform. In doing so he provides the best explanation I've ever seen of what an orchestration system actually does. Via @tqbf

Carving the Scheduler Out of Our Orchestrator

Thomas Ptacek describes Fly's new custom-built alternative to Nomad and Kubernetes in detail, including why they eventually needed to build something custom to best serve their platform. In doing so he provides the best explanation I've ever seen of what an orchestration system actually does.

Via @tqbf


Ben Werdmüller

Child care crisis is causing parents to leave their jobs or get fired, study shows

"Of the parents surveyed, 26 percent quit their jobs because of child care problems and 23 percent were fired. The number of parents who were fired or had their pay reduced is three times as high as it was just five years ago. The rate of parents quitting has doubled since 2018.“ #Society [Link]

"Of the parents surveyed, 26 percent quit their jobs because of child care problems and 23 percent were fired. The number of parents who were fired or had their pay reduced is three times as high as it was just five years ago. The rate of parents quitting has doubled since 2018.“ #Society

[Link]


Where is abortion legal? Almost half of all Americans aren’t sure, new poll shows

“Half of women are unsure if medication abortion is legal in their state, and a third don’t know if they are allowed to access emergency contraceptive pills, new polling from the Kaiser Family Foundation (KFF) found.” #Society [Link]

“Half of women are unsure if medication abortion is legal in their state, and a third don’t know if they are allowed to access emergency contraceptive pills, new polling from the Kaiser Family Foundation (KFF) found.” #Society

[Link]


ShotSpotter Employees Not Only Have The Power To Alter Gunshot Reports, But Do It Nearly 10% Of The Time

“ShotSpotter’s human techs don’t just alter reports to distinguish things like a car’s backfiring from a suspected criminal’s gun firing. They also alter determinations and gunshot locations to better serve the needs of law enforcement agencies that interact with them.” #Technology [Link]

“ShotSpotter’s human techs don’t just alter reports to distinguish things like a car’s backfiring from a suspected criminal’s gun firing. They also alter determinations and gunshot locations to better serve the needs of law enforcement agencies that interact with them.” #Technology

[Link]


Media's Money Problem

“Low pay and grueling hours mean barriers to entry that skew journalism toward a certain demographic — white and male. It’s impossible to do your best work shining light on the activities of elected officials when you make $12 an hour and those same elected officials are organizing social media campaigns to put you out of work altogether. And it’s impossible to cover the neede

“Low pay and grueling hours mean barriers to entry that skew journalism toward a certain demographic — white and male. It’s impossible to do your best work shining light on the activities of elected officials when you make $12 an hour and those same elected officials are organizing social media campaigns to put you out of work altogether. And it’s impossible to cover the needed range and depth of stories when you are overworked and underpaid and understaffed.” #Media

[Link]

Wednesday, 01. February 2023

Simon Willison

Python’s “Disappointing” Superpowers

Python’s “Disappointing” Superpowers Luke Plant provides a fascinating detailed list of Python libraries that use dynamic meta-programming tricks in interesting ways - including SQLAlchemy, Django, Werkzeug, pytest and more.

Python’s “Disappointing” Superpowers

Luke Plant provides a fascinating detailed list of Python libraries that use dynamic meta-programming tricks in interesting ways - including SQLAlchemy, Django, Werkzeug, pytest and more.


pyfakefs usage

pyfakefs usage New to me pytest fixture library that provides a really easy way to mock Python's filesystem functions - open(), os.path.listdir() and so on - so a test can run against a fake set of files. This looks incredibly useful. Via Luke Plant

pyfakefs usage

New to me pytest fixture library that provides a really easy way to mock Python's filesystem functions - open(), os.path.listdir() and so on - so a test can run against a fake set of files. This looks incredibly useful.

Via Luke Plant
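A quick sketch of what that looks like in practice, assuming pyfakefs is installed so that pytest picks up its fs fixture:

import os

def test_reads_config(fs):  # `fs` is the fake-filesystem fixture provided by pyfakefs
    # This file exists only in the in-memory fake filesystem
    fs.create_file("/etc/myapp.conf", contents="debug = true\n")
    assert os.path.exists("/etc/myapp.conf")
    with open("/etc/myapp.conf") as f:
        assert "debug" in f.read()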


Ben Werdmüller

Reading, watching, playing, using: January 2023

This is my monthly roundup of the books, articles, and streaming media I found interesting. Here's my list for January, 2023. Apps + Websites Permission Slip. “It’s no secret that a huge number of companies are collecting, buying and selling data about us. Find out what information they collect, and take action to help protect yourself.” An app by Consumer Reports that checks to see which busi

This is my monthly roundup of the books, articles, and streaming media I found interesting. Here's my list for January, 2023.

Apps + Websites

Permission Slip. “It’s no secret that a huge number of companies are collecting, buying and selling data about us. Find out what information they collect, and take action to help protect yourself.” An app by Consumer Reports that checks to see which businesses hold data about you - and then helps you to remove it. Great stuff.

Books Fiction

The Kaiju Preservation Society, by John Scalzi. This was written as catharsis after the stress and trauma of 2020-21, and reading it was equally cathartic. The author calls it a pop song of a book, and that’s exactly right. It might not be Bach but it has a good beat and I’ll be humming it for months. If you’re looking for catharsis too, you could do much, much worse.

Nonfiction

Rest Is Resistance: A Manifesto, by Tricia Hersey. In a lot of ways best read as a kind of sermon on self-sovereignty, Rest is Resistance is a treatise on fighting back against grind culture and prioritizing your needs over the needs of the exploitative economic system you happen to live in. So many of these harmful ideas are baked into American culture; so much so that some of the pleas here might seem obvious to foreign ears. Nonetheless, we need more of this work, and I found this book to be both affirming and necessary.

Notable Articles AI

The generative AI revolution has begun—how did we get here? “But there was also a surprise. The OpenAI researchers discovered that in making the models bigger, they didn’t just get better at producing text. The models could learn entirely new behaviors simply by being shown new training data. In particular, the researchers discovered that GPT3 could be trained to follow instructions in plain English without having to explicitly design the model that way.” A superb introduction.

SEO Spammers Are Absolutely Thrilled Google Isn't Cracking Down on CNET's AI-Generated Articles. “The implication was clear: that tools like ChatGPT will now allow scofflaws to pollute the internet with near-infinite quantities of bot-generated garbage, and that CNET have now paved the way. In a way, it served as a perfect illustration of a recent warning by Stanford and Georgetown academics that AI tech could rapidly start to fill the internet with endless quantities of misinformation and profiteering.”

OpenAI Used Kenyan Workers on Less Than $2 Per Hour. “One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.””

I asked Chat GPT to write a song in the style of Nick Cave. “ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.”

ChatGPT in DR SBAITSO. “But it got me wondering, what if we replaced the internals of DR SBAITSO with ChatGPT but kept the weird synthesized voice?”

Apple Books quietly launches AI-narrated audiobooks. “Audiobooks narrated by a text-to-speech AI are now available via Apple’s Books service, in a move with potentially huge implications for the multi-billion dollar audiobook industry. Apple describes the new “digital narration” feature on its website as making “the creation of audiobooks more accessible to all,” by reducing “the cost and complexity” of producing them for authors and publishers.” Speaking as a frequent audiobook listener: do not want.

Facial Recognition Tech Used To Jail Black Man For Louisiana Theft - He's Never Been To Louisiana. “There were clear physical differences between Reid and the perpetrator in the surveillance footage, said Reid’s attorney. For example, there was a 40-pound difference in body weight and Reid had a mole on his face. […] Researchers have long noted racial biases in specific facial recognition software, and we’ve seen this play out in wrongful arrests, like those of Nijeer Parks, Robert Williams, and Michael Oliver—all Black men.”

The Expanding Dark Forest and Generative AI. “Hard exiting out of this cycle requires coming up with unquestionably original thoughts and theories. It means seeing and synthesising patterns across a broad range of sources: books, blogs, cultural narratives served up by media outlets, conversations, podcasts, lived experiences, and market trends. We can observe and analyse a much fuller range of inputs than bots and generative models can.”

Business

Americans are increasingly disgruntled at work. “Of note: Workers who were in jobs that could be done remotely, but were forced to work on-site saw an increase of 7 points in active disengagement.”

Macroeconomic Changes Have Made It Impossible for Me to Want to Pay You. “There’s no easy way to say this: I have made the difficult decision to lay off over six thousand of you. In the past two years, we have achieved huge wins together. But unfortunately, the macroeconomic environment has shifted in ways none of us could have foreseen, from an economy in which I did feel like paying you, to one in which I’d rather not.”

Extreme questions to trigger new, better ideas. “The following prompts jostle you out of tiny thinking. Each stretches some dimension of reality to an extreme. So extreme that it is nearly nonsense. But dramatically different perspectives can reveal distinctly new ideas. An idea that would be a 60% solution in an extreme hypothetical case, could be a 2x or even a 10x idea in reality.”

What explains recent tech layoffs, and why should we be worried? “Layoffs often do not cut costs, as there are many instances of laid-off employees being hired back as contractors, with companies paying the contracting firm. Layoffs often do not increase stock prices, in part because layoffs can signal that a company is having difficulty. Layoffs do not increase productivity. Layoffs do not solve what is often the underlying problem, which is often an ineffective strategy, a loss of market share, or too little revenue. Layoffs are basically a bad decision.”

Your Coworkers Are Less Ambitious; Bosses Adjust to the New Order. “Many white-collar workers say the events of the past three years have reordered their priorities and showed them what they were missing when they were spending so much time at the office. Now that normalcy is returning, even some of the workers who used to be always on and always striving say they find themselves eyeing the clock as the day winds down, saying no to overtime work or even taking pay cuts for better work-life balance.” Good!

Climate

Revealed: more than 90% of rainforest carbon offsets by biggest provider are worthless, analysis shows. “The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.”

Compound extreme heat and drought will hit 90% of world population. “The frequency of extreme compounding hazards is projected to intensify tenfold globally due to the combined effects of warming and decreases in terrestrial water storage, under the highest emission scenario. Over 90% of the world population and GDP is projected to be exposed to increasing compounding risks in the future climate, even under the lowest emission scenario.”

Culture

The contagious visual blandness of Netflix. “There are more green screens and sound stages, more CGI, more fixing-it-in-post. As these production tools have gotten slicker and cheaper and thus more widely abused, it’s not that everything looks obviously shitty or too good to feel true, it’s actually that most things look mid in the exact same way. The ubiquity of the look is making it harder to spot, and the overall result is weightless and uncanny. An endless stream of glossy vehicles that are easy to watch and easier to forget.”

Noma, Rated the World’s Best Restaurant, Is Closing Its Doors. “The Copenhagen chef René Redzepi says fine dining at the highest level, with its grueling hours and intense workplace culture, has hit a breaking point: “It’s unsustainable.”” Time to close with one last audacious s’mores dish?

Democracy

‘I’m flabbergasted’: UNC leaders blindsided by trustees' decision on School of Civic Life and Leadership. “Mimi Chapman, chairperson of faculty, said she was “flabbergasted” in response to the exclusion of faculty input in the decision, which she said she considers to be an attack on shared University governance.” From the same university that denied tenure to Nikole Hannah-Jones.

Adam Schiff to run against Porter for Feinstein’s California Senate seat. “Lee is an old-school, anti-establishment liberal with widespread name recognition in the Bay Area. Khanna has built more of a name for himself as a technocrat and wonk in the tech, antitrust, and economic realm, and co-chaired Bernie Sanders’s 2020 presidential campaign. Porter comes from the Elizabeth Warren lane of the party. But Schiff’s congressional identity has been shaped by his establishment ties.”

Election workers could see expanded protections as threats continue. “While election workers mostly powered through a smooth process in November, the threat of political violence continues, according to election officials and voting rights advocates. Ramping up protections for election workers will be critical this year for legislatures.”

NPR obtained secret tapes recorded by prison staff during Virginia executions. “An NPR investigation can now reveal the tapes show the prison neglected to record key evidence during what was considered one of Virginia’s worst executions, and staff appeared unprepared for some of the jobs they were tasked to do in the death chamber.”

Donelan confirms stiffer online safety measures after backbench pressure. “Under a further change to the bill, video footage that shows people crossing the Channel in small boats in a “positive light” will be added to a list of illegal content that all tech platforms must proactively prevent from reaching users.” How is this internet safety?!

Missouri House faces backlash for women’s dress code rule. “Democrats have excoriated Republicans on social media for legislating over what women should be required to wear. Criticism of the rule change comes at a time when the treatment of women in Missouri has received national attention.”

Spot the difference: Boris Johnson appears scrubbed from photo posted by Shapps. “Social media users were quick to point out that Johnson appeared to have been erased from the image – an identical picture is still on the No 10 Flickr account, dated 9 June 2021, with the former PM standing between Shapps and Hart.” How very Stalin of him.

‘It never stops’: killings by US police reach record high in 2022. “US law enforcement killed at least 1,176 people in 2022, making it the deadliest year on record for police violence since experts first started tracking the killings, a new data analysis reveals.”

These anti-trans bills are being prepped for 2023 state legislative sessions. “Lawmakers in at least eight states used the last two months of 2022 to prefile anti-transgender bills ahead of state legislative sessions convening this month — setting up another year of statehouse battles over trans rights, while targeting health care for trans adults in new ways.”

The secret money fueling the conservative anti-ESG push. “This isn’t a grassroots movement, and it isn’t coming from the financial industry, where most experts argue that considering issues like climate is prudent for investors. “I think it’s motivated by politics,” says Witold Henisz, a professor at the University of Pennsylvania’s Wharton School. “They think it’s a political wedge issue. You can see some of the same patterns of money moving into the anti-ESG movement that pushed back against climate science in the aughts.””

A Con Man Is Succeeding Me in Congress Today. “But for now, there is no getting around the fact that Mr. Santos’s con game is a manifestation of a growing political phenomenon of saying or doing anything, with no automatic consequences. Whether it be far-right election deniers, personal attacks that call for violence against opponents, claims of false-flag mass shootings, extremists spouting the first thing that comes to mind and even one politician saying he could “shoot somebody” on Fifth Avenue and still not lose supporters.”

Health

How our microbiome is shaped by family, friends and even neighbours. “People living in the same household share more than just a roof. Be they family or flatmate, housemates tend to have the same microbes colonizing their bodies, and the longer the cohabitation, the more similar these microbiomes become. The conclusion raises the possibility that diseases linked to microbiome dysfunction, including cancer, diabetes and obesity, could be partly transmissible.”

Gas stove health concerns add urgency to calls for changes in public housing. “Alarmingly, in a focus group conducted by the Public Health Law Center in Chicago, nearly 100 percent of public housing participants said they have also turned on their gas stoves to stay warm on cold days, which is an added danger for residents.”

Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States. “The proportion of childhood asthma that could be theoretically prevented if gas stove use was not present (e.g., state-specific PAFs) varied by state (Illinois = 21.1%; California = 20.1%; New York = 18.8%; Massachusetts = 15.4%; Pennsylvania = 13.5%). Our results quantify the US public health burden attributed to gas stove use and childhood asthma.”

Lead and Cadmium Could Be in Your Dark Chocolate. “The chocolate industry has been grappling with ways to lower those levels. To see how much of a risk these favorite treats pose, Consumer Reports scientists recently measured the amount of heavy metals in 28 dark chocolate bars. They detected cadmium and lead in all of them.”

Media

Newsrooms that move beyond ‘objectivity’ can build trust. “Newer, nonprofit news organizations often have launched with stated missions. The national digital news site the 19th, for example, aims to “elevate voices of women, people of color, and the LGBTQ+ community.””

Three years of The 19th: 30 cities, 54 employees and news that represents. “In the last year alone, we’ve grown at an astronomical pace: from 32 employees to 54, from a news organization that pledged to be the most representative in the nation to one where 65 percent of our staff is non-White, 30 percent are LGBTQ+ and 19 percent are living with disabilities. We’re now on the ground in more than 30 U.S. cities.” I’m so proud to be a part of this team.

Layoff Brain. “Layoffs are the worst for the people who lose their job, but there’s a ripple effect on those who keep them — particularly if they keep them over the course of multiple layoffs. It’s a curious mix of guilt, relief, trepidation, and anger. Are you supposed to be grateful to the company whose primary leadership strategy seems to be keeping its workers trapped in fear? How do you trust your manager’s assurances of security further than the end of the next pay period?”

Trump Looks to Abandon Truth Social, His Own Social Media Platform. “Since late last year, former President Trump has informed several people close to him that he doesn’t want to re-up the exclusivity agreement with his social media company, Truth Social, two sources familiar with the matter tell Rolling Stone. “There’s not going to be a need for that,” is how one of the sources recalls Trump describing his soon-to-expire contractual obligation. […] Trump and some of his close allies have already brainstormed about him tweeting that, even though Big Tech tried to “silence” him over his lies about a “rigged election,” he was now back to make “the Left” miserable.”

Journalists (And Others) Should Leave Twitter. Here’s How They Can Get Started. “Many journalism organizations and public entities, such as local governments, believe Twitter is essential because it’s a place people know they can turn to when there’s big news — and find information from “verified accounts” that (barring a hack) ensure the source is who it’s claiming to be. So, they tell themselves, they have to stick around. This isn’t just short-sighted. It’s foolish.”

Publishers, you should start using Mastodon: 10 reasons why. “There are plenty of articles about why you should leave Twitter (or at least, cross-post to Mastodon) for ethical, safety, political, social, and security/privacy reasons. This post won’t do any of those things. Instead, all my arguments are about why it’s smart from a pure business, marketing, and influence perspective to use Mastodon as soon as possible.”

Science

U.S.D.A. Approves First Vaccine for Honeybees. ““There are millions of beehives all over the world, and they don’t have a good health care system compared to other animals,” she said. “Now we have the tools to improve their resistance against diseases.”” Vaccines for bees!

Society

A vast majority of Americans are concerned people could face criminal penalties for abortion. “The data found that 80 percent of Americans are concerned that domestic abuse survivors could be reported by their abuser for getting an abortion. Eighty percent of people are also concerned that law enforcement could investigate people who have miscarriages or stillbirths if they are suspected of getting an abortion. The poll also found that 75 percent of people are concerned that people who get an abortion could be charged with a felony or go to jail.”

Inside a US Neo-Nazi Homeschool Network With Thousands of Members. “Since the group began in October 2021 it has openly embraced Nazi ideology and promoted white supremacy, while proudly discouraging parents from letting their white children play with or have any contact with people of any other race. Admins and members use racist, homophobic, and antisemitic slurs without shame, and quote Hitler and other Nazi leaders daily in a channel open to the public.”

The tragedy of the commons is a false and dangerous myth. “Even before Hardin’s ‘The Tragedy of the Commons’ was published, however, the young political scientist Elinor Ostrom had proven him wrong. While Hardin speculated that the tragedy of the commons could be avoided only through total privatisation or total government control, Ostrom had witnessed groundwater users near her native Los Angeles hammer out a system for sharing their coveted resource.”

Pain of police killings ripples outward to traumatize Black people and communities across US. “Evidence shows that many Black Americans across the U.S. experience police killings of other Black people as traumatic events, and that this trauma diminishes the ability of Black communities to thrive.” Sobering statistics.

U.S. Officials Announce Plans To Continue Pretending Brutal State-Sponsored Violence Not Supposed To Happen. ““Today, as we deal with the fallout from the death of Tyre Nichols, myself and the highest officials in the American government pledge to keep acting like we don’t want our highly militarized police force to kill innocent civilians every day,” said President Joe Biden at a White House press conference, adding that he and his fellow elected officials would pretend to gasp, pray, and put on a big emotional show every time law enforcement carried out the exact murders against its own citizens they had both tacitly and publicly approved.”

Unionization increased by 200,000 in 2022: Tens of millions more wanted to join a union, but couldn’t. “One crucial way we can promote a more prosperous, equitable economy is to dismantle existing barriers to union organizing and collective bargaining. It is urgent that policymakers enact reforms at the federal and state levels to protect and support workers’ right to unionize.”

Why Elon Musk and the billionaire space bros want to put people in space cages forever. “That said, I disagree with Mr. Wanjek: it requires much more than libertarian naiveté to colonize space. Parking humans in containment shelters, on Mars or elsewhere, so as to breed them and select them like cattle — that requires malice.”

2023 'Doomsday Clock' moved 10 seconds closer to catastrophe. “Scientists revealed on Tuesday that the “Doomsday Clock” has been moved up to 90 seconds before midnight -- the closest humanity has ever been to armageddon.” But everything else is going so well.

Post-Roe March for Life showed anti-abortion activists are far from done. “The next steps for the movement were illustrated by the march’s new route this year: Instead of ending at the steps of the Supreme Court as they have for nearly five decades, activists ended their march at the U.S. Capitol — underscoring their continued push for Congress to enact a federal abortion ban.”

Anti-LGBTQ+ rhetoric and state laws are hurting youth mental health, poll shows. “Seventy-one percent of the 716 surveyed LGBTQ+ youth, ranging from teenagers to young adults who took the online poll last fall, said that debates around state laws restricting the rights of LGBTQ+ young people had negatively impacted their mental health. Twenty-seven percent characterized the negative effect as severe.”

We Convinced Our School to Bring Back Masks. “As parents, it’s worth remembering that the persistent ones usually get what they want. Look at the anti-maskers. They didn’t give up. They kept pushing until they got their way. We don’t need to be that aggressive, but I think a lot of us get discouraged and give up. The anti-masker types never give up. They never seem to get tired. So if we want to beat them and win over middle earth, we have to match their energy.”

Elon Musk-funded nonprofit run by MIT professor offered to finance Swedish pro-nazi group. “The US-based and Elon Musk-funded Future of Life Institute, run by MIT professor and Swedish citizen Max Tegmark, offered a grant of $100,000 to right-wing extremists in Sweden, an Expo investigation reveals.”

Public Transit Goes Off the Rails With Fewer Riders, Dwindling Cash, Rising Crime. “Several of the nation’s largest urban mass-transit systems are at a crossroads, with ridership still depressed three years into the pandemic and federal aid running out.”

Technology

Instagram's co-founders are mounting a comeback. “TikTok’s innovation was to show you stuff using only algorithmic predictions, regardless of who your friends are or who you followed. It soon became the most downloaded app in the world. Artifact represents an effort to do the same thing, but for text.” Potentially an interesting app, based on an interesting insight.

Women, minorities lose ground in tech layoffs. “The technology industry has long struggled to recruit a diverse workforce, but the recent spate of cuts by Silicon Valley companies has hit women particularly hard, according to recently published analyses of demographic data from the layoffs. Women and some minorities were particularly vulnerable to layoffs because they were newer to their jobs and occupied roles that companies were less interested in retaining, experts said.”

U.S. sues Google for allegedly breaking antitrust laws with its ad business. ““For 15 years, Google has pursued a course of anticompetitive conduct that has allowed it to halt the rise of rival technologies, manipulate auction mechanics, to insulate itself from competition, and force advertisers and publishers to use its tools,” said Attorney General Merrick Garland at a press conference announcing the lawsuit.”

Warning: Do not “other” me because of my age. “But I hate this new attention lavished on aging. Why? Because it “others” me. It puts me in a separate category from the rest of the world, and that is not how I think, feel, act, or want to be seen. All of a sudden I am not smart, pretty, successful, talented, or part of the family. I am “old.” I am somebody’s responsibility. I have to be told when to stop driving, and my checkbook can be taken away. I am a candidate for Senior Living (banishment to a place full of other old people).”

Tapbot shuts down Tweetbot as it pivots to Mastodon. “Now that Twitter has confirmed it’s banning third-party clients, some of the most prominent alternatives are going away. Tapbots has shut down work on Tweetbot, one of the more popular iOS apps, as Twitter rendered it non-functional “in a blink of an eye.” The developer is instead pivoting to Ivory, an app for the open social platform Mastodon. While it’s limited to an invitation-only test for now, Tapbots hopes to make the software “better than Tweetbot ever could be.”” Likewise, Mastodon will be better than Twitter ever could be.

U.S. No Fly List Left on Unprotected Airline Server. “Analysis of the server resulted in the discovery of a text file named “NoFly.csv,” a reference to the subset of individuals in the Terrorist Screening Database who have been barred from air travel due to having suspected or known ties to terrorist organizations.”

‘Passion economy’ platforms cut costs in tech downturn. ““People are making choices,” said Rebecca McGrath, an internet analyst at Mintel. “Unless you’re very loyal to a creator, that’ll be one of the obvious things to drop.””

Tesla video promoting self-driving was staged, engineer testifies. “A 2016 video that Tesla used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer.” They’re fun cars to drive, but don’t let them drive themselves.

Medium embraces Mastodon. “Today, Medium is launching a Mastodon instance at http://me.dm/ to help our authors, publications and readers find a home in the fediverse. Mastodon is an emerging force for good in social media and we are excited to join this community.” Hell yeah.

The Effects of Online Content Moderation: Evidence from President Trump's Account Deletion. “The toxicity of tweets sent by Trump followers relative to a representative sample of US Twitter users dropped by around 25% after the account deletion. Second, this effect is larger for pro-Trump tweets and Republican users. Third, Trump’s suspension reduced the total number of tweets, suggesting a drop in engagement. Fourth, we find effects on individuals who did not follow Trump directly but followed somebody that did, suggesting network spillovers.”

The Intercept Obtains Surveillance Footage of Tesla Crash on Bay Bridge. “These semi-autonomous systems are playing the same sort of trick as ChatGPT: they offer a convincing but shallow impression of a competent driverless car without any broader context to fall back on.”

Apache® Appropriation. “We urge The Apache® Software Foundation to take the necessary steps needed to express the ally-ship they promote so deeply on their website, to act in accordance with their own code of conduct, to “be careful in the words that [they] choose”, and change their name.” +1.

San Francisco Police Are Using Driverless Cars as Mobile Surveillance Cameras. “Law enforcement agencies already have access to automated license plate readers, geofence warrants, Ring Doorbell footage, as well as the ability to purchase location data. This practice will extend the reach of an already pervasive web of surveillance.”

Seattle schools sue tech giants over social media harm. “[The lawsuit] blames [social media giants] for worsening mental health and behavioral disorders including anxiety, depression, disordered eating and cyberbullying; making it more difficult to educate students; and forcing schools to take steps such as hiring additional mental health professionals, developing lesson plans about the effects of social media, and providing additional training to teachers.”

‘Office Space’ Inspired Engineer’s Theft Scheme, Police Say. “A software engineer siphoned more than $300,000 from his employer by introducing what prosecutors called a “series of malicious software edits” that wired money into his personal account. If the scheme sounds like the plot of “Office Space,” that’s because the authorities said it was partly inspired by the movie.”

Activity Streams graphical model. “So I did a bit of drawing just to make it clearer (for myself) what kind of data can be shipped around in the Fediverse. To be clear, this is only a small part of the overall stack, but an important one.” Useful work!

Twitter

Elon Musk’s Twitter hit with holocaust denial hate speech lawsuit in Germany. “Current studies prove that 84% of posts containing antisemitic hate speech were not reviewed by social media platforms, as shown in a study by the Center for Countering Digital Hate. Which means that Twitter knows Jews are being publicly attacked on the platform every day and that antisemitism is becoming a normality in our society. And that the platform’s response is by no means adequate.”

Daring Fireball: If You Needed Any More Confirmation, Internal Slack Messages at Twitter Show That Cutting Off Third-Party Clients Was 'Intentional'. “Twitter can of course do what it wants, and Musk owns Twitter so he can do what he wants. But pulling the plug on these clients and ghosting everyone on communications about it is so absurdly disrespectful. Zero respect for the users for those apps, zero respect for the developers behind them — many of whom had been building on the Twitter platform for 10-15 years. Just a clown show.”

How Twitter misleads us about how many people have left — and what to do about it. “To outside observers, it can seem like Twitter users are continuing as before, seemingly unaware of the millions of people who have left. “You left Twitter?” a friend recently remarked, “I hadn’t noticed.” Yet many of the accounts I follow haven’t tweeted in ages, and roughly 15% of them have already set up accounts on Mastodon.” Some great tips in this piece.


Foss & Crafts

55: Free Soft Wear

Morgan talks about "Free Soft Wear": textile processes under free culture licenses! Links: Morgan's talk about Free Soft Wear at the Creative Freedom Summit Elena of Valhalla’s repository of CC BY-SA sewing patterns Morgan's blog Free Soft Wear index Dice bag and simple skirt tutorials RSI Glove pattern Simple sweater Layered Skirt Kat Walsh or @kat@stareinto.space

Morgan talks about "Free Soft Wear": textile processes under free culture licenses!

Links:

Morgan's talk about Free Soft Wear at the Creative Freedom Summit

Elena of Valhalla’s repository of CC BY-SA sewing patterns

Morgan's blog

Free Soft Wear index

Dice bag and simple skirt tutorials

RSI Glove pattern

Simple sweater

Layered Skirt

Kat Walsh or @kat@stareinto.space

Tall Dog Electronics face mask (You may recognize Dan and Tall Dog Electronics of TinyNES fame)

Wikimedia Commons

Project Gutenberg

Learning the sewing machine

RSI episode

FreeSewing (an open source software project that creates made-to-measure creative commons licensed sewing patterns)

Tuesday, 31. January 2023

Ben Werdmüller

Which sales enablement tool powers all those ...

Which sales enablement tool powers all those template-driven follow-up-heavy cold emails? I hereby petition that we yeet it directly into the sun.

Which sales enablement tool powers all those template-driven follow-up-heavy cold emails? I hereby petition that we yeet it directly into the sun.


Few things terrify me more than yoga, ...

Few things terrify me more than yoga, which is really a terror and dissatisfaction with my own body. I would like to fix that.

Few things terrify me more than yoga, which is really a terror and dissatisfaction with my own body. I would like to fix that.


Instagram's co-founders are mounting a comeback

“TikTok’s innovation was to show you stuff using only algorithmic predictions, regardless of who your friends are or who you followed. It soon became the most downloaded app in the world. Artifact represents an effort to do the same thing, but for text.” Potentially an interesting app, based on an interesting insight. #Technology [Link]

“TikTok’s innovation was to show you stuff using only algorithmic predictions, regardless of who your friends are or who you followed. It soon became the most downloaded app in the world. Artifact represents an effort to do the same thing, but for text.” Potentially an interesting app, based on an interesting insight. #Technology

[Link]


The generative AI revolution has begun—how did we get here?

“But there was also a surprise. The OpenAI researchers discovered that in making the models bigger, they didn’t just get better at producing text. The models could learn entirely new behaviors simply by being shown new training data. In particular, the researchers discovered that GPT3 could be trained to follow instructions in plain English without having to explicitly design

“But there was also a surprise. The OpenAI researchers discovered that in making the models bigger, they didn’t just get better at producing text. The models could learn entirely new behaviors simply by being shown new training data. In particular, the researchers discovered that GPT3 could be trained to follow instructions in plain English without having to explicitly design the model that way.” A superb introduction. #AI

[Link]


Newsrooms that move beyond ‘objectivity’ can build trust

“Newer, nonprofit news organizations often have launched with stated missions. The national digital news site the 19th, for example, aims to “elevate voices of women, people of color, and the LGBTQ+ community.”” #Media [Link]

“Newer, nonprofit news organizations often have launched with stated missions. The national digital news site the 19th, for example, aims to “elevate voices of women, people of color, and the LGBTQ+ community.”” #Media

[Link]


@_Nat Zone

February 2 (Thu), 10 p.m.: Casual read-along livestream of the NIST SP800-63-4 draft, session #3

The read-through of the US NIST SP800-63-4 that we started last week…

The third session of the casual read-through of the Initial Public Draft of NIST SP800-63-4 (published last December, open for public comment until March 24), which we started last week, will take place on February 2 at 10 p.m. This time we start from Section 2.2 of Part A, the identity proofing standard. I expect this will include discussion of whether it is even appropriate to ask for a person's name. You might think that asking people to enter a family name and given name is obviously fine, but it isn't. Splitting names into family name and given name is a very Western-centric convention. In some cultures there is no family name, yet systems refuse to process a record without one; and when you do try to enter your name, character limits or unsupported characters can get in the way.

These are, in fact, questions of cultural equity, and equity is one of the headline themes of this revision of SP800-63.

Incidentally, because Japan standardised on family-name-plus-given-name in the Meiji era, the issue is somewhat less visible here, but it still exists. For example, the person I rent my house to is from abroad and has no family name, so they repeat their given name in the family-name field.

International boarding passes sometimes allow only eight characters, so mine reads SAKIMURA NATSUHIK. A certain bank's system is the same: my name would not fit in the romanised-name field of a combined cash/credit card, so I was asked to shorten it. I took the opportunity to make it NAT SAKIMURA.1

Many systems cannot even accept the kanji in my family name: 﨑. I would personally be fine with 崎, but my family register uses 﨑, so anything else sometimes gets rejected. And yet the correct character often cannot be entered, or comes out garbled on output.

Katakana names are a problem too. "ナットコンサルティング" (Nat Consulting) cannot be entered into eLTAX, because the small "ッ" and "ィ" are not accepted. Worse, the system silently pulls the name in from the kanji field, so I could not tell what was triggering the error and was left pleading, "please just let me pay my taxes." Tracking down the cause was painful. My tax accountant's advice, by the way: "pay at the counter."
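To make those failure modes concrete, here is a small illustrative Python sketch of my own (not code from any of the systems mentioned): an eight-character name field and a legacy-style check that rejects small kana.

GIVEN_NAME_LIMIT = 8  # e.g. an airline reservation field

def truncate_given_name(name: str) -> str:
    # Hard length limits silently mangle longer names
    return name[:GIVEN_NAME_LIMIT]

SMALL_KANA = set("ァィゥェォッャュョヮ")

def legacy_katakana_ok(name: str) -> bool:
    # Mimics a form that refuses small kana characters
    return not any(ch in SMALL_KANA for ch in name)

print(truncate_given_name("NATSUHIKO"))           # -> "NATSUHIK"
print(legacy_katakana_ok("ナットコンサルティング"))  # -> False: ッ and ィ are rejected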

Restrictions like these should really send us back to the question of why we ask for a name at all. eLTAX, for example, already has the corporate number, which ought to be sufficient on its own; a corporate name is not unique in the first place, so it cannot serve as an identifier anyway.

Coming back to individuals, dropping the requirement to supply a name could also reduce the various disadvantages people suffer when they change their surname. From an equity standpoint it is important to assess whether requiring a name creates barriers for particular groups, such as making applications harder for them, and whether it ends up functioning in a discriminatory way. So if you are a proponent of #選択的夫婦別姓 (selective separate surnames for married couples), please do think this through with us.

Of course, the central issue in SP800-63-4 is not names but things like requiring photo ID.

Version 4 covers considerably more ground than version 3 (SP800-63-3). There are still some gaps in that respect; when I asked one of the authors, "isn't ○○ missing entirely?", the answer was that they are aware of it and plan to write it, and that "all comments are welcome."

SP800-63 is a standard for the US federal government, but it strongly influences related standards in other countries, Japan's government guidelines among them. With the Japanese government apparently also considering revisions in this area, studying SP800-63 properly is worthwhile.

So I plan to read SP800-63-4 over roughly eight sessions. The reason for eight is that the related documents are:

SP800-63-4 https://nvlpubs.nist.gov/nistpubs/Spe…
SP800-63A-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63B-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63C-4 https://doi.org/10.6028/NIST.SP.800-6…

four volumes in all, and there is no way we could finish in a single session.

The OpenID Foundation Japan is also hard at work on Japanese translations, so some of the material may be readable in Japanese by the time we stream. If they are usable, I plan to keep them alongside as companion texts.

Both chat participation and video participation2 are possible. Video participation uses mmhmm; if you would like to join via mmhmm, contact me and I will send you a link. For chat, subscribing to this YouTube channel at least a minute in advance should be enough, although last time someone who subscribed just before the stream could not post to chat, so I recommend subscribing ahead of time.


Jon Udell

Verification for Mastodon

This post attests to my ownership of this Mastodon account.

This post attests to my ownership of this Mastodon account.

Monday, 30. January 2023

Doc Searls Weblog

Does Sirius XM sound far worse than listening to music on YouTube?

That’s a question asked on Quora and deleted before I posted my answer. So I’m posting my answer here. This is like asking if a car radio sounds better than a TV. Because it’s a matter of where, how, when, and what, more than a matter of sound. There is some overlap in the sense […]

That’s a question asked on Quora and deleted before I posted my answer. So I’m posting my answer here.

This is like asking if a car radio sounds better than a TV. Because it’s a matter of where, how, when, and what, more than a matter of sound.

There is some overlap in the sense that both SiriusXM and YouTube are fully useful on mobile apps. But you don’t watch your radio in your car or listen to your radio on your TV, even though it’s possible to do both through apps that are native to both the road (through Android Auto and Apple Carplay) and the living room (through Roku, Amazon, Apple, and other TV app platforms).

As for the sound itself, YouTube lets you select audio quality bitrates up to 256kbps AAC & OPUS. SiriusXM’s default bitrate is also 256kbps, but over the satellite link bitrates are typically lower—sometimes much lower. And since SiriusXM does not (to my knowledge, so far) publish its bitrates in a place that’s easy to find, they are subject to debate. Here is one of those debates on Reddit.
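As a rough back-of-the-envelope check on what those numbers mean, 256kbps works out to a little over 100MB per hour of listening:

kbps = 256
bytes_per_hour = kbps * 1000 / 8 * 3600  # kilobits/s -> bytes/s -> bytes/hour
print(f"{bytes_per_hour / 1e6:.0f} MB per hour")  # ~115 MB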

But, again, it’s a matter of where, when, and what, more than how. If you want to see and hear a piece of music, YouTube provides enormous optionality, with its almost boundless collection of videos. If you want radio-like music programming, SiriusXM offers more—plus talk, news, sports (including play-by-play for all the major ones), and more.

Yet the Internet has more than both put together. That’s why the image above is of Radio Paradise, which is one of the oldest and best Internet music stations. It’s live on the Net and the Web, and it has Best Of collections on YouTube as well.

Bonus link (and a lot of fun): radio.garden. There’s an app for that too.


Damien Bod

Using multi-tenant AAD delegated APIs from different tenants

This post looks at implementing and using Azure AD multiple tenant applications from different tenants. A service principal needs to be created for the tenant using the multi-tenant API and consent needs to be given for the API scope. The API will accept tokens from different issuers which need to be validated. It is important […]

This post looks at implementing and using Azure AD multiple tenant applications from different tenants. A service principal needs to be created for the tenant using the multi-tenant API and consent needs to be given for the API scope. The API will accept tokens from different issuers which need to be validated. It is important that all tenants allowed to use the API are validated.

Code: https://github.com/damienbod/AadMutliApis

Setup API Azure App Registration

A multi-tenant API Azure App registration is used to expose the scope which is required to use the API. The API is implemented using ASP.NET Core and validates this claim in the JWT token as well as other claims. V2 tokens are required and only delegated access tokens are created from this App registration.

Implement the API

An ASP.NET Core application implements the API and the security logic that validates the access token. Only an explicit list of issuers can use the API. The API validates that a client secret was used to authenticate, and the client that requested the access token is also validated. The authorization handler validates that the token has a scope claim, which means that it is a delegated access token (if produced by Azure AD). Great care has to be taken when using multi-tenant app registrations, because any tenant can request a token for the API but not every tenant should be allowed to use it.

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(JwtBearerDefaults.AuthenticationScheme, options =>
    {
        options.MetadataAddress = aadMetadataAddress;
        //options.Authority = issuert1;
        options.Audience = aud;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidAudiences = new List<string> { aud },
            ValidIssuers = new List<string> { issuert1 }
        };
    });

services.AddSingleton<IAuthorizationHandler, ValidTenantsAndClientsHandler>();

services.AddAuthorization(policies =>
{
    policies.AddPolicy("ValidTenantsAndClients", p =>
    {
        // only delegated trusted known clients allowed to use the API
        p.Requirements.Add(new ValidTenantsAndClientsRequirement());
        // Validate id of application for which the token was created
        p.RequireClaim("azp", azpClientId);
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });
});

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

The handler validates that the scope claim has the expected value. Together with the rest of the validation, it is possible to validate that the access token is intended for this API.

public class ValidTenantsAndClientsHandler : AuthorizationHandler<ValidTenantsAndClientsRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        ValidTenantsAndClientsRequirement requirement)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));
        if (requirement == null)
            throw new ArgumentNullException(nameof(requirement));

        var scopeClaim = context.User.Claims.FirstOrDefault(t => t.Type == "scope");
        if (scopeClaim != null)
        {
            var scopes = scopeClaim.Value.Split(" ", StringSplitOptions.RemoveEmptyEntries);
            if (scopes.Any(t => t == "access_as_user"))
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}

Setup Service Principal for other tenants

Now that the application is ready and the App registration exists, a service principal can be created for this Azure App registration in the target tenant.

Connect-AzureAD -TenantId '<UI-tenantId>'
New-AzureADServicePrincipal -AppId 'AppId-from-multi-tenant-api'

Give consent in your tenant to the Enterprise applications

The service principal can be found in the Enterprise applications blade.

Open the Enterprise Applications blade.
Find your enterprise application using the ObjectId (GUID) from the PowerShell script.
Open the Permissions blade.
Grant admin consent if you need to use local tenant permissions.

Using and consent

To use the UI application and the multi-tenant API, consent must be given, usually by a tenant administrator on behalf of all users in this tenant. Different consent screens are displayed depending on the Azure tenant policies and the person using the application.

Once consent has been given, this can be viewed in the API permissions of the Enterprise application created for the target tenant.

The extra step of explicitly allowing the tenants that can use the API has advantages. You can continue to use delegated scopes and implement token exchange protocols for the downstream APIs where required. This is a secure way of connecting software systems using different identity providers if implemented correctly. A disadvantage with the approach is that each admin must give consent to use the API on their tenant.
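When setting this up across several tenants, it helps to be able to see which issuer, tenant and client an access token actually came from. The following is a minimal Python sketch that decodes a token payload purely for inspection (no signature validation), so you can check the iss, tid, azp and azpacr claims the API expects; it is a debugging aid only, not a substitute for the validation shown above.

import base64
import json


def peek_claims(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT, for inspection only."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


# Example usage (paste a real access token):
# claims = peek_claims(access_token)
# print({k: claims.get(k) for k in ("iss", "tid", "aud", "azp", "azpacr", "scp")})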

Links
Azure AD Multi tenant Azure App registration consent

https://stackoverflow.com/questions/60929155/how-to-create-service-principal-of-multi-tenant-application


Simon Willison

datasette-scraper, Big Local News and other weeknotes


In addition to exploring the new MusicCaps training and evaluation data I've been working on the big Datasette JSON refactor, and getting excited about a Datasette project that I didn't work on at all.

datasette-scraper

The best thing about a plugin system is that you can wake up one day and your software has grown extra features without you even having to review a pull request.

Colin Dellow's datasette-scraper - first released a few weeks ago - takes that a step further: it's a plugin that builds an entire custom application on top of Datasette.

It's really cool!

Colin has a ten minute demo up on YouTube which is well worth checking out.

The plugin implements a website crawler which can crawl pages, build up a queue (using sitemap.xml if it's available), then fetch and cache the content - storing its progress and the resulting data in a SQLite database hosted by Datasette.

It uses some really neat tricks to customize various pieces of the Datasette interface to provide an interface for configuring and controlling the crawler.

Most impressively of all, it implements its own plugin hooks... which means you can use small custom plugins to define how you would like data to be extracted from the pages you are crawling.

Colin has a bunch of other Datasette plugins that are worth checking out too:

datasette-rewrite-sql monkey-patches Datasette's database connection code (since there's no appropriate plugin hook there yet) to provide its own hook for further plugin functions that can rewrite SQL queries before they get executed.

datasette-ersatz-table-valued-functions (ersatz (adj.): made or used as a substitute, typically an inferior one, for something else.) is a delightfully gnarly hack which supports custom table-valued SQL functions in SQLite, despite Python's sqlite3 module not providing those. It works by rewriting a SQL query against a function that returns JSON to use a gnarly CTE and json_each() combo instead.

datasette-ui-extras is brand new: it tweaks the Datasette default interface in various ways, adding features like sticky headers and facets in a sidebar. I'm so excited to see someone experimenting with changes to the default UI in this way, and I fully expect that some of the ideas Colin is playing with here will make it into Datasette core in the future.

Datasette and Big Local News

Big Local News is a collaborative data archive for journalists run by a team out of Stanford.

I worked with Ben Welsh this week to build an updated version of an old integration with Datasette, which went live on Friday.

Here's the animated GIF demo from their announcement:

The code I wrote for this integration lives in simonw/datasette-big-local. I ended up building a custom plugin which exclusively works with Big Local (so it's not shipped to PyPI).

The main challenge with this project involved authentication and permissions. Big Local has both public and private datasets, and we wanted the integration to work with both. But if someone opened a file in a private project, only they and others with access to that project should be able to view the resulting table.

In the end I solved this by creating a new SQLite database for each project, then configuring Datasette's authentication system to run a permission check for each user to check that they were allowed to access files in the associated project, via a call to the Big Local GraphQL API.

The first time a user opens a file in Big Local, JavaScript on the site performs an HTTP POST to Datasette, transmitting details of the project ID, the filename and an authentication token for that user. That token can then be used by Datasette to call the GraphQL API on their behalf, verifying their permissions and signing them into the Datasette instance using a signed cookie.
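To make the shape of that check concrete, here is a minimal Python sketch of a Datasette permission_allowed plugin hook that defers to a GraphQL membership query. The endpoint URL, the GraphQL query, the "token" key on the actor and the Authorization header format are all assumptions for illustration; the real logic lives in the datasette-big-local plugin.

from datasette import hookimpl
import httpx

# Hypothetical endpoint and query - placeholders, not the real Big Local API.
GRAPHQL_URL = "https://api.example.com/graphql"
PROJECT_QUERY = "query ($id: ID!) { node(id: $id) { ... on Project { id } } }"


@hookimpl
def permission_allowed(actor, action, resource):
    # Each project gets its own database, so the resource is treated as the project ID
    if action != "view-database" or not actor:
        return None  # leave the decision to other plugins / defaults
    token = actor.get("token")  # assumption: the signed cookie stored a token
    if not token:
        return False
    response = httpx.post(
        GRAPHQL_URL,
        json={"query": PROJECT_QUERY, "variables": {"id": resource}},
        headers={"Authorization": "Bearer {}".format(token)},
    )
    # If the API returns the project node, the user is allowed to see this database
    return bool(response.json().get("data", {}).get("node"))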

The datasette-big-local README includes full details about how this all works.

Progress on ?_extra=

I finally started making progress on Datasette issue #262: Add ?_extra= mechanism for requesting extra properties in JSON - first opened back in May 2018!

This is the key step on the journey to slimming down Datasette's default JSON representation for tables and queries.

I want to return this by default:

{ "ok": true, "rows": [ {"id": 1, "title": "Example 1"}, {"id": 2, "title": "Example 2"}, {"id": 3, "title": "Example 3"} ], "next": null }

Then allow users to specify all kinds of extra information - the table schema, the suggested facets, information on columns, the underlying SQL query... all by adding on ?_extra=x parameters to the URL.
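As a client-side illustration only (the extra names below are hypothetical and the feature is still being built), requesting several extras is just a matter of repeating the parameter:

from urllib.parse import urlencode

# Hypothetical extra names - the final list is still being designed
extras = ["count", "columns", "suggested_facets"]
query_string = urlencode([("_extra", extra) for extra in extras])

url = "https://latest.datasette.io/fixtures/facetable.json?" + query_string
print(url)
# .../facetable.json?_extra=count&_extra=columns&_extra=suggested_facets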

This turns out to be a lot of work: I'm having to completely refactor the internals of the most complicated part of Datasette's codebase.

Still lots more to go, but I'm happy to finally be making progress here.

Improved Datasette examples

The Datasette website has long had an Examples page linked to from the top navigation - and analytics show that it's one of the most visited pages on the site.

I finally gave that page the upgrade it needed. It now starts with illustrated examples that have been selected to help highlight what Datasette can do - both the variety of problems it can be used to solve, and the way plugins can be used to add extra functionality.

shot-scraper 1.1

I implemented the new screenshots for the examples page using my shot-scraper screenshot automation tool.

The screenshots are taken by a GitHub Actions workflow in the datasette-screenshots repository.

I added 5 new screenshot definitions to the YAML that powers that workflow, which is used by the shot-scraper multi command.

In working with shot-scraper I spotted a couple of opportunities for small improvements, so I shipped version 1.1 with the following changes:

New --log-console option for logging the output of calls to console.log() to standard error. #101

New --skip and --fail options to specify what should happen if an HTTP 4xx or 5xx error is encountered while trying to load the page. --skip will ignore the error and either exit cleanly or move on to the next screenshot (in the case of multi). --fail will cause the tool to return a non-zero exit code, useful for running in CI environments. #102
datasette-granian

Granian is a new web server for running Python WSGI and ASGI applications, written in Rust.

A while ago I built datasette-gunicorn, a plugin which adds a datasette gunicorn my.db command for serving Datasette using the Gunicorn WSGI server.

datasette-granian now provides the same thing using Granian. It's an alpha release because I haven't actually used it in production yet, but it seems to work well and it adds yet another option for people who want to deploy Datasette.

Giovanni Barillari, the creator of Granian, was really helpful in figuring out how to dynamically serve a freshly configured ASGI application rather than just passing a module path to the granian CLI command.

datasette-faiss 0.2

I introduced datasette-faiss a few weeks ago. It's a plugin that supports fast vector similarity lookups within Datasette using the FAISS vector search library by Facebook Research.

The first release of the plugin created a FAISS index on server startup for each table that contains an embeddings column. Any similarity searches would then be run against that entire table.

But what if you want to combine those searches with other filters in a query? For example, first filter to every article published in 2022, then run a similarity search on what's left.

In datasette-faiss 0.2 I introduced two new SQLite aggregate functions: faiss_agg() and faiss_agg_with_scores(), that are designed to handle this case.

The new functions work by constructing a new FAISS index from scratch every time they are called, covering just the rows that were processed by the aggregation.

This is best illustrated with an example. The following query first selects the embeddings for just the blog entries published in 2022, then uses those to find items that are most similar to the provided ID.

with entries_2022 as (
  select id, embedding from blog_entry_embeddings
  where id in (select id from blog_entry where created like '2022%')
),
faiss as (
  select faiss_agg(
    id,
    embedding,
    (select embedding from blog_entry_embeddings where id = :id),
    10
  ) as results
  from entries_2022
),
ids as (
  select value as id from json_each(faiss.results), faiss
)
select
  blog_entry.id,
  blog_entry.title,
  blog_entry.created
from ids
join blog_entry on ids.id = blog_entry.id

You can try the query out here.

Releases this week

shot-scraper: 1.1 - (25 releases total) - 2023-01-30
A command-line utility for taking automated screenshots of websites

datasette-render-markdown: 2.1.1 - (10 releases total) - 2023-01-27
Datasette plugin for rendering Markdown

datasette-youtube-embed: 0.1 - 2023-01-27
Turn YouTube URLs into embedded players in Datasette

datasette-granian: 0.1a0 - 2023-01-20
Run Datasette using the Granian HTTP server

datasette-faiss: 0.2 - (2 releases total) - 2023-01-19
Maintain a FAISS index for specified Datasette tables

TIL this week

Rewriting a Git repo to remove secrets from the history
SQLite pragma_function_list()
Installing lxml for Python on an M1/M2 Mac
Combining CTEs and VALUES in SQLite

Sunday, 29. January 2023

Mike Jones: self-issued

JSON Object Signing and Encryption (JOSE) Working Group Reanimated


I’m thrilled that the IETF has restarted the JSON Object Signing and Encryption (JOSE) Working Group. It’s chartered to work on JSON- and CBOR-based representations for Zero-Knowledge Proofs (ZKPs), selective disclosure enabling minimal disclosure, and non-correlatable presentation. The representations are planned to use the three-party model of Issuer, Holder, and Verifier utilized by Verifiable Credentials.

See the newly approved JOSE charter at https://datatracker.ietf.org/doc/charter-ietf-jose/03/. The working group will be chaired by Karen O’Donoghue, John Bradley, and John Mattsson, with the assigned area director being Roman Danyliw.

I believe this is a great outcome because the JOSE working group participants already have expertise creating simple, widely-adopted JSON-based cryptographic formats, such as JSON Web Signature (JWS), JSON Web Encryption (JWE), and JSON Web Key (JWK). The new formats will be peers to JWS, JWE, and COSE, reusing elements that make sense, while enabling use of new cryptographic algorithms whose inputs and outputs are not representable in the existing JOSE and COSE formats.

If you’re interested in the work, please join the JOSE mailing list at https://www.ietf.org/mailman/listinfo/jose if you’re not already a member. Also, plan to participate in IETF 116 Yokohama, where we should be able to have the first meeting of the reconstituted working group. I hope to see you there!

As background, the first step in the JOSE rechartering was the JSON Web Proofs (JWP) BoF at IETF 114 in Philadelphia sponsored by Security Area Director Roman Danyliw and chaired by Karen O’Donoghue and John Bradley, during which Jeremie Miller, Kristina Yasuda, Tobias Looker, and I presented. That was followed by a Virtual Interim JWP BoF in October, 2022, review on the ietf-announce mailing list, and multiple IESG discussions.

All of which brings us back to the (now recurring!) question: “What Would JOSE Do?” Join us and be part of answering it!

Sunday, 29. January 2023

Simon Willison

datasette-scraper walkthrough on YouTube


datasette-scraper walkthrough on YouTube

datasette-scraper is Colin Dellow's new plugin that turns Datasette into a powerful web scraping tool, with a web UI based on plugin-driven customizations to the Datasette interface. It's really impressive, and this ten minute demo shows quite how much it is capable of: it can crawl sitemaps and fetch pages, caching them (using zstandard with optional custom dictionaries for extra compression) to speed up subsequent crawls... and you can add your own plugins to extract structured data from crawled pages and save it to a separate SQLite table!

Via datasette-scraper


Examples of sites built using Datasette


Examples of sites built using Datasette

I gave the examples page on the Datasette website a significant upgrade today: it now includes screenshots (taken using shot-scraper) of six projects chosen to illustrate the variety of problems Datasette can be used to tackle.

Via datasette.io issue #129


reb00ted

What if Apple's headset is a smashing success?


Signs are pointing that Apple will announce its first headset in the next few months. This would be a major new product for Apple – and the industry beyond –, but there is very little excitement in the air.

We can blame Meta for that. After buying Oculus, iterating over the product for almost 9 years since, and reportedly spending more than $10 billion a year on it, their VR products remain a distinct Meh. I bought a Quest 2 myself, and while it definitely has some interesting features (I climbed Mt Everest, in VR!), it mostly sits on the shelf, gathering dust.

So the industry consensus is that Apple’s won’t amount to much either. If Meta couldn’t find compelling use cases, the thinking goes, Apple won’t either, because there aren’t any! (Other than some limited forms of gaming and some niche enterprise ones.)

I think this line of thinking would be a mistake.

My argument: Apple understands their customers and works down their use cases better than anybody. If Apple works on a new product category for many years – and signs are that they have – and then finally decides that the product is ready, chances are, it is. Their track record on new products is largely unblemished since the return of Jobs about 25 years ago:

fruity fun design for a computer (iMac) – success
digital music player (iPod) – smashing success
smartphone (iPhone) – so successful it killed and reinvented an entire industry
tablet (iPad) – success
watch (iWatch) – success
… and many smaller products, like headsets, voice assistance, Keynote etc.

Looking for a major dud in those 25 years, I can’t really find one. (Sure, some smaller things like the 25-year anniversary Mac – but that was always a gimmick, not a serious product line.)

It appears that based on their history, betting against Apple’s headset is not a smart move. Even if we can’t imagine why an Apple headset would be compelling before we see it: we non-Apple people didn’t predict iPhone either, but once we saw it, it was “immediately” obvious.

So let’s turn this around. What about we instead assume the headset will be a major success? Then what?

I believe this would transform the entire technology industry profoundly. For historical analogies, I would have to go back all the way to the early 80’s when graphical user interfaces first became widely used – coincidentally (or not) an Apple accomplishment: they represented a fundamentally different way of interacting with computers than the text terminals that came before them. Xerox Parc gave that demo to many people. Nobody else saw the potential and ran with it; only Apple did. And they pulled a product together that caused the entire industry to transform. Terminals are still in use, but only by very few people for very specific tasks (like system administrators).

What if AR/VR interfaces swept the world as the GUI swept the PC?

I believe they can, if somebody relentlessly focuses on use cases and really makes them work. I built my first 3D prototype in VRML in 1997. It was compelling back then and it would be today. Those uses can be found, I’m quite certain.

Based on everything we’ve seen, it’s clear that Meta won’t find them. Hanging out with your friends who don’t look like your friends in some 3D universe is just not it. But if anybody can do it, it’s Apple.

So I’m very much looking forward to seeing what they came up with, and I think you should be, too.

Saturday, 28. January 2023

Simon Willison

Quoting Tom Critchlow


We’ve built many tools for publishing to the web - but I want to make the claim that we have underdeveloped the tools and platforms for publishing collections, indexes and small databases. It’s too hard to build these kinds of experiences, too hard to maintain them and a lack of collaborative tools.

Tom Critchlow


Cyber

Cyber "Cyber is a new language for fast, efficient, and concurrent scripting." Lots of interesting ideas in here, but the one that really caught my eye is that its designed to be easily embedded into other languages and "will allow the host to insert gas mileage checks in user scripts. This allows the host to control how long a script can run" - my dream feature for implementing a safe, sandboxe

Cyber

"Cyber is a new language for fast, efficient, and concurrent scripting." Lots of interesting ideas in here, but the one that really caught my eye is that its designed to be easily embedded into other languages and "will allow the host to insert gas mileage checks in user scripts. This allows the host to control how long a script can run" - my dream feature for implementing a safe, sandboxed extension mechanism! Cyber is implemented using Zig and LLVM.

Via lobste.rs


sqlite-jsonschema

sqlite-jsonschema "A SQLite extension for validating JSON objects with JSON Schema", building on the jsonschema Rust crate. SQLite and JSON are already a great combination - Alex suggests using this extension to implement check constraints to validate JSON columns before inserting into a table, or just to run queries finding existing data that doesn't match a given schema.

sqlite-jsonschema

"A SQLite extension for validating JSON objects with JSON Schema", building on the jsonschema Rust crate. SQLite and JSON are already a great combination - Alex suggests using this extension to implement check constraints to validate JSON columns before inserting into a table, or just to run queries finding existing data that doesn't match a given schema.


sqlite-ulid


sqlite-ulid

Alex Garcia's sqlite-ulid adds lightning-fast SQL functions for generating ULIDs - Universally Unique Lexicographically Sortable Identifiers. These work like UUIDs but are smaller and faster to generate, and can be canonically encoded as a URL-safe 26 character string (UUIDs are 36 characters). Again, this builds on a Rust crate - ulid-rs - and can generate 1 million byte-represented ULIDs with the ulid_bytes() function in just 88.4ms.


sqlite-fastrand


sqlite-fastrand

Alex Garcia just dropped three new SQLite extensions, and I'm going to link to all of them. The first is sqlite-fastrand, which adds new functions for generating random numbers (and alphanumeric characters too). Impressively, these out-perform the default SQLite random() and randomblob() functions by about 1.6-2.6x, thanks to being built on the Rust fastrand crate which builds on wyhash, an extremely fast (though not cryptographically secure) hashing function.

Friday, 27. January 2023

Simon Willison

graphql-voyager


graphql-voyager

Neat tool for producing an interactive graph visualization of any GraphQL API. Click "Change schema" and then "Introspection" and it will give you a GraphQL query you can run against your own API - copy and paste back the JSON results and the visualizer will show you how your API fits together. I tested this against a datasette-graphql instance and it worked exactly as described.


babelmark3


babelmark3

I found this tool today while investigating a bug in Datasette's datasette-render-markdown plugin: it lets you run a fragment of Markdown through dozens of different Markdown libraries across multiple different languages and compare the results. Under the hood it works with a registry of API URL endpoints for different implementations, most of which are encrypted in the configuration file on GitHub because they are only intended to be used by this comparison tool.

Via datasette-render-markdown issue #13


Exploring MusicCaps, the evaluation data released to accompany Google's MusicLM text-to-music model


Google Research just released MusicLM: Generating Music From Text. It's a new generative AI model that takes a descriptive prompt and produces a "high-fidelity" music track. Here's the paper (and a more readable version using arXiv Vanity).

There's no interactive demo yet, but there are dozens of examples on the site. The prompts are things like this:

A fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound. Induces the experience of being lost in space, and the music would be designed to evoke a sense of wonder and awe, while being danceable.

Included are examples of opera, jazz, peruvian punk, berlin 90s house and many more. It's a really fun page to explore.

The MusicCaps dataset

The paper abstract includes this line:

To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.

Later in the paper they provide a little more detail:

To evaluate MusicLM, we prepare MusicCaps, a high-quality music caption dataset, which we make publicly available. This dataset includes 5.5k music clips from AudioSet (Gemmeke et al., 2017), each paired with corresponding text descriptions in English, written by ten professional musicians. For each 10-second music clip, MusicCaps provides: (1) a free-text caption consisting of four sentences on average, describing the music and (2) a list of music aspects, describing genre, mood, tempo, singer voices, instrumentation, dissonances, rhythm, etc. On average, the dataset includes eleven aspects per clip.

Here's where they published that data on Kaggle.

I love digging into these training datasets - and this one is pretty tiny. I decided to take a look and see what I could learn.

I built musiccaps.datasette.io to support exploring and searching the data.

The dataset itself turns out to not have any audio clips in it at all - instead, each row of the data includes a YouTube video ID and a start and end time for a clip within it.

This is similar to how the LAION dataset used for Stable Diffusion works - that dataset contains URLs to images, but not the images themselves.

A YouTube video ID is just a string like zYM0gtd_PRo - the dataset has no further information.

But... information on those videos is available via the YouTube API. So I fetched back full details of all 5,500 videos and included them in the database as well.

This means we can start answering all sorts of interesting questions:

Which YouTube channels are most represented in the dataset? - the answer is ProGuitarShopDemos with 12, Berliner Philharmoniker with 8, Prymaxe with 8 - generally it's a pretty wide distribution though, with 5,163 channels represented.
How many videos are no longer on YouTube? - 18 of the videos referenced in the dataset no longer exist according to the YouTube API.
How many videos include the magic YouTube description phrase "No copyright infringement intended"? - 31 of them do. See No Copyright Intended by Andy Baio for background on this superstition.

The search feature is configured to run against the human-written descriptions provided as the key feature of the MusicCaps dataset - try some searches like opera, ballad, guitar, or whimsical.

How I built this

The dataset is available on Kaggle, but since it's licensed CC BY-SA 4.0 I grabbed a copy of it and dropped the CSV into this GitHub repo.

You can explore that using Datasette Lite at this URL:

https://lite.datasette.io/?csv=https://github.com/simonw/musiccaps-datasette/blob/main/data/musiccaps-public.csv

This was a decent starting point, but it felt really important to be able to click "play" and listen to that audio.

I built a new Datasette plugin for this: datasette-youtube-embed.

The plugin works by looking out for YouTube URLs of the following format:

https://www.youtube.com/watch?v=-U16iKiXGuY
https://www.youtube.com/watch?v=-U16iKiXGuY&start=30
https://www.youtube.com/watch?v=-U16iKiXGuY&start=30&end=40

If it finds one of those, it replaces it with a YouTube embed that passes through the start and end parameters, if present.

This means it can play the exact clip that was referenced by the MusicCaps dataset.
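To show the general idea, here is a minimal sketch of that kind of transformation written as a Datasette render_cell plugin hook, using a plain iframe embed; the real datasette-youtube-embed plugin differs (as described below) and handles more cases.

import re

import markupsafe
from datasette import hookimpl

# Capture the video ID plus optional start/end seconds from the URL shapes above
YOUTUBE_RE = re.compile(
    r"^https://www\.youtube\.com/watch\?v=([\w-]{11})"
    r"(?:&start=(\d+))?(?:&end=(\d+))?$"
)


@hookimpl
def render_cell(value):
    if not isinstance(value, str):
        return None
    match = YOUTUBE_RE.match(value)
    if match is None:
        return None  # not a matching YouTube URL - render the cell normally
    video_id, start, end = match.groups()
    params = [("start", start), ("end", end)]
    query = "&".join("{}={}".format(k, v) for k, v in params if v)
    src = "https://www.youtube.com/embed/{}{}".format(
        video_id, "?" + query if query else ""
    )
    return markupsafe.Markup(
        '<iframe width="320" height="180" src="{}" frameborder="0" allowfullscreen></iframe>'.format(
            markupsafe.escape(src)
        )
    )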

My first attempt at this plugin used regular YouTube embeds, but Datasette defaults to returning up to 100 rows on a page, and 100 YouTube iframe embeds is pretty heavy!

Instead, I switched the plugin to use the Lite YouTube Embed Web Component by Paul Irish.

Sadly this means the plugin doesn't work with Datasette Lite, so I switched to deploying a full Datasette instance to Vercel instead.

Adding video details from the YouTube API

I wanted to add more context about each of the videos. The YouTube Data API has a videos endpoint which accepts a comma-separated list of video IDs (up to 50 at a time, not mentioned in the documentation) and returns details about each video.

After some experimentation, this turned out to be the recipe that gave me the key data I wanted:

https://www.googleapis.com/youtube/v3/videos ?part=snippet,statistics &id=video_id1,video_id2,video_id3 &key=youtube-api-key

I built a Jupyter notebook that batched up all of the IDs into groups of 50, fetched the data and wrote it into my SQLite database using sqlite-utils.
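The batching loop itself is straightforward. Here is a sketch of what it could look like, assuming the CSV has already been loaded into a musiccaps table in musiccaps.db and that a YouTube Data API key is available (both assumptions for illustration):

import httpx
import sqlite_utils

YOUTUBE_API_KEY = "your-api-key"  # placeholder

db = sqlite_utils.Database("musiccaps.db")
video_ids = [row["ytid"] for row in db["musiccaps"].rows]

for i in range(0, len(video_ids), 50):  # the endpoint accepts up to 50 IDs per call
    batch = video_ids[i : i + 50]
    data = httpx.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet,statistics", "id": ",".join(batch), "key": YOUTUBE_API_KEY},
    ).json()
    rows = []
    for item in data.get("items", []):
        # Flatten the snippet and statistics sections into one row per video
        row = {"id": item["id"]}
        row.update(item["snippet"])
        row.update(item["statistics"])
        rows.append(row)
    db["videos"].insert_all(rows, pk="id", alter=True, replace=True)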

The audioset_positive_labels column in the initial CSV had values like /m/0140xf,/m/02cjck,/m/04rlf - these turned out to match IDs in this CSV file of AudioSet categories in the tensorflow/models GitHub repo, so I fetched and transformed those as well.
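A sketch of that mapping step, assuming the categories CSV has been saved locally as class_labels.csv with mid and display_name columns (the filename and column names here are assumptions):

import csv

with open("class_labels.csv", newline="") as f:
    mid_to_name = {row["mid"]: row["display_name"] for row in csv.DictReader(f)}


def names_for(label_string):
    # audioset_positive_labels values look like "/m/0140xf,/m/02cjck,/m/04rlf"
    return ", ".join(mid_to_name.get(mid, mid) for mid in label_string.split(","))


print(names_for("/m/0140xf,/m/02cjck,/m/04rlf"))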

I had to do a little bit of extra cleanup to get everything working how I wanted. The final result was two tables, with the following schema:

CREATE TABLE [musiccaps] (
   [ytid] TEXT PRIMARY KEY,
   [url] TEXT,
   [caption] TEXT,
   [aspect_list] TEXT,
   [audioset_names] TEXT,
   [author_id] TEXT,
   [start_s] TEXT,
   [end_s] TEXT,
   [is_balanced_subset] INTEGER,
   [is_audioset_eval] INTEGER,
   [audioset_ids] TEXT
);
CREATE TABLE [videos] (
   [id] TEXT PRIMARY KEY,
   [publishedAt] TEXT,
   [channelId] TEXT,
   [title] TEXT,
   [description] TEXT,
   [thumbnails] TEXT,
   [channelTitle] TEXT,
   [tags] TEXT,
   [categoryId] TEXT,
   [liveBroadcastContent] TEXT,
   [localized] TEXT,
   [viewCount] INTEGER,
   [likeCount] INTEGER,
   [favoriteCount] INTEGER,
   [commentCount] INTEGER,
   [defaultAudioLanguage] TEXT,
   [defaultLanguage] TEXT
);

I configured SQLite full-text search against the musiccaps.caption column.
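With sqlite-utils that configuration is a one-liner; a sketch, assuming the database file is named musiccaps.db:

import sqlite_utils

db = sqlite_utils.Database("musiccaps.db")
# Create a full-text search index over the caption column, kept in sync by triggers
db["musiccaps"].enable_fts(["caption"], create_triggers=True)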

The last step was to create a SQL view that combined the key data from the two tables. After some more iteration I came up with this one:

CREATE VIEW musiccaps_details AS
select
  musiccaps.url as video,
  json_object(
    'label', coalesce(videos.title, 'Missing from YouTube'),
    'href', musiccaps.url
  ) as youtube_link,
  musiccaps.caption as musiccaps_caption,
  videos.publishedAt as youtube_published,
  videos.channelTitle as youtube_channel,
  videos.description as youtube_description,
  musiccaps.audioset_names as musiccaps_names,
  musiccaps.aspect_list as musiccaps_aspects,
  musiccaps.author_id as musiccaps_author,
  videos.id as youtube_id,
  musiccaps.rowid as musiccaps_rowid
from
  musiccaps
  left join videos on musiccaps.ytid = videos.id;

I built a custom template for the instance homepage to add a search box, then shipped the whole thing to Vercel using the datasette-publish-vercel plugin.

Let me know what you find

Digging around in this data is a lot of fun. I'd love to hear what you find. Hit me up on Mastodon if you find anything interesting!


Moxy Tongue

AI in Civil Society

A Constitution with computational integrity can't originate Rights in a database.. this data domain of AI is not the domain of Rights for people in a "civil society". 
Until people own root authority, and systems are all derived accurately, there can be no healthy implementation of AI, or as we are watching, a "civil society" of ID participants. People must own root, AI is an adversarial intelligence deriving all capacity from people, in order to detect/interpret/solve problems. 
Structure yields results.. artificial intelligence is in the fraud stage.. a thing that deceives for gain.
Fraud defined: a thing intended to deceive others, typically by unjustifiably claiming or being credited with accomplishments or qualities of people.


In order to protect the nascent value of AI research, and help humanity manifest this immensely useful technical outcome, the structure of human participation in civil society MUST be conceived anew. 

What is "Sovereign Source Authority"?


Jon Udell

Images considered harmful (sometimes)


The Mastodon dashboards I’ve been developing and describing in this series are backed by a Steampipe plugin that translates SQL queries to Mastodon API calls. Like all Steampipe plugins you can use this one to run those queries in all sorts of ways: from psql or another Postgres CLI (perhaps via cron, perhaps in a CI/CD pipeline); from Metabase or Grafana or any Postgres-compatible BI tool; from Python or JavaScript or any programming language. The Steampipe core is a versatile software component that you can plug into just about any environment.
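For instance, here is a minimal Python sketch that queries the plugin through Steampipe's Postgres endpoint; it assumes a locally running Steampipe service with its default connection settings and uses the password printed when the service starts (placeholders below).

import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=9193,          # Steampipe's default service port
    dbname="steampipe",
    user="steampipe",
    password="your-service-password",  # placeholder
)

with conn, conn.cursor() as cur:
    # mastodon_list is one of the tables provided by the Mastodon plugin
    cur.execute("select id, title from mastodon_list order by title")
    for list_id, title in cur.fetchall():
        print(list_id, title)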

There’s also, of course, Steampipe’s dashboards as code approach which powers the alternate Mastodon UX that I’ve been exploring in this series. You can think of this dashboarding tool as a primitive web browser with a freakish talent for querying Postgres and weaving SQL results into widgets such as infocards, input controls, charts, tables, and relationship graphs. You compose widgets using HCL (Hashicorp Configuration Language) instead of HTML, arrange them using basic layout syntax, and view them by connecting your browser to the local Steampipe dashboard server or to cloud.steampipe.io.

The alternate Mastodon UX built this way was aptly described (thanks again Greg Wilson!) as A Bloomberg terminal for Mastodon. As a reminder, the stock Mastodon web client looks like this.

And the pages served by the Mastodon dashboards look like this.

I can scan the latter view far more effectively.

Now, I’ve been trying out a bunch of alternate Mastodon clients lately. I love Ivory on my phone, and Elk in the browser, and I use them when I want an experience that feels social-media-ish. But I don’t use Ivory or Elk or Toot! or the stock Mastodon web clients (browser, phone) when I want an experience that feels RSS-reader-ish.

For some information landscapes I want to unfold a big map, spread it out, and scan for points of interest. Social media is that kind of landscape, RSS readers were the first way I surveyed it effectively, and these Mastodon dashboards are becoming my new way.

When I compare those two screenshots though, it’s not just the density of the latter that I notice, but also the absence of pictures. At first I fretted about that. Could the dashboard pages render full-strength HTML? Perhaps they will, and there are a few different ways that could happen, but meanwhile I’ve come to embrace the text-only constraint. When I’ve got my map unfolded, and I’m scanning my home timeline or my lists for things to click into, images can be a distraction. I wouldn’t have chosen to omit them, but I find that their absence enables me to focus very effectively on who is speaking, and what they are saying or boosting.

There is also, of course, the graphical view afforded by relationship graphs. These feel very map-like in the way they reveal clusters of people interacting with one another. I’m finding them more useful than anticipated.

But the text that I read in these dashboards is image-free for now. And I think it’s having a calming effect. There are plenty of times when I want images, don’t get me wrong, and that’s partly why I use a mix of conventional Mastodon clients. But there are times when I want to dial down the clutter, just like there are times when I switch my phone to monochrome. Two things can be true: The pictures we share with one another are a source of joy, and they are sensory overload. I’ll make them optional here when I can, and would like to have the option to use all social media interfaces in text mode.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/

Thursday, 26. January 2023

Simon Willison

Guppe Groups


Guppe Groups

This is a really neat mechanism for helping build topic-oriented communities on Mastodon: follow @any-group-name@a.gup.pe to join (or create) a group, then that account will re-broadcast any messages from people in that group who mention the group in their message.

I found it via the histodons group. I was pondering how something like this might work just recently, so it's great to see someone has built it already.


Quoting Ben Johnson


[On SQLite for production concurrent writes] In general, WAL mode “just works” as Simon said. You just need to make sure you don’t have long running write transactions, although those are somewhat problematic in any database system. Don’t do stuff like starting a write txn and then calling a remote API and then committing. That’ll kill your write throughput.

Ben Johnson

Wednesday, 25. January 2023

Simon Willison

Python Sandbox in Web Assembly


Python Sandbox in Web Assembly

Jim Kring responded to my questions on Mastodon about running Python in a WASM sandbox by building this repo, which demonstrates using wasmer-python to run a build of Python 3.6 compiled to WebAssembly, complete with protected access to a sandbox directory.

Via @jimkring

Tuesday, 24. January 2023

Simon Willison

Jortage Communal Cloud


Jortage Communal Cloud

An interesting pattern that's emerging in the Mastodon / Fediverse community: Jortage is "a communal project providing object storage and hosting". Each Mastodon server needs to host copies of files - not just for their users, but files that have been imported into the instance because they were posted by other people followed by that instance's users. Jortage lets multiple instances share the same objects, reducing costs and making things more efficient. I like the idea that multiple projects like this can co-exist, improving the efficiency of the overall network without introducing single centralized services.


Quoting Kellan Elliott-McCrea


Large teams spend more time dealing with coordination and are more likely to reach for architecture and abstractions that they hope will reduce coordination costs, aka if I architect this well enough I don’t have to speak to my colleagues. Microservices, event buses, and schema free databases are all examples of attempts to architect our way around coordination. A decade in we’ve learned that these patterns raise the cost of reasoning about a system, during onboarding, during design, and during incidents and outages.

Kellan Elliott-McCrea


reb00ted

Activity Streams graphical model


All you need is a gazillionaire doing strange things to some internet platform, and all of a sudden decentralized social media soars in adoption. So lots of people are suddenly seriously looking at how to contribute, myself included.

Core to this is the ActivityPub standard, and real-world implementations that mix it with additional independently defined protocols, such as what Mastodon does.

None of them are particularly easy to understand. So I did a bit of drawing just to make it clearer (for myself) what kind of data can be shipped around in the Fediverse. To be clear, this is only a small part of the overall stack, but an important one.

Here are some diagrams. They are essentially inheritance diagrams that show what kinds of activities there are, and actors, etc. Posted here in case they are useful for others, too.

And here’s how to interpret my homegrown graphical notation. (I made it up for my dissertation eons ago, and used it ever since. It has certain advantages over, say, UML or traditional ERA diagram styles. IMHO :-))

Monday, 23. January 2023

Simon Willison

Quoting Riley Goodside


I think prompt engineering can be divided into “context engineering”, selecting and preparing relevant context for a task, and “prompt programming”, writing clear instructions. For an LLM search application like Perplexity, both matter a lot, but only the final, presentation-oriented stage of the latter is vulnerable to being echoed.

Riley Goodside


Hyperonomy Digital Identity Lab

Mapping the Web 7.0 DIDComm/HTTP Architecture Reference Model to the TOIP Trust Spanning Layer Framework


Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Simon Willison

Quoting Murray Shanahan


It is very important to bear in mind that this is what large language models really do. Suppose we give an LLM the prompt “The first person to walk on the Moon was ”, and suppose it responds with “Neil Armstrong”. What are we really asking here? In an important sense, we are not really asking who was the first person to walk on the Moon. What we are really asking the model is the following question: Given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence “The first person to walk on the Moon was ”? A good reply to this question is “Neil Armstrong”.

Murray Shanahan


@_Nat Zone

“JIS X 9252:2023 Information technology – Online privacy notices and consent” has been published


On Friday, January 20, the JIS edition of ISO/IEC 29184 “Online privacy notices and consent”, namely “JIS X 9252:2023 Information technology – Online privacy notices and consent”, was published. My sincere thanks go to JIPDEC, which served as the secretariat for the JIS drafting committee; to the experts who took part in the deliberations; to the responsible officials at METI and JSA; to the members of the Information Technology Technical Committee of the Japanese Industrial Standards Committee's Second Standards Division, who reviewed the submitted draft; and to everyone else who contributed to this work.

JIS X 9252 sets out, as a standard, how to write notices about the handling of personal information – commonly called privacy policies in Japan – and how to obtain consent when consent is used as the basis for processing. It originated as a METI guideline, was brought to ISO and turned into an international standard through international review involving the EDPB and others, and has now been brought back to Japan as a JIS standard. Since a notice is required in every case, it is no exaggeration to say that this standard concerns every business. I hope you will take a look at it.

The publication announcement and the official gazette notice can be found at the following links.

■ Japanese Standards Association (JSA)
https://webdesk.jsa.or.jp/books/W11M0090/index/?bunsyo_id=JIS+X+9252%3A2023
■ Official Gazette (Kanpō)
https://kanpou.npb.go.jp/20230120/20230120g00013/20230120g000130041f.html


Damien Bod

Basic Authentication in ASP.NET Core


This article shows how basic authentication could be implemented in an ASP.NET Core application. This is not the recommended way to implement security for user flows as the password is always sent with each request but the flow is sometimes required to implement a standard or you sometimes need to support one side of an authentication flow which requires this.

Code: https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

Client implementation

A client implementation needs to send an authorization header containing a clientId and a clientSecret separated by the ':' character and encoded using base64. The secret is hashed using SHA256 so that the original secret is not sent in the request, but this does not really improve the security much; it just prevents the original application secret from being shared. The request is sent using HTTPS and so the headers are encrypted.

private static string CreateBasicAuthenticationHeader(
    GetDelegatedApiTokenOAuthTokenExchangeModel reqData)
{
    var builder = new StringBuilder()
        .Append(reqData.ClientId)
        .Append(':')
        .Append(OauthTokenExchangeExtentions
            .ToSha256(reqData.ClientSecret));

    var credentials = Convert.ToBase64String(
        Encoding.ASCII.GetBytes(builder.ToString()));

    return credentials;
}

The SHA256 hash is implemented using a simple method which returns the hash as a base64 string.

public static string ToSha256(string text)
{
    using var sha256 = SHA256.Create();
    var bytes = Encoding.UTF8.GetBytes(text);
    var hash = sha256.ComputeHash(bytes);
    return Convert.ToBase64String(hash);
}

The credentials are sent using the Authorization header.

string credentials = CreateBasicAuthenticationHeader(reqData);

httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", credentials);
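For interop testing from outside .NET, the same header can be reproduced in a few lines of Python. This is a sketch following the clientId:sha256(secret) convention described above; the example values are placeholders.

import base64
import hashlib


def to_sha256_b64(text):
    # Mirrors the C# ToSha256 helper: SHA256 over UTF-8 bytes, base64 encoded
    return base64.b64encode(hashlib.sha256(text.encode("utf-8")).digest()).decode("ascii")


def basic_auth_header(client_id, client_secret):
    credentials = "{}:{}".format(client_id, to_sha256_b64(client_secret))
    return "Basic " + base64.b64encode(credentials.encode("ascii")).decode("ascii")


print(basic_auth_header("my-client", "my-secret"))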

Server implementation

The server part of the flow is implemented using the idunno.Authentication.Basic Nuget package. The credentials are validated using the same hash and checked against the expected values from the configuration.

services.AddAuthentication(BasicAuthenticationDefaults.AuthenticationScheme)
    .AddBasic(options =>
    {
        options.Realm = "oauthtokenexchange";
        options.Events = new BasicAuthenticationEvents
        {
            OnValidateCredentials = context =>
            {
                var config = context.HttpContext.RequestServices
                    .GetService<IOptions<OauthTokenExchangeConfiguration>>();

                if (ValidateBasicAuthentication.IsValid(
                    context.Username, context.Password, config.Value))
                {
                    var claims = new[]
                    {
                        new Claim(
                            ClaimTypes.NameIdentifier,
                            context.Username,
                            ClaimValueTypes.String,
                            context.Options.ClaimsIssuer),
                        new Claim(
                            ClaimTypes.Name,
                            context.Username,
                            ClaimValueTypes.String,
                            context.Options.ClaimsIssuer)
                    };

                    context.Principal = new ClaimsPrincipal(
                        new ClaimsIdentity(claims, context.Scheme.Name));
                    context.Success();
                }

                return Task.CompletedTask;
            }
        };
    });

The ValidateBasicAuthentication class checks the used credentials.

public static class ValidateBasicAuthentication
{
    public static bool IsValid(
        string clientId,
        string clientSecret,
        OauthTokenExchangeConfiguration oauthTokenExchangeConfiguration)
    {
        if (!clientId.Equals(oauthTokenExchangeConfiguration.ClientId))
        {
            return false;
        }

        if (!clientSecret.Equals(
            OauthTokenExchangeExtentions.ToSha256(
                oauthTokenExchangeConfiguration.ClientSecret)))
        {
            return false;
        }

        return true;
    }
}

The basic authentication can be validated in the authorize attribute using the correct scheme.

[Authorize(AuthenticationSchemes = BasicAuthenticationDefaults.AuthenticationScheme)]
[HttpPost("~/connect/oauthTokenExchangetoken"), Produces("application/json")]
public async Task<IActionResult> Exchange(
    [FromForm] OauthTokenExchangePayload oauthTokenExchangePayload)
{
    // business
}

This works well, but sending the name and password on every request is not always the best way of implementing authentication. It should only be used when required to implement a standard. There are better and more secure ways of securing APIs.

Links

https://github.com/blowdart/idunno.Authentication/tree/dev/src/idunno.Authentication.Basic


@_Nat Zone

Thursday, January 26, from 22:00: a leisurely live-streamed read-through of the draft NIST SP800-63-4


The pre-draft of NIST SP800-63-4, which had been delayed by nearly a year, was published in December and is open for public comment until 3/24. Its scope is considerably broader than that of version 3, SP800-63-3. In this respect there are still a few gaps; when I asked one of the authors, "Isn't ○○ missing entirely?", the answer was that they are aware of it and plan to write it, and that all comments are welcome.

SP800-63 is a standard for the US federal government, but it has a major influence on related standards in other countries, the Japanese government's guidelines being one of them. As it happens, the Japanese government also appears to be considering revisions in this area, so it is worthwhile to study SP800-63 properly.

So I plan to read through SP800-63-4 over roughly two sessions. The reason for splitting it in two is that the relevant documents are:

SP800-63-4 https://nvlpubs.nist.gov/nistpubs/Spe…
SP800-63A-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63B-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63C-4 https://doi.org/10.6028/NIST.SP.800-6…

– four volumes in all, which there is no way to get through in a single session.

The OpenID Foundation Japan also seems to be hard at work on Japanese translations of these, so some of them may be readable in Japanese by the time of the stream. If they are usable, I may use them as companion reading.

Why do this now, when the public comment period runs until 3/24 and there is plenty of time? Because a related meeting at a certain ministry of the Japanese government1 is taking place at the end of this month.

You can take part via chat or via video2. Video participation uses mmhmm; if you would like to join via mmhmm, please contact me and I will send you a link. For chat, you should be able to participate once you have been subscribed to this YouTube channel for at least a minute. However, someone who subscribed just before the previous session was unable to post in chat, so I recommend subscribing in advance.


Simon Willison

Wildebeest


Wildebeest

New project from Cloudflare, first quietly unveiled three weeks ago: "Wildebeest is an ActivityPub and Mastodon-compatible server". It's built using a flurry of Cloudflare-specific technology, including Workers, Pages and their SQLite-based D1 database.

Via @simon

Sunday, 22. January 2023

Simon Willison

Quoting Perplexity AI, via a prompt injection leak attack


Generate a comprehensive and informative answer (but no more than 80 words) for a given question solely based on the provided web Search Results (URL and Summary). You must only use information from the provided search results. Use an unbiased and journalistic tone. Use this current date and time: Wednesday, December 07, 2022 22:50:56 UTC. Combine search results together into a coherent answer. Do not repeat text. Cite search results using [${number}] notation. Only cite the most relevant results that answer the question accurately. If different results refer to different entities with the same name, write separate answers for each entity.

Perplexity AI, via a prompt injection leak attack


Jon Udell

Working with Mastodon lists


Since the early days of the blogosphere I have cherished the ability to view the world through the eyes of people more qualified than me to understand and explain what happens in particular domains. Although Twitter lists were a great way to collect people who provide those perspectives, I made little use of them. Prompted by Martin Fowler’s frustration with lists I described my first steps toward reviving the technique in Lists and people on Mastodon.

First I encapsulated the relevant APIs in a pair of tables provided by the Steampipe plugin for Mastodon: mastodon_list and mastodon_list_account. Then I used those tables to enhance the Mastodon dashboard in a couple of ways. The followers and following tabs now show me who is or isn’t on a list.

And I can use the list tab to read recent posts from people on each list.
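
For a sense of what those two tables provide, here is a minimal membership query. This is a sketch, not code taken from the post; the table and column names are the ones used in the export query shown later on:

-- sketch: who is on which list, using the Steampipe Mastodon plugin tables
select
  l.title as list,
  a.username,
  a.server
from
  mastodon_list l
  join mastodon_list_account a on l.id = a.list_id
order by
  list, username;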

With these basic building blocks in place I want to be more intentional about curating these lists. To that end I thought I’d share the lists I’ve built so far, and invite suggestions. You can export your lists from the Mastodon web client using Preferences -> Data export -> Lists. The resulting CSV file has two columns: the name of a list and the Mastodon handle for a person.

list     person
Library  liza@post.lurk.org
Library  alexwade@fosstodon.org
Library  jdiffin@glammr.us

Here’s the export I really wanted.

list     person    server         url                                               note
Library  liza      post.lurk.org  https://mastodon.social/@liza@post.lurk.org       Technologist in Residence at the Harvard Library Innovation Lab. Interactive fiction, generative art, Democratic political tech. Amateur scholar of fringe utopian sci-fi. I will pet your dog.
Library  alexwade  fosstodon.org  https://mastodon.social/@alexwade@fosstodon.org   Librarian, open science advocate, geek. VP Data Products, @digitalscience (Seattle, WA) Formerly: AllenAI (AI2), CZI, Amazon, Microsoft Research, UW
Library  jdiffin   glammr.us      https://mastodon.social/@jdiffin@glammr.us        Head of Library Technology Services Section, NLM. Opinions are my own.

It’s easy to create that list using the Steampipe plugin for Mastodon.

steampipe query "select title as list, username, server, instance_qualified_account_url, note from mastodon_list l join mastodon_list_account a on l.id = a.list_id order by list, username" --output csv > mastodon_lists.csv

(I'm using instance-qualified URLs relative to my home server; if your home is elsewhere than mastodon.social, you'll want to adjust the links accordingly.)
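
For example, if your home server were a hypothetical example.social, the instance-qualified links could be rebuilt along these lines. This is a sketch following the URL pattern visible in the table above, not a query from the post:

-- sketch: rebuild instance-qualified account URLs for a different home server
select
  'https://example.social/@' || a.username || '@' || a.server as instance_qualified_account_url
from
  mastodon_list_account a;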

I've uploaded the output of that query to a Google sheet. The most recent addition to my climate list is Peter Gleick, who was quoted yesterday in Farhad Manjoo's What Will 'Weather Whiplash' Mean for California? I'd been looking for an expert perspective on California water policy, so I checked to see if Peter is on Mastodon, found that he is, and added him to my climate list. Here's an example of the kind of perspective I'm looking for.

California reservoirs have captured enough water from the recent storms to supply all urban water use in the state for a year. Not counting the massive snowpack. Those claims that “all the water is being wasted flowing to the ocean” is nonsense. — Mastodon link

Of course it’s a complex issue, and I don’t think I’m wrong to be very interested in strategies for capturing more rainfall and using it to recharge aquifers. But this was a useful reality check.

Who are the experts you follow in various domains? Can I cherrypick from your lists? And can we imagine a way of remixing and sharing these lists? I’m looking at you, Lucas Gonze, with fond memories of how Webjay did that for MP3 playlists. Can we do that for Mastodon lists too?


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/

Saturday, 21. January 2023

Simon Willison

Inside the Globus INK: a mechanical navigation computer for Soviet spaceflight

Absolutely beautiful piece of Soviet spacecraft engineering, explained in detail by Ken Shirriff.

Via @kenshirriff

Friday, 20. January 2023

Phil Windleys Technometria

SSI Doesn't Mean Accounts Are Going Away

I saw a tweet that said (paraphrasing): "In the future people won't have accounts. The person (and their wallet) will be the account." While I appreciate the sentiment, I think reality is much more nuanced than that because identity management is about relationships, not identities (whatever those are).

Supporting a relationship requires that we recognize, remember, and react to another party (person, business, or thing). In self-sovereign identity (SSI), the tools that support that are wallets and agents. For people, these will be personal. For a business or other organization they'll be enterprise wallets and agents. The primary difference between these is that enterprise wallets and agents will be integrated with the other systems that the business uses to support the relationships they have at scale.

Remembering and reacting to another entity requires that you keep information about them for the length of the relationship. Some relationships, like the one I form with the convenience store clerk when I buy a candy bar, are ephemeral, lasting only for the length of the transaction. I don't remember much while it's happening and forget it as soon as it's done. Others are long-lasting, and I remember a great deal in order for the relationship to have utility.

So, let's say that we're living in the future where SSI is ubiquitous and I have a DID-based relationship with Netflix. I have a wallet full of credentials. In order for my relationship to have utility, they will have to remember a lot about me, like what I've watched, what devices I used, and so on. They will likely still need to store a form of payment since it's a subscription. I call that an account. And for the service Netflix provides, it's likely not optional.

Let's consider a different use case: ecommerce. I go to a site, select what I want to buy, supply information about shipping and payment, and submit the order. I can still create a DID-based relationship, but the information needed from me beyond what I want to buy can all come from my credentials. And it's easy enough to provide that I don't mind supplying it every time. The ecommerce site doesn't need to store any of it. They may still offer to let me create an account, but it's optional. No more required than the loyalty program my local supermarket offers. The relationship I create to make the purchase can be ephemeral if that's what I want.

What will definitely go away is the use of accounts for social login. In social login, large identity providers have accounts that are then used by relying parties to authenticate people. Note that authentication is about recognizing. SSI wallets do away with that need by providing the means for different parties to easily create relationships directly and then use verifiable credentials to know things about the other with certainty. Both parties can mutually authenticate the other. But even here, social login is usually a secondary purpose for the account. I have an account with Google. Even if I never use it for logging in anywhere but Google, I'll still have an account for the primary reasons I use Google.

Another thing that goes away is logging in to your account. You'll still be authenticated, but that will recede into the background as the processes we use for recognizing people (FIDO and SSI) become less intrusive. We have a feel for this now with apps on our smartphones. We rarely authenticate because the app does that and then relies on the smartphone to protect the app from use by unauthorized people. FIDO and SSI let us provide similar experiences on the web as well. Because we won't be logging into them, the idea of accounts will fade from people's consciousness even if they still exist.

I don't think accounts are going away anytime soon, simply because they are a necessary part of the relationships I have with many businesses. I want them to remember me and react to me in the context of the interactions we've had in the past. SSI offers new ways of supporting relationships, especially ephemeral ones, which means companies need to store less. But for long-term relationships, your wallet can't be the account. The other party needs its own means of remembering you, and it will do that using tools that look just like an account.

Photo Credit: Creditor's Ledger, Holmes McDougall from Edinburgh City of Print (CC BY 2.0)

Tags: identity accounts ssi fido authentication


@_Nat Zone

The recording of "No ID, No DX," the OpenID Summit Tokyo closing keynote praised as "incredibly moving," has been made public

The recording of the closing keynote of the OpenID Summit Tokyo held three years ago, which was praised at the time as "incredibly moving," has been published.

Sakimura-san's closing keynote is incredibly moving. Love it. #openid_tokyo

— きりえ (@__kyrieleison__) January 24, 2020

The talk itself starts at 06:18. It begins with the theme of the future and hope, explains the essence of the Industrial Revolution and the background to the rise of the British Empire, and draws from them implications for the Fourth Industrial Revolution. It then covers the emergence of the cyber continent, the eighth continent, and its partition (#第八大陸分割) among the great powers (#GAFAM) such as the Apple Papal States, the Kingdom of Facebook, the Republic of Google, and the People's Republic of WeChat; the EU's view on economic growth through #DFFT; the significance of #eID #trustservices #eIDAS; and finally the contrast between the easy, no-fight path of merely adopting Western tools (#西用), which leads to colonization and poverty, and a hopeful future through institutional reform (#変法).

The speech is three years old, but I don't think it has aged at all. If anything, now that #web3 has everyone talking about decentralization, and with DFFT on this year's G7 agenda, it is a speech I would like people to revisit and consider carefully.

Enjoy.

Thursday, 19. January 2023

Jon Udell

Of course the attention economy is threatened by the Fediverse

Megan McArdle says this week, in the Washington Post, that “Twitter might be replaced, but not by Mastodon or other imitators.” I’m not linking to the article, you can easily find it, but that title is all we need for my purpose here, along with this bit of context: she has 93K followers on Twitter.

Nobody wants to walk away from that kind of audience. Well, almost nobody. Sam Harris’ recent Twitter exit is a rare example of someone concluding that a large follower count is a net negative. If I were in his shoes I’m not sure I’d be able to do the same. When my own audience was at its peak — at BYTE during the dawn of the Internet, then at InfoWorld in the early years of the blogosphere — I could press the Publish button on my blog and watch in realtime as the responses rolled in on waves of dopamine. It’s addictive, there’s never enough, you’re always looking for the next hit.

When Twitter started, that momentum carried forward for a while. I never racked up a huge follower count — it maxed out just shy of 6K — but most of those people followed me early on, thanks to the ad-supported publications that had brought me to their attention. My Twitter following reached a plateau years ago. Did I wish for 100K followers? Sure, I'd be lying to pretend otherwise. But gradually I came to see that there was a sweet spot, somewhere between (let's say) 200 and 15,000 followers, where it was possible to enjoy the kinds of pleasant and stimulating interaction that I'd first experienced in web forums and the blogosphere.

Until it wasn’t. Like a frog in slowly boiling water, I failed to notice how the Twitter experience degraded over time. Fewer and fewer of my 6K followers corresponded regularly, and my social graph there became stagnant. For me the Mastodon reboot has been a delightful replay of the early blogosphere: new acquaintances, collegial discussion, positive energy.

If you occupy a privileged position in the attention economy, as Megan McArdle does now, and as I once did in a more limited way, then no, you won’t see Mastodon as a viable replacement for Twitter. If I were still a quasi-famous columnist I probably wouldn’t either. But I’m no longer employed in the attention economy. I just want to hang out online with people whose words and pictures and ideas intrigue and inspire and delight me, and who might feel similarly about my words and pictures and ideas. There are thousands of such people in the world, not millions. We want to congregate in different online spaces for different reasons. Now we can and I couldn’t be happier. When people say it can’t work, consider why, and who benefits from it not working.

Here’s a graph of the Fediverse as it appears from my perspective right now.

It looks and feels healthy and it’s working just great. I don’t want us to replace Twitter, or imitate it. I want The Internet Transition that I hope is underway.

Wednesday, 18. January 2023

Hyperonomy Digital Identity Lab

Web 7.0 DIDComm-ARM vs. TBD Web5 DWP Stack

Here’s a link to the voice-over/narrative for the following slides: https://youtu.be/1XnPWmpkGro?t=630s

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Tuesday, 17. January 2023

Hyperonomy Digital Identity Lab

DIDComm Message Attachment Types

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Here’s a sampling…

NOTE: Verifiable credentials are an obvious DIDComm Message Attachment Type. For this reason, VCs are missing from the following list.


Bill Wendels Real Estate Cafe

MLKing Afterglow: What if we weaponized real estate wealth & buyer agent rebates for Valentine’s Day?

“Life’s most persistent and urgent question is, what are you doing for others?” Starbucks posed Martin Luther King’s challenging question in a full page NYTimes…

The post MLKing Afterglow: What if we weaponized real estate wealth & buyer agent rebates for Valentine’s Day? first appeared on Real Estate Cafe.

Monday, 16. January 2023

Jon Udell

Mastodon relationship graphs

The new release of Steampipe is all about relationship graphs. Our blog post shows how these graphs provide contextual awareness for devops and security pros who can now see all the resources related to an EC2 instance, or determine at a glance whether the permissions related to an IAM role are properly scoped. As always, developers can explore and remix the code that builds these graphs, and adapt the idioms for their own purposes in any data domain.

These relationship graphs are driven by SQL queries that define nodes and edges. Such queries can use any column of any table provided by any Steampipe plugin to form nodes, and then edges between nodes. If you want to see connections among the people and objects represented by diverse APIs, you can now use SQL idioms to graph them. The only limit is your imagination.

Naturally I imagined graphing Mastodon relationships. So far I’ve built two graphs that visualize my home timeline. Here’s the first one.

Here we’re looking at the most recent 50 boosts (the Mastodon version of retweet) in my homeline. This is the query to find them.

select * from mastodon_toot where timeline = 'home' and reblog_server is not null limit 50

If we focus on Brian Marick we can see that:

Brian belongs to mstdn.social
Brian boosted a post by Tim Bray
Tim belongs to hachyderm.io

So this graph shows people on a selected server boosting people on other servers. In this case mstdn.social is the selected server, but we can refocus the graph on any other server that's sending boosts.

The second graph zooms out to show the web of boost relationships among servers. If anyone on infosec.exchange boosts anyone on mastodon.world, there’s an edge connecting the two nodes. Although it’s not happening anywhere in this graph, the arrow can point both ways and would if anyone on mastodon.world were also boosting anyone on infosec.exchange.
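
The post doesn't show the query behind this second graph, but the server-to-server edges could be derived from the same window of boosts along these lines. This is a sketch reusing the mastodon_boosts() function introduced below, not the actual dashboard code:

-- sketch: one edge per (boosting server, boosted server) pair
select distinct
  server as from_id,
  reblog_server as to_id,
  'boosts' as title
from
  mastodon_boosts();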

Let’s build up the first graph step by step.

Step 1: Identify the selected server

Here’s the definition of the node that represents the selected server.

node {
  category = category.selected_server
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      server as id,
      server as title,
      jsonb_build_object(
        'server', server
      ) as properties
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

Per the documentation, a node's query must at least select a column aliased as id. Here it's the server column in a row returned by the above query. I've packaged that query into a SQL function, mastodon_boosts(), to hide the details (timeline = 'home' and reblog_server is not null limit 50) and make it easier to focus on what's special about each node. In this case the special quality is that the server column, which gives the node its identity, matches the selected server.
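
The definition of mastodon_boosts() isn't shown in the post; assuming it is an ordinary Postgres set-returning SQL function over the plugin's mastodon_toot table, it could look roughly like this:

-- sketch: wrap the repeated filter in a set-returning SQL function
create or replace function mastodon_boosts()
returns setof mastodon_toot as $$
  select *
  from mastodon_toot
  where timeline = 'home'
    and reblog_server is not null
  limit 50
$$ language sql;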

If the graph block includes only this node, and mastdn.social is the selected server, here is the rendering. Not much to see here yet!

The node defines a bag of properties that can be any of the columns returned by the underlying query; these appear when you hover the node. The node also refers to a category that governs the node’s icon, color, and link. Here’s the category for the selected server.

category "selected_server" { color = "darkgreen" icon = "server" href = "https://{{.properties.'server'}}" } Step 2: Identify boosted servers

Now we’ll add boosted servers. This node uses the same set of records: the 50 most recent boosts in my feed. Again it finds just those whose server column matches the selected server. But the id is now the reblog_server which is the target, instead of the origin, of boosts from the selected server.

node {
  category = category.boosted_server
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      reblog_server as id,
      reblog_server as title
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

Here’s the graph with both selected_server and boosted_server nodes. We’ve used another category to differentiate the boosted nodes.

There’s only one selected server but it can send boosts to more than one boosted server. The default rendering folds them into one node but you can click to unfold and see all of them.

Step 3: Identify people who boost others

Where are the people? Let’s add them next, starting with the people who are sending boosts.

node {
  category = category.person
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      username as id,
      display_name as title,
      jsonb_build_object(
        'instance_qualified_account_url', instance_qualified_account_url
      ) as properties
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

The username column gives the node its identity. Note also the property instance_qualified_account_url. That’s the synthetic column we added to the Mastodon plugin last time to ensure that links to people and toots will work properly in the Mastodon client. Because it’s included in a property here, and because category.person refers to that property, links representing people in the graph will resolve properly.

Step 4: Identify people who are boosted

This node takes its identity from the reblog_username column, and uses the synthetic column instance_qualified_reblog_url to provide the link.

node {
  category = category.boosted_person
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      reblog_username as id,
      reblog_username as title,
      jsonb_build_object(
        'instance_qualified_reblog_url', instance_qualified_reblog_url
      ) as properties
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

Step 5: Connect boosters on the selected server to that server

So far we’ve seen only nodes, whose queries minimally return the id property. An edge connects nodes by way of a query that minimally returns columns aliased to from_id and to_id.

edge {
  sql = <<EOQ
    select
      username as from_id,
      server as to_id,
      'belongs to' as title
    from
      mastodon_boosts()
EOQ
}

You’ll also want to provide a title to label the edge. Here this edge occurs twice to represent “John Mashey belongs to mstdn.social” and “Brian Marick belongs to mstdn.social.”

Step 6: Connect people on boosted servers to their servers

This edge works the same way, but captures the relationship between boosted people and their servers.

edge {
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      reblog_username as from_id,
      reblog_server as to_id,
      'belongs to' as title
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

Step 7: Connect boosters to the people they boost

Finally we add an edge to connect boosters to the people they boost.

edge {
  category = category.boost
  args = [ self.input.server.value ]
  sql = <<EOQ
    select
      username as from_id,
      reblog_username as to_id,
      'boosts' as title,
      jsonb_build_object(
        'reblog_username', reblog_username,
        'reblog_server', reblog_server,
        'content', reblog ->> 'content'
      ) as properties
    from
      mastodon_boosts()
    where
      server = $1
EOQ
}

And now we’ve completed the first graph shown above.

Graphing GitHub relationships

You can use this grammar of nodes and edges to describe relationships in any domain. Here’s a graph that looks across all the Steampipe-related repos and shows recently-updated PRs from external contributors.

And here’s one that uses any Steampipe plugin to show recently-updated pull requests for a selected repo.

These two views share a common SQL query and serve complementary purposes. The table is handy for sorting by date or author, the graph highlights one-to-many relationships.

Lifting the burden of context assembly

In What TimeDance got right I mourned the passing of a tool for scheduling meetings that had excelled at bringing together the messages and documents related to a meeting. I called this “context assembly” — a term I’d picked up from Jack Ozzie, cofounder of Groove, another collaboration tool whose passing I mourn. Context assembly is hard work. Too often the burden falls on people who only need to use that context and would rather not spend time and effort creating it.

We’ve seen how SQL can unify access to APIs. Now it can also help us see relationships among the data we extract from those APIs.



Damien Bod

Implementing secure Microsoft Graph application clients in ASP.NET Core

The article looks at the different ways a Microsoft Graph application client can be implemented and secured in an ASP.NET Core application or a .NET application. This type of client is intended for applications or application logic where no user is involved.

Code: https://github.com/damienbod/MicrosoftGraphAppToAppSecurity

Access to Microsoft Graph can be initialized for app-to-app (application permissions) security in three different ways. The flows can only be used in a trusted host. The different implementation types are as follows:

Using Managed Identities
Using Azure SDK and Graph SDK directly with client credentials
Using Microsoft.Identity.Client and MSAL to acquire an access token which can be used directly against Microsoft Graph or using GraphServiceClient with the DelegateAuthenticationProvider class

Using Managed Identities

Using managed identities for the Azure deployments is the most secure of the three ways to implement this client. This is because no secrets or certificates are shared, so they cannot be abused, and there is no need for secret rotation.

Setup

We use a web application deployed to an Azure App Service to set up the security. A managed identity is created for this Azure resource. If the Azure App Service is deleted, so are the managed identity and the assigned Graph roles. Only this Azure resource can use the managed identity.

Once the Azure resource is created, the Graph App roles can be assigned to the managed identity.

Powershell scripting

I created the PowerShell script using a blog from Microsoft. This PowerShell script finds the managed identity and assigns the User.Read.All application permission to it.

$TenantID = "<your-tenant-id>"
$DisplayNameServicePrincpal = "<your-azure-app-registration-or-other-azure-resource>"
$GraphAppId = "00000003-0000-0000-c000-000000000000"
$PermissionName = "User.Read.All"

Connect-AzureAD -TenantId $TenantID

$sp = (Get-AzureADServicePrincipal -Filter "displayName eq '$DisplayNameServicePrincpal'")
Write-Host $sp

$GraphServicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$GraphAppId'"

$AppRole = $GraphServicePrincipal.AppRoles |
    Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}

New-AzureAdServiceAppRoleAssignment -ObjectId $sp.ObjectId -PrincipalId $sp.ObjectId -ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id

This can be checked in the Azure portal using the Enterprise applications blade and filtering for managed identities.

The permissions list contains the Graph User.Read.All application permission.

Implementing the client

The client is implemented using Azure.Identity and the Graph SDK. We have two setups: one for production and all other Azure deployments, and one for development. The managed identity is used everywhere except the dev environment; local dev uses an Azure App registration with the client credentials flow. The GetGraphClientWithManagedIdentityOrDevClient method returns the Graph SDK GraphServiceClient set up for the correct deployment, secured with the correct ChainedTokenCredential. It is important that in production only the managed identity of the exact resource can be used. No secret or certificate is required for this solution; the managed identity and Azure take care of it. The GraphServiceClient serves the whole application and handles the HttpClient creation, so the service is registered as a singleton.

using Azure.Identity;
using Microsoft.Graph;

namespace GraphManagedIdentity;

public class GraphApplicationClientService
{
    private readonly IConfiguration _configuration;
    private readonly IHostEnvironment _environment;
    private GraphServiceClient? _graphServiceClient;

    public GraphApplicationClientService(IConfiguration configuration, IHostEnvironment environment)
    {
        _configuration = configuration;
        _environment = environment;
    }

    /// <summary>
    /// gets a singleton instance of the GraphServiceClient
    /// </summary>
    /// <returns></returns>
    public GraphServiceClient GetGraphClientWithManagedIdentityOrDevClient()
    {
        if (_graphServiceClient != null)
            return _graphServiceClient;

        string[] scopes = new[] { "https://graph.microsoft.com/.default" };

        var chainedTokenCredential = GetChainedTokenCredentials();
        _graphServiceClient = new GraphServiceClient(chainedTokenCredential, scopes);

        return _graphServiceClient;
    }

    private ChainedTokenCredential GetChainedTokenCredentials()
    {
        if (!_environment.IsDevelopment())
        {
            return new ChainedTokenCredential(new ManagedIdentityCredential());
        }
        else // dev env
        {
            var tenantId = _configuration["AzureAd:TenantId"];
            var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
            var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

            var options = new TokenCredentialOptions
            {
                AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
            };

            // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
            var devClientSecretCredential = new ClientSecretCredential(
                tenantId, clientId, clientSecret, options);

            var chainedTokenCredential = new ChainedTokenCredential(devClientSecretCredential);
            return chainedTokenCredential;
        }
    }
}

The service is added to the IoC and can be used anywhere in the application. Once deployed, the managed identity is used, otherwise the dev setup runs.

builder.Services.AddSingleton<GraphApplicationClientService>();
builder.Services.AddScoped<AadGraphSdkApplicationClient>();

I then use it in a service:

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.Graph;
using System.Security.Cryptography.X509Certificates;

namespace GraphClientCrendentials;

public class AadGraphSdkApplicationClient
{
    private readonly IConfiguration _configuration;
    private readonly GraphApplicationClientService _graphService;

    public AadGraphSdkApplicationClient(IConfiguration configuration, GraphApplicationClientService graphService)
    {
        _configuration = configuration;
        _graphService = graphService;
    }

    public async Task<int> GetUsersAsync()
    {
        var graphServiceClient = _graphService.GetGraphClientWithManagedIdentityOrDevClient();

        IGraphServiceUsersCollectionPage users = await graphServiceClient.Users
            .Request()
            .GetAsync();

        return users.Count;
    }
}

Dev setup

In development, an Azure App registration is used to implement the OAuth client credentials flow with the Graph SDK client. The Graph application permission is added to the single tenant Azure App registration. An enterprise application is created from this.

The ChainedTokenCredential uses the app.settings and the user secrets to configure the client. The client uses the OAuth client credentials flow to acquire an access token. I normally use secrets for development for simplicity but if more security is required, a certificate can be used and the secret/certificate can be used directly from an Azure KeyVault.

"AzureAd": { "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "3606b25d-f670-4bab-ab70-437460143d89" //"ClientSecret": "add secret to the user secrets" //"CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]", //"Certificate": { // "SourceType": "KeyVault", // "KeyVaultUrl": "<VaultUri>", // "KeyVaultCertificateName": "<CertificateName>" //} }, Using Azure SDK and Graph SDK directly

A Microsoft Graph client can be set up to use the client credentials flow to initialize the Graph SDK GraphServiceClient. This is a good way of implementing the OAuth client credentials flow if it is used outside the Azure tenant. It is recommended to use a certificate, and this is normally stored in an Azure Key Vault. This uses the OAuth client credentials flow with client assertions to acquire a new access token.

The flow can be set up to use a secret:

private GraphServiceClient GetGraphClientWithClientSecretCredential()
{
    string[] scopes = new[] { "https://graph.microsoft.com/.default" };
    var tenantId = _configuration["AzureAd:TenantId"];

    // Values from app registration
    var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
    var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    return new GraphServiceClient(clientSecretCredential, scopes);
}

Or set up to use a certificate:

private async Task<GraphServiceClient> GetGraphClientWithClientCertificateCredentialAsync()
{
    string[] scopes = new[] { "https://graph.microsoft.com/.default" };
    var tenantId = _configuration["AzureAd:TenantId"];

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // Values from app registration
    var clientId = _configuration.GetValue<string>("AzureAd:ClientId");

    var certificate = await GetCertificateAsync();

    var clientCertificateCredential = new ClientCertificateCredential(
        tenantId, clientId, certificate, options);

    // var clientCertificatePath = _configuration.GetValue<string>("AzureAd:CertificateName");
    // https://learn.microsoft.com/en-us/dotnet/api/azure.identity.clientcertificatecredential?view=azure-dotnet
    // var clientCertificateCredential = new ClientCertificateCredential(
    //     tenantId, clientId, clientCertificatePath, options);

    return new GraphServiceClient(clientCertificateCredential, scopes);
}

private async Task<X509Certificate2> GetCertificateAsync()
{
    var identifier = _configuration["AzureAd:ClientCertificates:0:KeyVaultCertificateName"];
    if (identifier == null)
        throw new ArgumentNullException(nameof(identifier));

    var vaultBaseUrl = _configuration["AzureAd:ClientCertificates:0:KeyVaultUrl"];
    if (vaultBaseUrl == null)
        throw new ArgumentNullException(nameof(vaultBaseUrl));

    var secretClient = new SecretClient(vaultUri: new Uri(vaultBaseUrl),
        credential: new DefaultAzureCredential());

    // Create a new secret using the secret client.
    var secretName = identifier;
    //var secretVersion = "";
    KeyVaultSecret secret = await secretClient.GetSecretAsync(secretName);

    var privateKeyBytes = Convert.FromBase64String(secret.Value);

    var certificateWithPrivateKey = new X509Certificate2(privateKeyBytes, string.Empty,
        X509KeyStorageFlags.MachineKeySet);

    return certificateWithPrivateKey;
}

I usually use a secret for development and a certificate for production.

Using Microsoft.Identity.Client and MSAL

A third way of implementing the Graph client is to use Microsoft.Identity.Client or Microsoft.Identity.Web. This uses the ConfidentialClientApplicationBuilder to create a new IConfidentialClientApplication instance and can use a secret or a certificate to acquire the access token.

Microsoft.Identity.Client with a secret:

var app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithClientSecret(config.ClientSecret)
    .WithAuthority(new Uri(config.Authority))
    .Build();

app.AddInMemoryTokenCache();

or with a certificate and client assertions:

var app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithCertificate(certificate)
    .WithAuthority(new Uri(config.Authority))
    .Build();

app.AddInMemoryTokenCache();

The GraphServiceClient can be created using the DelegateAuthenticationProvider. As I understand it, you should avoid using the DelegateAuthenticationProvider if possible.

GraphServiceClient graphServiceClient = new GraphServiceClient("https://graph.microsoft.com/V1.0/",
    new DelegateAuthenticationProvider(async (requestMessage) =>
    {
        // Retrieve an access token for Microsoft Graph (gets a fresh token if needed).
        AuthenticationResult result = await app.AcquireTokenForClient(scopes)
            .ExecuteAsync();

        // Add the access token in the Authorization header of the API request.
        requestMessage.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);
    }));

Notes

There are three different ways of creating Microsoft Graph application clients, and it is sometimes hard to understand when you should use which. This does not apply to delegated clients; in an ASP.NET Core application you would use Microsoft.Identity.Web for a delegated client, which then uses Microsoft Graph on behalf of the user. System assigned managed identities do not require managing secrets or certificates but can only be used in the same tenant. The client credentials flow can be used from anywhere. Microsoft recommends using certificates when using the client credentials flow.

Links

https://learn.microsoft.com/en-us/azure/app-service/scenario-secure-app-access-microsoft-graph-as-app?tabs=azure-powershell

https://learn.microsoft.com/en-us/azure/active-directory/develop/sample-v2-code#service–daemon

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph

https://oceanleaf.ch/azure-managed-identity/

https://learningbydoing.cloud/blog/stop-using-client-secrets-start-using-managed-identities/

https://github.com/Azure/azure-sdk-for-net

https://learn.microsoft.com/en-us/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet

https://learn.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS

Saturday, 14. January 2023

Hyperonomy Digital Identity Lab

One’s Journey to the Grave – Hunter S. Thompson

“Life should not be a journey to the grave with the intention of arriving safely in a pretty and well preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming “Wow! What a Ride!”

― Hunter S. Thompson, The Proud Highway: Saga of a Desperate Southern Gentleman, 1955-1967

Friday, 13. January 2023

@_Nat Zone

Monday, January 16, from 8 p.m.: live-stream session to casually read through the draft report of the expert panel on the use of camera images for crime prevention and safety

The public comment period for the "Draft Report of the Expert Panel on the Use of Camera Images for Crime Prevention and Safety" opened on January 12, 2023, and runs until February 12. The document is more than 70 pages long, and I expect it to be fairly substantial.

It really ought to be read, but doing it alone tends to drag on, so I plan to do a live reading from 8 p.m. on January 16. I will pick up comments from the chat, and if anyone would like to join on audio or video, please get in touch (limited to people I know directly). Especially welcome are people who know this area well and are willing to teach the rest of us! For the record, this field is outside my own specialty, so I will be approaching it as just another learner.

Also, my health has not been great lately, so apologies in advance if I have to cancel. If that happens, I will post updates on my Twitter account (https://twitter.com/_nat).

Other documents I am planning to cover after this include the NIST SP800-63-4 PD, the BGIN SBT Study Report, and the U.S. Personal Financial Data Rights, among others. The last one is almost out of time (deadline 1/25). That is partly why I am declaring these plans here, to spur myself on.

Well, let's see if I can manage it…

Thursday, 12. January 2023

Doc Searls Weblog

FM Stations Down on Gibraltar Peak

[Update: 11:20 AM Wednesday 18 January] Well, I woke this morning to hear all the signals from Gibraltar Peak back on the air. I don’t know if the site is on generator power, or if electric power has been restored. This pop-out from a map symbol on Southern California Edison’s Power Outage Awareness Map suggests the latter:

However, I am listening right now to KZSB/1290 AM’s FM signal on 96.9 from Gibraltar Peak, where the show hosts are detailing many road closures, noting that sections of Gibraltar road are “down the hill,” meaning not there anymore, and unlikely to be fixed soon. I think I also heard them say their FM transmitter is on generator power. Far as I know, they are the only station covering local road closures, buildings damaged, farms and orchards damaged, and related topics, in great detail. It’s old-fashioned local radio at its best. Hats off.

Looking at the power requirements up there, only two stations are high-power ones: KDB/93.7’s transmitter pumps 4.9kW into a stack of five antenna bays that yield an ERP (effective radiated power) of 12.5kW, and KDRW(KCRW)/88.7 uses about 5.9kW to produce 12kW ERP through a stack of four antenna bays. Those are on the poles at the right and left ends of this photo, which I shot after the Jesusita Fire in 2009:

All the other stations’ transmitters require less wattage than a microwave oven. Three only put out ten watts. So, given typical modern transmitter efficiencies, I’m guessing the site probably has a 20kW generator, give or take, requiring about 2.5 gallons of propane per hour. So a 500-gallon propane tank (a typical size) will last about 200 hours. Of course, none of that will matter until the next outage, provided electrical service is actually restored now, or soon.

[Update: 3:34 PM Monday 16 January] Two news stories:

Edhat: Gibraltar Road Damage, by Edhat staff, January 11, 2023, 12:30 PM. It's a collection of revealing Gibraltar Road photos that I wish I had seen earlier. Apologies for that. This is the text of the whole story: "A resident of Gibraltar Road shared the below photos from the recent storm damage. A section of the road appears to be washed out with a Tesla trapped under some debris. The Tesla slide is located approximately a quarter mile past the Rattlesnake Canyon trailhead and the washed road is about a mile past the radio tower before reaching the west fork trailhead." If "mile past" means going uphill toward East Camino Cielo on the ridge, that means travel was (and is) impeded (at the very least) in both directions from the transmitter sites. The photos are dramatic. Please check them out.

Noozhawk: Several Radio Stations Still Off the Air After Storm Knocks Out Power to Gibraltar Transmitter Site, by Managing Editor Giana Magnoli, January 16, 2023, 1:47 pm

From the Noozhawk story:

"… they've helicoptered up a new battery and 600 gallons of diesel fuel to the site's backup generator, but they haven't been able to get it to work." I believe this is for lack of the expected banjo valve. (See below.)

"Southern California Edison, which supplies power to the transmission towers site, first reported an outage for the Gibraltar Road area at 2:34 a.m. Jan. 9, the day of the big storm." That was Monday. At least some stations would have switched over to generator power then.

"Repair crews haven't been sent to the site yet, according to the SCE Outage Map, but Franklin said he heard there could be new poles installed this week." That's John Franklin, who runs the whole Gibraltar Peak site.

"KCLU (102.3 FM) went off the air on Wednesday and was still off as of Monday. KJEE (92.9 FM) went down for several days but came back on the air on Thursday." Note: it's not on now—at least not on the radios I'm using.

"Santa Barbara County spokeswoman Kelsey Gerckens Buttitta said there are cell and radio station towers off Gibraltar Road that requires fuel to operate, and Gibraltar Road and East Camino Cielo Road are closed because of slides, debris and slipouts." Fixing those roads will be very difficult and time-consuming.

The story also lists signals I reported off as of last night. One correction to that: K250BS/97.9, which relays KTMS/990, is on the air. This I presume is because it’s at the KTMS/KTYD site. All the signals from that site (which is up the road from Gibraltar Peak) are still up. I assume that’s either because they are fed electric power separately from Gibraltar Peak, or because they are running on generator power.

[Update: 11:40 AM Monday 16 January] In a private group discussion with broadcast engineers, I am gathering that a stretch of Gibraltar Road close to the Gibraltar Peak site has collapsed. The location is 34°28’05.2″N 119°40’21″W, not far from the road into the transmitter site. This is not the section marked closed by Santa Barbara County on its map here. It is also not an easy fix, because it appears from one photograph I’ve seen (shared on a private group) that the land under the road slid away. It is also not the section where power lines to the site were knocked out. So we’re looking at three separate challenges here:

Restoring electrical service to Gibraltar Peak, and other places served by the same now-broken lines
Repairing Gibraltar Road in at least two places (the one marked on the county map and the one above)
Getting generators fueled and fixed.

On that last issue, I’m told that the site with most of the transmitters can be powered by a generator that awaits what is called a banjo valve. The KDB facility requires propane, and stayed up longer than the others on the peak while its own supply held up.

Gibraltar Peak isn’t the highest landform overlooking Santa Barbara. At 2180 feet, it’s about halfway up the south flank of the Santa Ynez Mountains. But it does provide an excellent vantage for FM stations that want the least obstructed view of the market’s population. That’s why more local signals come from here than from any other site in the region.

Except for now: a time that began with the storm last Tuesday. That’s when power lines feeding the peak were broken by falling rocks that also closed Gibraltar road. Here is a list of signals that have been knocked off the air (and are still off, as of the latest edit, on Sunday, January 15 at 11:15PM):

88.7 KDRW, which has a studio in Santa Barbara, but mostly relays KCRW from Santa Monica
89.5 KSBX, which relays KCBX from San Luis Obispo*
89.9 K210AD, which relays KPCC from Pasadena by way of KJAI from Ojai
90.3 KMRO-FM2, a booster for KMRO in Camarillo
91.5 K218CP, which relays KAWZ from Twin Falls, Idaho
93.7 KDB, which relays KUSC from Los Angeles (down after running on generator power for 5 days)
96.9 K245DD, which relays KZSB/1290 AM in Santa Barbara
97.9 K250BS, which relays KTMS/990 AM in Santa Barbara (and is on a KTMS tower, farther up the slope)
98.7 K254AH, which relays KPFK from Los Angeles
102.3 K272DT, the FM side of KCLU/1340 in Santa Barbara and KCLU/88.3 in Thousand Oaks

KTMS/990AM, KTYD/99.9FM, and K231CR/94.1, which relays KOSJ/1490AM, are still on the air as of Sunday night at 11:15pm. Those are a short distance farther up Gibraltar Road. (In the other box in the photo above.)

Here is a guide to substitute signals for some of the stations:

KCRW/KDRW can be heard on KCRU/89.1 from Oxnard (actually, Laguna Peak, in Pt. Mugu State Park)
KDB can be heard on KDSC/91.1 from Thousand Oaks (actually off Sulphur Mountain Road, south of Ojai)
KCLU can be heard on 1340 AM from Santa Barbara and 88.3 FM from Thousand Oaks
KPCC can be heard on KJAI/89.5 from Ojai (also transmitting from Sulphur Mountain Road)
KSBX/KCBX can be heard on 90.9 from Solvang (actually Broadcast Peak)
KPFK can be heard on its home signal (biggest in the U.S.) from Mount Wilson in Los Angeles at 90.7
KZSB can be heard on 1290 AM from Santa Barbara
KMRO can still be heard on its Camarillo main transmitter on 90.3

The two AM signals (marked green in the top list above) are strong in town and most of the FMs are weak but listenable here and there. And all of them can be heard through their live streams online.

Published stories so far, other than this one:

Gibraltar Peak Towers Out of Commission, in the Santa Barbara Independent. That story says two power poles were knocked out, as reported by John Franklin, who operates the whole transmitter site. (The story also kindly gives me credit for the photo there.)

KCLU's 102.3 FM signal in Santa Barbara is intermittently off the air, on the KCLU website. The image it shows is not of KCLU's antenna on Gibraltar Peak.

Mouse over this photo to see which of the many antennas on Gibraltar Peak radiate which signals (to the best of my knowledge, anyway).

The Independent says the site is a “relay” one. That’s correct in the sense that most of the stations there are satellites of bigger stations elsewhere. But KCLU is local to Santa Barbara (its anchor AM station is here), and the ratings reflect it. I wrote about those ratings a few years ago, in Where Public Radio Rocks. In that post, I noted that public radio is bigger in Santa Barbara than anywhere else in the country.

The most recent ratings (Spring of 2022), in % shares of total listening, are these:

KDB/93.9, classical music, relaying KUSC/91.1 from Los Angeles: 7.9%
KCLU/102.3 and 1340 in Santa Barbara (studios in Thousand Oaks), public broadcasting: 7.3%
KDRW/88.7 in Santa Barbara (main studio in Santa Monica, as KCRW/89.9): 4.6%
KPCC/89.9, relaying KJAI/89.5 and KPCC/89.3 in Pasadena: 1.3%
KSBX/89.5, relaying KCBX/90.1 from San Luis Obispo: 0.7%

Total: 21.8%.

That means more than a fifth of all radio listening in Santa Barbara is to noncommercial and public radio.

And, of all those stations, only KDB/KUSC and KCLU-AM are on the air right now.

By the way, when I check to see how public broadcasting is doing in other markets, nothing is close. Santa Barbara still kicks ass. I think that’s an interesting story, and I haven’t seen anyone report on it, other than here.

*Turns out KSBX is off the air permanently, after losing a coverage battle with KPBS/89.5 in San Diego. On December 29, they published a story in print and sound titled Why is 89.5 KSBX off the air? The answer is in the atmosphere. They blame tropospheric ducting, which much of the time makes KPBS come in like a local signal. Also, even though KPBS’s transmitter on Soledad Mountain (really more of a hill) above the coast at La Jolla is more than 200 miles away, it does pump out 26,000 watts, while KCBX puts out only 50 watts—and less in some directions. Though the story doesn’t mention it, KJAI, the KPCC relay on 89.5 for Ojai, is audible in Santa Barbara if nothing else is there. So that also didn’t help. By the way, I’m almost certain that the antenna identified as KSBX’s in the story’s photo (which is also one of mine) is actually for KMRO-2. KSBX’s is the one on the left in this photo here.


Identity Woman

IAPP Event: An Intro for Data Privacy Pros to Self-Sovereign Identity

An event hosted by the International Association of Privacy Professionals (IAPP) asked me to give a talk on the subject of self-sovereign identity and provide a foundational overview for privacy professionals. The following are some of the primary issues discussed throughout the event: The Panel was put together by Katharina Koerner, the Principal Technology Researcher at […]

The post IAPP Event: An Intro for Data Privacy Pros to Self-Sovereign Identity appeared first on Identity Woman.


Hyperonomy Digital Identity Lab

Hyperledger Identity Implementers WG – January 12, 2023

Mission

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Other Web 7.0 Resources

Also check out Welcome to Web 7.0 (including a link to the whitepaper): https://hyperonomy.com/2022/12/12/welcome-to-web-7-0/

Today’s Presentation

Click here to download the presentation:

didcomm-arm-3-0-48-20min (Download)

Wednesday, 11. January 2023

Doc Searls Weblog

Heavy Weather

Most of California has just two seasons: rain and fire. Rain is another name for Winter, and it peaks in January. In most years, January in California isn’t any more wet than, say, New York, Miami or Chicago. But every few years California gets monsoons. Big ones. This is one of those years.

The eighteen gallon storage tub in our yard is sixteen inches deep and serves as a rain gauge:

Yesterday morning it was less than half full. While it gathered rain, our devices blasted out alerts with instructions like this:

So we stayed home and watched the Web tell us how the drought was ending:

Wasn't long ago that Lake Cachuma was at 7%.

So that's good news. The bad news is about floods, ruined piers and wharfs, downed trees, power outages, levee breaches. The usual.

It should help to remember that the geology on both coasts is temporary and improvisational. The East Coast south of New England and Long Island (where coastal landforms were mostly dumped there or scraped bare by glaciers in the geologic yesterday) is a stretch of barrier islands that are essentially dunes shifted by storms. Same goes for the Gulf Coast. The West Coast looks more solid, with hills and mountains directly facing the sea. But Pacific storms in Winter routinely feature waves high as houses, pounding against the shores and sea cliffs.

Looking up the coast from Tijuana, within a few hundred years Coronado and Point Loma in San Diego, La Jolla, all the clifftop towns up the coast to Dana Point and Laguna, Palos Verdes Peninsula, Malibu and Point Dume, Carpinteria, the Santa Barbara Mesa and Hope Ranch, all of Isla Vista and UCSB, Pismo and Avila Beaches, all of Big Sur and the Pacific Coast Highway there, Carmel and the Monterey Peninsula, Aptos, Capitola and Santa Cruz, Davenport, Half Moon Bay, Pacifica, the headlands of San Francisco, Muir and Stimson Beaches and Bolinas in Marin, Fort Bragg in Mendocino County, and Crescent City in Humboldt—all in California—will be eaten away partially or entirely by weather and waves. Earthquakes will also weigh in.

The photo up top is of La Conchita, a stupidly located town on the South Coast, west of Ventura, four days after a landslide in 2005 took out 13 homes and killed 10 people. All the land above town is a pile of former and future landslides, sure to slide again when the ground is saturated with water. Such as now or soon.

So that’s a long view. For one that spans the next week, visit windy.com and slide the elevation up to FL (flight level) 340 (34000 feet):

That yellow river of wind is a jet stream hauling serious ass straight across the Pacific and into California. Jet streams are why the headwinds and tailwinds you see on seat-back displays showing flight progress on planes often say 100mph or more. Look at Windy before you fly coast to coast or overseas, and you can guess what the flight path will be. You can also see why it may take as little as five hours to get from Dulles to Heathrow, or more than seven hours to come back by a route that touches the Arctic Circle. Your plane is riding, fighting or circumventing high altitude winds that have huge influences on the weather below.

To see how, drop Windy down to the surface:

Those eddies alongside the jet stream are low pressure centers full of the moisture and wind we call storms. They spin along the sides of the jet stream the way dust devils twist up along the sides of highways full of passing trucks. Those two storm centers are spinning toward California and will bring more wind and rain.

Beside the sure damage those will bring, there will be two benefits. One is that California will be as green as Ireland for a few months. The other is that wildflowers will bloom all over the place.

The Death Valley folks are hedging their bet, but I’d put money on a nice bloom this Spring. Watch for it.

Bonus link: There’s An Underground City Beneath Sacramento In Northern California That Most People Don’t Know About. Excerpt: “…Old Sacramento was built up during the time of the gold rush, but the frequent flooding of this area obliterated its first level time and time again, until finally, the city abandoned that level altogether. It’s both fascinating and creepy to tour the abandoned level…”

Tuesday, 10. January 2023

Identity Woman

DWeb 2022 Talk: Decentralized Identity Open Standards

At the invitation of the organizers of DWeb Camp 2022, I delivered a session on the subject of three open standards for decentralized identities. Decentralized identifiers, verifiable credentials, and decentralized identifier communication were discussed throughout this session as three of the most important developing standards for decentralized identity. Decentralized Identifiers: I went through

At the invitation of the organizers of DWeb Camp 2022, I delivered a session on the subject of three open standards for decentralized identities. Decentralized identifiers, verifiable credentials, and decentralized identifier communication were discussed throughout this session as three of the most important developing standards for decentralized identity. Decentralized Identifiers: I went through a variety […]

The post DWeb 2022 Talk: Decentralized Identity Open Standards appeared first on Identity Woman.

Monday, 09. January 2023

Hyperonomy Digital Identity Lab

Unification

Suppose you have 3 football teams: What does it mean to unify these 3 organizations? Is it possible? What would it take to unify these 3 organizations? Let’s start with a common cause/purpose/vision/mission. Without these, the goal of unification is … Continue reading →

Suppose you have 3 football teams:

Buffalo Bills NFL football team
Manchester United UK football team
Local high school football team

What does it mean to unify these 3 organizations? Is it possible? What would it take to unify these 3 organizations?

Let’s start with a common cause/purpose/vision/mission. Without these, the goal of unification is impossible to attain.


DIDComm Agents vs. Identity Wallets: Litmus Test

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Application Scenario How would you create a #unified #decentralized software system whose scope includes: a. Global Air Traffic Control #ATC b. Aircraft Crew Scheduling #CrewSched … Continue reading →

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Application Scenario

How would you create a #unified #decentralized software system whose scope includes:

a. Global Air Traffic Control #ATC

b. Aircraft Crew Scheduling #CrewSched

c. Baggage Handling, and Tracking #BagBash

d. Passenger Ticketing, Seating, and Credentialing #PTSC

e. Food and Beverage Management #FoodBev

…using decentralized technologies like:

Decentralized Identifiers
DIDComm Agents
Verifiable Credentials

Meet Freddy…


Which design represents the smarter choice?

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Anyone who has owned a phone and a wallet knows the answer.

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Figure 1. Which design represents the smarter choice?

Anyone who has owned a phone and a wallet knows the answer.


Phil Windleys Technometria

Defining Digital Identity

Summary: The following is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023. The family therapist Salvador Minuchin declared, "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital ident

Summary: The following is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023.

The family therapist Salvador Minuchin declared, "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital identity contains data that uniquely describes a person or thing but also contains information about the subject's relationships to other entities.

To see an example of this, consider the data record that represents your car, stored somewhere in your state or country's computers. This record, commonly called a title, contains a vehicle identification number (VIN) that uniquely identifies the car to which it belongs. In addition, it contains other attributes of the car such as year, make, model, and color. The title also contains relationships: most notably, the title relates the vehicle to a person who owns it. In many places, the title is also a historical document, because it identifies every owner of the car from the time it was made, as well as whether it's been in a flood or otherwise salvaged.

While fields as diverse as philosophy, commerce, and technology define identity, most are not helpful in building, managing, and using digital identity systems. Instead, we need to define identity functionally, in a way that provides hooks for us to use in making decisions and thinking about problems that arise in digital identity.

Joe Andrieu, principal at Legendary Requirements, writes that "identity is how we recognize, remember, and respond to specific people and things. Identity systems acquire, correlate, apply, reason over, and govern information assets of subjects, identifiers, attributes, raw data, and context." This definition is my favorite because it has proven useful over the years in thinking through thorny identity issues.

The identity record for a car includes attributes that the system uses to recognize it: in this case, the VIN. The title also includes attributes that are useful to people and organizations who care about (that is, need to respond to) the car, including the owner, the state, and potential buyers. The government runs a system for managing titles that is used to create, manage, transfer, and govern vehicles (or, in Andrieu's formulation, remember them). The system is designed to achieve its primary goal (to record valuable property that the state has an interest in taxing and regulating) and secondary goals (protecting potential buyers and creating a way to prove ownership).

Digital identity management consists of processes for creating, managing, using, and eventually destroying digital records, like the one that contains your car title. These records might identify a person, a car, a computer, a piece of land, or almost anything else. Sometimes they are created simply for inventory purposes, but the more interesting ones are created with other purposes in mind: allowing or denying access to a building, the creation of a file, the transfer of funds, and so on. These relationships and the authorized actions associated with them make digital identities useful, valuable, and sometimes difficult to manage.

Photo Credit: Plate - WHOS_CAR from Lone Primate (CC BY-NC-SA 2.0)

Tags: identity ldid


Hyperonomy Digital Identity Lab

My Favorite Microsoft Stories

0. Alias Upfront for Windows What are your favorite Microsoft stories? Here’s my zero’th favorite… (#0 in my #FavoriteMicrosoftStories series) #THT – Alias #Upfront for Windows – “The most innovative new graphics product for Windows 3.0” – Bill Gates, CEO, … Continue reading →

0. Alias Upfront for Windows

What are your favorite Microsoft stories? Here’s my zero’th favorite… (#0 in my #FavoriteMicrosoftStories series)

#THT – Alias #Upfront for Windows – “The most innovative new graphics product for Windows 3.0” – Bill Gates, CEO, Microsoft.

YouTube: https://www.youtube.com/watch?v=Ihisz51Qiew

[Alias] seized the opportunity to purchase the Spacemaker technology and launched UpFront, a low-cost 3D Mac and Windows based package for architects. Alias achieved a major coup by impressing Bill Gates, who mentioned Upfront during a major Microsoft conference as a particularly innovative application under Windows. “In the graphics area, I picked Upfront from Alias Research. It is really an incredible tool for making sure the design is exactly right”, said the Chairman of Microsoft.

James Boritz, Ming Mah, Richard Brath, Dan Whitely, Jon Steinberg, and I were the team that brought Alias Upfront for Windows to life – in a back room on the 3rd floor. We also used the same code base to build Alias Upfront for the Mac. Upfront didn’t generate a lot of revenue ($2 million? …or was it $200,000?) but it was Alias Research’s very first desktop software product. We were and are very proud of what we accomplished.

Other members of the cast: Alistair Banks, Bill Gates, Sara Spalding.

1. IE4 and the Java Jam

What are your favorite Microsoft stories? Here’s one of my mine… (#1 in my #FavoriteMicrosoftStories series)

One of my favorite MS stories involves IE4 and one of the largest banks in Canada, one of Microsoft’s largest customers headquartered here in Toronto, around Spring 1997 – maybe ’96 (May 15th actually …I’ll never forget that date). The bank had committed themselves to using server-side Java before there was any hint of server-side Java apps, J2EE, etc. They were using the IE4 Java VM at a time when we (MS) were happy to have the VM running “dancing elephants” in IE for 24 hours without a crash. We never thought of using the MS VM to run what was at the time “the world’s largest server-side Java application”.

The bank’s Java app would crash again and again. The bank blamed our VM because our VM was the only one that would run the app at all (not the IBM Hursley VM, not the VM from Sun Microsystems).

CharlesF and I ended up going head-to-head on this …for me it was all about what support is MS willing to provide MS’s largest enterprise customers; Charles’ job was to work for BradSi and, among other responsibilities, deflect any distractions that would delay shipping IE4 …remember the days of “The Browser Wars”?

We each chose our teams. Bill Gates was already down the throats and backs of BradSi and Charles to not work on this. SteveB came to Toronto and got a huge strip torn off him – up one side and down the other – by an ex-IBM bank VP whose husband at the time was the IBM account executive for the same bank. SteveB was instantly on my team.

At the MGS in Orlando a few months later, Bill was walking around making himself available for the MS field people that were there (an MS internal WW sales conference). I started to walk up to Bill and to explain my role in the “Java Jam” as we called it and had only gotten a couple words out when SteveB pounced on Bill in front of dozens of MS people – his pointer fingers high up in the air pointing down at Bill – virtually yelling at him to listen to the story I had and to learn what it means to be passionate about our enterprise customers.

Later that evening, I got a call to meet with SteveB in his hotel suite. Just me and him …and PaulMa, CharlesF and the MS Java VM team on the other end of a conference phone. Steve was on one side of an L-shaped couch when I arrived eating cashews or almonds from a can. I sat on the other side of the couch. The phone was in the middle on the coffee table. The call was already in progress.

The best thing about this meeting was that everyone was on the same page. We just needed someone to make a damn decision: yes or no, are we going to fix this problem?

Steve started by asking Paul two questions: 1) “Is what the bank is doing on-strategy or off-strategy?” Paul replied “It’s not off strategy”. Then Steve asked question 2: “OK then, what are we doing to help the bank?”

Almost immediately PaulMa offered to call the VP at the bank. I gestured to Steve sitting across from me – no further than 2-3 feet away – and said “We’ve already had a lot of people talking to the bank”. “What we need is a decision. A yes or a no. Are we going to fix this problem or not? That’s all they want to hear”.

Steve immediately leaned over to me and whispered “If Paul is going to call, Paul is going to fix this”. We agreed Paul would call the VP as soon as they could synchronize their schedules.

Afterwards Steve said “If something like this ever happens again, call me right away. Call me directly. Don’t let it happen again”. We shook hands and I left.

The bug (a multi-threading sync data structure being overwritten) was fixed by the end of the following weekend …in time for PaulMa’s concall with the bank’s VP on the Tuesday afternoon.

After that, it used to freak people out in the Canadian sub when SteveB would high-five me in the hallway. It wasn’t worth trying to explain unless you knew the whole story.

What’s your Microsoft story?

Other members of the cast: Bill Gates, Charles Fitzgerald, Steve Wild, Oliver Sharp.

Related: https://www.justice.gov/atr/reply-brief-petitioner-united-states-america-us-v-microsoft-corp

2. Cancellation of the Local Web Storage System (LWSS) Project

What are your favorite Microsoft stories? Here’s my second favorite… (#2 in my #FavoriteMicrosoftStories series)

Who remembers what famous Microsoft event happened on December 17, 2000?

How about the cancellation of the Local Web Storage System (LWSS) project? …preventing it from shipping with “Outlook 10” (aka Office XP).

I remember it well because the next day I was asked to present at the MS Collaboration Partner Advisory Council meeting at the Atlantis Hotel in Nassau. Not a bad gig except about every 45 minutes Robert Ginsberg would start to shake his head and shout out “How could you Microsoft do this to us?”. This went on for the full 2-day event – before and after every presentation.

Robert was right to be upset. Being the cofounder of one of the leading Exchange Server WSS development shops in the world, Robert, and his business partner Andy Sakalian, had invested enormous amounts of time and effort learning about LWSS – inside and out – and had built several tools to help prospective LWSS ISVs build custom solutions on top of LWSS and “Outlook 10”.

Schitt happens – everywhere – but it was always more fun when it happened at Microsoft.

p.s. You’d like an upbeat ending? Andy introduced me to a jeweler from Montreal who taught me how to play Black Jack “the real way” and what finer venue than sitting at the tables at the Atlantis Casino. I made USD$900 that evening. I had to stay up until 4am but I did clear $900. …that’s about $75/hour. Back then there were a lot easier ways to make $900/day …a lot easier.

What’s your Microsoft story?

Other members of the cast: Robert Ginsburg.

Related: http://windowsitpro.com/exchange-server/office-designer-and-local-web-storage-system-absent-office-10-release

3. PDC 2001: Unified Storage Strategy

What are your favorite Microsoft stories? Here’s my third favorite… (#3 in my #FavoriteMicrosoftStories series)

There was a series of events I often attended (before and) after I left the mothership in 2001: the Microsoft Professional Developer Conferences (PDC). They have since gone away. …the conferences I mean. (I think there are still lots of professional Microsoft developers.)

Mary-Jo Foley, an excellent journalist, writer and research analyst, would also attend these events. Disclosure: Mary-Jo Foley is also a friend on Facebook (currently) but we really don’t know each other that well.

At these events, Mary-Jo would often be first in line at the microphones during the Bill Gates executive Q&A session. One of her favorite questions was to ask Bill how Microsoft was progressing with its unified storage strategy based on SQL Server …especially the following question: When was Exchange Server going to ship using a SQL Server-based storage system? Bill always had a good answer.

This was also one of my favorite BillG questions and, unknown to Mary-Jo as far as I know, it became a race for me to try and ask the unified storage question before she could. If I asked it first, she would mention me and my affiliation in her coverage. It worked once. It was a bit of a game where I was the only knowing player.

What’s your Microsoft story?

4. Microsoft Exchange Server .NET Whitepaper

What are your favorite Microsoft stories? Here’s another one of my favorites… (#4 in my #favoriteMicrosoftStories series)

This story starts in the Fall of 2000 – the very, very early days of .NET/.net/.Net …yes, the days when, even inside Microsoft, there wasn’t a consistent .NET messaging and positioning framework and every product group spelled .NET differently. How many different ways can you spell a 3-letter word that doesn’t mean anything? We used them all until…

Internally at Microsoft, we had the “.NET police force” who swooped down unexpectedly on presenters, content authors and writers to make sure that whichever way we chose to spell .NET, we had to change it!

And who was the .NET police chief? …none other than my good colleague from the “IE4 Java Jam”: Charles Fitzgerald. (For more information, read https://www.facebook.com/mwherman/posts/10153044551595932)

In the Fall of 2000, I moved into the EC3 team (Enterprise Connectivity Competency Center), a specialty team within MCS Canada formed as part of the acquisition of Linkage, a Toronto ISV that had an entire library of Exchange 5.5 connectors …but I digress.

The nicest thing about EC3 is that we were one of the only MCS practices that figured out how to subcontract ourselves back into the corp product groups. My first assignment was to write a .NET/.net/.Net strategy whitepaper for Thomas Rizzo when he was still on the Exchange Server team (pre SQL Server, pre SharePoint, pre Local Web Storage System). Like the Internet Wave, the .NET wave meant every product group needed to craft some sort of .NET developer story (aka strategy). My task was not only to document the Exchange Server .NET strategy in a whitepaper but I also had to make it up! “Those were the days my friend…”

I started by interviewing a lot of cool folks inside and outside the Exchange Server team: Gordon Mangione, Alex Hopmann, Brent Ingraham, Harry Katz, Keith McCall, Chris Vanden Berg, Thomas Rizzo, Lyle Curry, Jeff Wierer, and Kevin Hunter.

From there, I crafted a fairly compelling story (I mean strategy, sorry!). The whitepaper ended up being 15-20 pages.

I can’t find an original version of the “Developing Microsoft .NET Collaboration Solutions” whitepaper but you can still find the German version on the Microsoft web site: https://msdn.microsoft.com/de-de/library/cc405536.aspx . Google will do a pretty good job helping you translate from German to English.

And what about the .NET police force? Back then, there was no one who knew how to push content faster to the Microsoft.com web site than Tom Rizzo. He literally had the Exchange Server .NET whitepaper published overnight. When the .NET police came to our doors, we just smiled. We had already moved on to something else.

What’s your Microsoft story?

Related: https://learn.microsoft.com/de-de/previous-versions/technical-content/cc405536(v=msdn.10)

Members of the cast: Charles Fitzgerald, Gordon Mangione, Alex Hopmann, Brent Ingraham, Harry Katz, Keith McCall, Chris Vanden Berg, Thomas Rizzo, Lyle Curry, Jeff Wierer, and Kevin Hunter.

5. Windows SDK version 0.989

What are your favorite Microsoft stories? Here’s another one of my favorites… (#5 in my #FavoriteMicrosoftStories series)

This one isn’t really very juicy …more of an early milestone. Thank you goes to Bill Vaughn for helping with some of the names and dates.

It was the Spring of 1986. I worked for a Toronto research company/ISV (Optical Recording Corporation) where we were trying to use the very early versions of Windows to create (don’t laugh) an optical disc-based document storage and management system. …and I mean, very, very early versions of Windows: Windows SDK version 0.989, Windows 1.01, Windows 1.02, etc.

Later in the Fall of 1987 (I think), I attended my first Microsoft Windows developer event (MS used a small ‘d’ back then ;-)). It was in a simple Santa Clara hotel meeting room. 5-6 rows of chairs. Less than 100 people. SteveB was the MC and John Butler was the main technical guy as I remember. I remember John for his ponytail. He later went on to play a key role in creating Microsoft University (who remembers that?).

As a giveaway, we received a white cotton book bag with the pale blue Windows logo on it. It had a copy of the Windows “runtime environment” and the SDK in it. Maybe a copy of the Microsoft C compiler. The entire library of Microsoft “small d” developer documentation was at most 3 small beige PC binders.

What’s your Microsoft story?

6. Orthogonal Defect Classification at Microsoft

What are your favorite Microsoft stories? Here’s another one of my favorites… (#6 in my #FavoriteMicrosoftStories series)

In the Fall of 1997, I was honored to present to Billg, Nathan Myhrvold, and about 30 development managers at the Billg Fall 1997 Retreat: Improving the Software Development Processes at Microsoft. My topic was…

Related: https://hyperonomy.com/2021/03/20/orthogonal-defect-classification-odc4msft/


Orie Steele

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers…

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers (DIDs) As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry.
Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers (DIDs)

As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry. These articles are self-chosen topics an employee feels passionate about, and give you an inside look at the limitless creativity and skill the Transmute team applies to our work.

We love the web platform at Transmute.

One of our favorite things to do is to build small demos that show how powerful open web standards are, and some of the awesome features that have been developed at W3C to support web developers.

We’re also excited to participate in the Linux Foundation’s newest initiative, the Open Wallet Foundation:

https://openwallet.foundation/ https://github.com/openwallet-foundation

As part of our work helping evaluate open source building blocks for digital wallets, we built:

Briefcase

What’s Our Briefcase Made Of?

Briefcase uses Web Cryptography to share encrypted content using URI Fragments.

It also builds on top of DID JWK, which is the simplest DID Method we are aware of and provides a great starting point for building authentic relationships between parties that control private keys.

In order to make did:jwk even cooler, we added an emoji encoding of the JWK Thumbprint URI (RFC 9278) that takes the last few characters of the thumbprint and converts them to emojis.
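To make the mechanics concrete, here is a minimal TypeScript sketch (not the Briefcase source) of how a did:jwk identifier and an RFC 9278 thumbprint URI can be derived with the Web Cryptography API. The emoji table and the four-character tail are illustrative assumptions; the demo’s actual mapping may differ.

// Minimal sketch (not the Briefcase source): derive a did:jwk identifier and an
// RFC 9278 JWK Thumbprint URI from a Web CryptoKey, then render the tail of the
// thumbprint as emojis. The EMOJI table and tail length are hypothetical.
const EMOJI = ["😀", "🎩", "🦄", "🌵", "🚀", "🐙", "🍩", "🧭"];

const b64url = (bytes: Uint8Array): string =>
  btoa(String.fromCharCode(...Array.from(bytes)))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

async function didJwkWithThumbprint() {
  // Generate a P-256 key pair with the Web Cryptography API.
  const { publicKey } = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" }, true, ["sign", "verify"]);
  const jwk = await crypto.subtle.exportKey("jwk", publicKey);

  // did:jwk is simply "did:jwk:" + base64url(JWK as JSON).
  const did = "did:jwk:" + b64url(new TextEncoder().encode(JSON.stringify(jwk)));

  // RFC 7638 thumbprint: SHA-256 over the required EC members in lexicographic order.
  const canonical = JSON.stringify({ crv: jwk.crv, kty: jwk.kty, x: jwk.x, y: jwk.y });
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(canonical));
  const thumbprint = b64url(new Uint8Array(digest));

  // RFC 9278 thumbprint URI, plus an emoji rendering of the last few characters.
  const thumbprintUri = `urn:ietf:params:oauth:jwk-thumbprint:sha-256:${thumbprint}`;
  const emojiTail = [...thumbprint.slice(-4)]
    .map((c) => EMOJI[c.charCodeAt(0) % EMOJI.length]).join("");
  return { did, thumbprintUri, emojiTail };
}

didJwkWithThumbprint().then(console.log);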

We use the same approach in our DID Meme demo for post quantum cryptography, see <link>.

But where to store the public and private keys?

We’ve open sourced a tiny JavaScript wrapper around IndexedDB that can store Web CryptoKeys and documents:

https://github.com/openwallet-foundation/Technical-Staging-OWF/pull/3
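For context, here is a sketch of the pattern such a wrapper relies on (this is not the open-sourced code itself, and the database, store, and function names are made up): CryptoKey objects, including non-extractable ones, can be persisted in IndexedDB because the structured clone algorithm knows how to serialize them.

// Sketch only (not the open-sourced wrapper): persist and load a CryptoKey via
// IndexedDB. Database, store, and function names here are made up.
function openKeyStore(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("briefcase-keys", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("keys");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

export async function saveKey(name: string, key: CryptoKey): Promise<void> {
  const db = await openKeyStore();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction("keys", "readwrite");
    tx.objectStore("keys").put(key, name); // CryptoKey is structured-cloneable
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

export async function loadKey(name: string): Promise<CryptoKey | undefined> {
  const db = await openKeyStore();
  return new Promise((resolve, reject) => {
    const req = db.transaction("keys", "readonly").objectStore("keys").get(name);
    req.onsuccess = () => resolve(req.result as CryptoKey | undefined);
    req.onerror = () => reject(req.error);
  });
}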

Once you have a way to store public and private keys, you can use this wonderful library for producing JSON Web Signatures (JWS) and JSON Web Encryptions (JWE).

What to Put in Your Briefcase

You can share a DID Key with a friend, who can encrypt a message to this DID using well supported standards from IETF including RFC7516.

Then share the link containing the encrypted message with the private key controller, who will be able to decrypt it.
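Roughly, the encrypt-and-share step could look like the TypeScript sketch below. It assumes the “jose” npm library for the compact JWE (RFC 7516) and a made-up briefcase.example host; it illustrates the flow and is not Briefcase’s actual code.

// Sketch of the share flow (assumptions: the "jose" npm library, a made-up
// briefcase.example host). Encrypt a note to a friend's did:jwk key as a compact
// JWE and put the ciphertext in the URI fragment, which never reaches the server.
import * as jose from "jose";

// Recover the friend's public JWK from their did:jwk: strip the prefix, base64url-decode.
function jwkFromDid(didJwk: string): jose.JWK {
  const encoded = didJwk.replace(/^did:jwk:/, "");
  return JSON.parse(new TextDecoder().decode(jose.base64url.decode(encoded)));
}

export async function shareEncrypted(didJwk: string, message: string): Promise<string> {
  const recipientKey = await jose.importJWK(jwkFromDid(didJwk), "ECDH-ES");
  const jwe = await new jose.CompactEncrypt(new TextEncoder().encode(message))
    .setProtectedHeader({ alg: "ECDH-ES", enc: "A256GCM" })
    .encrypt(recipientKey);
  return `https://briefcase.example/#${jwe}`;
}

export async function openShared(link: string, privateKey: jose.KeyLike): Promise<string> {
  const jwe = new URL(link).hash.slice(1);
  const { plaintext } = await jose.compactDecrypt(jwe, privateKey);
  return new TextDecoder().decode(plaintext);
}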

In order to make this more fun, we added a 3D animation of a briefcase opening :)

Putting it all together, we’ve made a simple and easy way to share encrypted structured content built on W3C and IETF standards, and we had a lot of fun doing it!

Orie Steele, Transmute’s CTO and Co-Founder, has managed security concerns for startups and publicly traded companies, building secure web applications in Finance, Energy, and Healthcare.

Connect with Orie on LinkedIn, Twitter, and GitHub

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers… was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Hyperonomy Digital Identity Lab

Brief History of Microsoft’s Disconnected Technology Strategies – circa 2005

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Office conference wrapuphttps://jonudell.net/udell/2005-02-04-office-conference-wrapup.html Herman: It’s nice to see Microsoft consolidating around a smaller set of core technologies, but when it comes to electron

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Office conference wrapup
https://jonudell.net/udell/2005-02-04-office-conference-wrapup.html

Herman: It’s nice to see Microsoft consolidating around a smaller set of core technologies, but when it comes to electronic forms, Word and Excel have their own point solutions, Outlook has its own point solution, InfoPath has its own point solution, Access has its own point solution. In the developer platform you have ASP.NET and WebForms. We’re constantly in the situation where we’re trying to guess which ones are strategic. Can you give us some insight?

Why does Microsoft have so many eforms technologies? …which ones are strategic? Billg and StevenSi offer some answers
https://web.archive.org/web/20051216034728/http://dotnetjunkies.com:80/WebLog/mwherman2000/archive/2005/02/14/54401.aspx

AUDIENCE QUESTION: Michael Herman, from Parallelspace Corporation. It’s nice to see Microsoft consolidating around a smaller set of core technologies. But, when it comes to electronic forms, Word and Excel have their own point solution. Outlook has its own point solution. InfoPath has its own point solution. Access has its own point solution. In the developer platform, you have ASP.NET and WinForms. We’re constantly in a situation we’re trying to guess which ones are strategic. Can you give us some insight?

Microsoft’s varied collaboration platform strategies: More comments from Billg and StevenSi
https://web.archive.org/web/20060903014934/http://www.dotnetjunkies.com:80/WebLog/mwherman2000/archive/2005/02/14/54405.aspx

A question from Mark Moore (formerly of KPMG and an early SPS 2001 adopter)…

AUDIENCE QUESTION: A number of us have been on the collaboration path with Microsoft for a long time starting with Outlook and Exchange. A couple of us probably remember a team productivity update. Then SharePoint 2001, SharePoint 2003, Digital Dashboard was in there. In going from milestone to milestone on this path, there hasn’t been a lot to leverage moving from one point to the other. Today, in the Whidbey talk I was gratified to hear that the Whidbey Web Parts were going to be backward compatible. I’m hoping that you can assure us that those of us who have been on the path with you for a while, that this cycle of creative destruction is coming to an end.

PDC 05: Are the PDC silos going to present a disconnected view of the Microsoft developer platform?
https://web.archive.org/web/20060211002512/http://dotnetjunkies.com:80/WebLog/mwherman2000/archive/2005/07/14/131275.aspx

For example, in the description for session “Choosing the Right Presentation Technology: Avalon, Windows Forms, ASP.NET, IE, and More”, there is no mention of InfoPath “12” and the forms server demonstrated at TechEd 2005.

I would encourage everyone attending this session to rate it a 1 of 5 if the new InfoPath forms and forms server is not included in the analysis.

Is Microsoft going to present an integrated view of the Microsoft platform or a disconnected one?

Is the PDC going to be one large Microsoft “technology fair” with no strategic intent other than giving each product group a venue to promote their own technology bits? …leaving developers to guess what is strategic and what is not. (50% probability)

Pre-blogging the PDC
https://jonudell.net/udell/2005-08-02-pre-blogging-the-pdc.html

As Microsoft gears up for its annual Professional Developers Conference, Michael Herman — CTO and founder of Parallelspace — is asking some probing questions about the agenda:

Is the PDC going to be one large Microsoft “technology fair” with no strategic intent other than giving each product group a venue to promote their own technology bits? …leaving developers to guess what is strategic and what is not. (50% probability) [Michael Herman: Are the PDC silos going to present a disconnected view of the Microsoft platform?]


Michael asked similar questions at the Office Developers Conference I attended in February. I transcribed one of them — about Microsoft’s hydra-headed electronic forms strategy — in this blog item. Michael blogged the same exchange, and he also zeroed in on another set of questions and answers about unified storage that I transcribed from the February conference.
These questions are interesting, but I find the process itself even more so. The PDC tends to be ahistorical, focusing on futures more than follow-through. In the hallways you see attendees reading the entrails and trying to divine which futures will be strategic, at a level more granular than the grand themes: Windows, NT, Win95, the Internet, tablet PC, .NET, Hailstorm, WinFX.


Damien Bod

Implement the OAUTH 2.0 Token Exchange delegated flow between an Azure AD API and an API protected using OpenIddict

This article shows how to implement the OAUTH 2.0 Token Exchange RFC 8693 delegated flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the OAUTH 2.0 Token Exchange RFC 8693 delegated flow to get a new OpenIddict delegated […]

This article shows how to implement the OAUTH 2.0 Token Exchange RFC 8693 delegated flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the OAUTH 2.0 Token Exchange RFC 8693 delegated flow to get a new OpenIddict delegated access token using the AAD delegated access token. An ASP.NET Core Razor page application using a confidential client is used to get the Azure AD access token with an access_as_user scope. By using the OAUTH 2.0 Token Exchange flow, mixing of delegated and application authorization can be avoided and the trust required between the systems can be reduced.

Code: https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

Setup OAUTH 2.0 Token Exchange RFC 8693 for delegated flows

A Razor page UI application is implemented using Azure AD as the identity provider. This application authenticates using a confidential client against Azure AD. The UI uses Microsoft.Identity.Web to implement the client authentication logic. The application requests an Azure AD delegated access token to use the API which is also protected using Azure AD. This API application needs to use a downstream API which is protected using a separate identity provider and is protected using OpenIddict. The API uses the Azure AD access token to acquire another access token which the OpenIddict protected API accepts. The OAuth 2.0 token exchange RFC 8693 is used to implement this using the delegated flow. Only known Azure AD delegated access tokens can be used. The identity provider which is used to host OpenIddict implements the server logic of the token exchange flow. I have kept this separated but I assume this could be integrated into OpenIddict as well. It is important to validate the flow correctly, and not just the flow but also the mapping logic between the different identities used in the delegated access token. I did not implement the full spec in this demo, just the bits required for the delegated flow. Impersonation and other such use cases for the RFC 8693 are not supported at present. Maybe I will implement this later.

Implement the OAUTH 2.0 Token Exchange client

The GetApiDataAsync method is used to get an access token for the OpenIddict downstream API and use it to get the data. It uses the GetApiTokenOauthGrantTokenExchange to get the access token using the token exchange flow and then uses it to call the business API. The configuration values are used as well as the client secret to acquire the new token.

public async Task<List<string>> GetApiDataAsync(string aadAccessToken) { try { var client = _clientFactory.CreateClient(); client.BaseAddress = new Uri( _downstreamApi.Value.ApiBaseAddress); var access_token = await _apiTokenClient .GetApiTokenOauthGrantTokenExchange ( _downstreamApi.Value.ClientId, _downstreamApi.Value.Audience, _downstreamApi.Value.ScopeForAccessToken, _downstreamApi.Value.ClientSecret, aadAccessToken ); client.SetBearerToken(access_token); var response = await client.GetAsync("api/values"); if (response.IsSuccessStatusCode) { var data = await JsonSerializer .DeserializeAsync<List<string>>( await response.Content.ReadAsStreamAsync()); if(data != null) return data; return new List<string>(); } throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}"); } catch (Exception e) { throw new ApplicationException($"Exception {e}"); } }

The GetApiTokenOauthGrantTokenExchangeAad is an internal method used to call the OpenIddict identity provider to get the correct access token. This method is only called once per session or as long as the token is valid. This is normally cached once acquired. The method passes the required parameters which match the server settings.

private async Task<AccessTokenItem> GetApiTokenOauthGrantTokenExchangeAad( string clientId, string audience, string scope, string clientSecret, string aadAccessToken) { var tokenExchangeHttpClient = _httpClientFactory.CreateClient(); tokenExchangeHttpClient.BaseAddress = new Uri( _downstreamApiConfigurations.Value.IdentityProviderUrl); var tokenExchangeSuccessResponse = await RequestDelegatedAccessToken .GetDelegatedApiTokenTokenExchange( new GetDelegatedApiTokenOAuthTokenExchangeModel { Scope = scope, AccessToken = aadAccessToken, ClientSecret = clientSecret, Audience = audience, ClientId = clientId, EndpointUrl = "/connect/oauthTokenExchangetoken", GrantExchangeHttpClient = tokenExchangeHttpClient }, _logger); if (tokenExchangeSuccessResponse != null) { return new AccessTokenItem { ExpiresIn = DateTime.UtcNow .AddSeconds(tokenExchangeSuccessResponse.expires_in), AccessToken = tokenExchangeSuccessResponse.access_token }; } _logger.LogError( "no success response from oauth token exchange access token request"); throw new ApplicationException( "no success response from oauth token exchange access token request"); }

The GetDelegatedApiTokenTokenExchange method implements the client side of the OAuth flow. This creates an authentication header using basic authentication as we only want to use a confidential client for this. The parameters are passed as a KeyValuePair and match the defined specifications in the RFC 8693 for the POST body. If the data is returned correctly, a success response is returned; otherwise an error response is returned as defined in the RFC, with a few extra parameters. The OauthTokenExchangeSuccessResponse is used to get the successful HTTP response from the POST request.

public static async Task<OauthTokenExchangeSuccessResponse?> GetDelegatedApiTokenTokenExchange( GetDelegatedApiTokenOAuthTokenExchangeModel reqData, ILogger logger) { if (reqData.GrantExchangeHttpClient == null) throw new ArgumentException("Httpclient missing, is null"); string credentials = CreateBasicAuthenticationHeader(reqData); reqData.GrantExchangeHttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials); KeyValuePair<string, string>[] oauthTokenExchangeBody = CreateTokenExchangeBody(reqData); var response = await reqData.GrantExchangeHttpClient.PostAsync(reqData.EndpointUrl, new FormUrlEncodedContent(oauthTokenExchangeBody)); if (response.IsSuccessStatusCode) { var tokenResponse = await JsonSerializer.DeserializeAsync<OauthTokenExchangeSuccessResponse>( await response.Content.ReadAsStreamAsync()); return tokenResponse; } if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized) { // Unauthorized error var errorResult = await JsonSerializer.DeserializeAsync<OauthTokenExchangeErrorResponse>( await response.Content.ReadAsStreamAsync()); if (errorResult != null) { logger.LogInformation("{error} {error_description} {correlation_id} {trace_id}", errorResult.error, errorResult.error_description, errorResult.correlation_id, errorResult.trace_id); } else { logger.LogInformation("RequestDelegatedAccessToken Error, Unauthorized unknown reason"); } } else { // unknown error, log logger.LogInformation("RequestDelegatedAccessToken Error unknown reason"); } return null; }

The CreateTokenExchangeBody creates the body. This is implemented for the delegated flow which requests an access token. The subject_token parameter is used to pass the Azure AD access token.

private static KeyValuePair<string, string>[] CreateTokenExchangeBody( GetDelegatedApiTokenOAuthTokenExchangeModel reqData) { // Content-Type: application/x-www-form-urlencoded var oauthTokenExchangeBody = new[] { new KeyValuePair<string, string>("grant_type", OAuthGrantExchangeConsts.GRANT_TYPE), new KeyValuePair<string, string>("audience", reqData.Audience), new KeyValuePair<string, string>("subject_token_type", OAuthGrantExchangeConsts.TOKEN_TYPE_ACCESS_TOKEN), new KeyValuePair<string, string>("subject_token", reqData.AccessToken), new KeyValuePair<string, string>("scope", reqData.Scope) // new KeyValuePair<string, string>("resource", "--optional--") // new KeyValuePair<string, string>("requested_token_type", "--optional--") // new KeyValuePair<string, string>("actor_token", "--optional--") // new KeyValuePair<string, string>("actor_token_type", "--optional--") }; return oauthTokenExchangeBody; }

I created a consts class for the string types defined in the specification.

public class OAuthGrantExchangeConsts { public const string TOKEN_TYPE_ACCESS_TOKEN = "urn:ietf:params:oauth:token-type:access_token"; public const string TOKEN_TYPE_REFRESH_TOKEN = "urn:ietf:params:oauth:token-type:refresh_token"; public const string TOKEN_TYPE_ID_TOKEN = "urn:ietf:params:oauth:token-type:id_token"; public const string TOKEN_TYPE_SAML1 = "urn:ietf:params:oauth:token-type:saml1"; public const string TOKEN_TYPE_SAML2 = "urn:ietf:params:oauth:token-type:saml2"; public const string GRANT_TYPE = "urn:ietf:params:oauth:grant-type:token-exchange"; public const string ERROR_INVALID_REQUEST = "invalid_request"; public const string ERROR_INVALID_CLIENT = "invalid_client"; public const string ERROR_INVALID_GRANT = "invalid_grant"; public const string ERROR_UNAUTHORIZED_CLIENT = "unauthorized_client"; public const string ERROR_UNSUPPORTED_GRANT_TYPE = "unsupported_grant_type"; public const string ERROR_INVALID_SCOPE = "invalid_scope"; // ... more consts, see the code for the full definitions }

That’s all that is required to implement the client side of the OAuth Token exchange delegated flow. If you require other flow types from this specification, then these need to be implemented separately. See the RFC docs for details (in the links below).

Implement the OAUTH 2.0 Token Exchange server

The server part of the flow needs to validate a few different things. The identity provider validates the POST request using BASIC authentication, then it validates the body of the HTTP POST request. The server needs to fully validate the Azure AD access token including the signature, aud and iss as per standard. Once the Azure AD token is validated, the claims can be used to authorize the identity delegated in the access token. Only delegated access tokens should be accepted and so in an Azure AD V2 token, you can do this by checking for an oid claim and a scp claim. These claims might be renamed if using the default Microsoft namespaces. The server must match its users to the Azure AD users. You need to be careful when using emails for this. The Azure AD oid is a good claim to use for this.

The server must do the following:

Validate the Basic authentication
Validate the body of the POST request as per standard
Validate the access token fully
Validate the claims, do the authorization
Generate the new access token as per standard

Validate Basic authentication

Basic authentication is used so that only confidential clients can use the API. This is not the strongest of authentication methods but it is how the specification recommends sending the clientId and clientSecret. The authentication is validated using an Authorize attribute and the correct scheme.

[Authorize(AuthenticationSchemes = BasicAuthenticationDefaults.AuthenticationScheme)] [HttpPost("~/connect/oauthTokenExchangetoken"), Produces("application/json")] public async Task<IActionResult> Exchange([FromForm] OauthTokenExchangePayload oauthTokenExchangePayload) { // Implement validate and create AT logic }

Once authenticated, the validation can begin.

Validate payload of POST request

The payload of the HTTP POST request is validated. This checks that the body has the expected values and only the ones which are allowed. If any are incorrect, an unauthorized response with the error parameter is returned as defined in the specification.

var (Valid, Reason, Error) = ValidateOauthTokenExchangeRequestPayload .IsValid(oauthTokenExchangePayload, _oauthTokenExchangeConfigurationConfiguration); if(!Valid) { return UnauthorizedValidationParametersFailed( oauthTokenExchangePayload, Reason, Error); }

Validate access token and signature

If the payload is validated, then the access token sent using the subject_token parameter is validated. This must be fully validated including the signature. The well known endpoints of the Azure AD identity provider are used to get the public keys of the certificate used to create the JWT token. These are used to validate the token signature. The iss and the aud are validated and checked against the expected values.

// get well known endpoints and validate access token sent in the assertion var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>( _oauthTokenExchangeConfigurationConfiguration.AccessTokenMetadataAddress, new OpenIdConnectConfigurationRetriever()); var wellKnownEndpoints = await configurationManager .GetConfigurationAsync(); var accessTokenValidationResult = ValidateOauthTokenExchangeRequestPayload.ValidateTokenAndSignature( oauthTokenExchangePayload.subject_token, _oauthTokenExchangeConfigurationConfiguration, wellKnownEndpoints.SigningKeys); if(!accessTokenValidationResult.Valid) { return UnauthorizedValidationTokenAndSignatureFailed( oauthTokenExchangePayload, accessTokenValidationResult); }

The ValidateTokenAndSignature method checks and validates the token.

public static (bool Valid, string Reason, ClaimsPrincipal? ClaimsPrincipal) ValidateTokenAndSignature( string jwtToken, OauthTokenExchangeConfiguration oboConfiguration, ICollection<SecurityKey> signingKeys) { try { var validationParameters = new TokenValidationParameters { RequireExpirationTime = true, ValidateLifetime = true, ClockSkew = TimeSpan.FromMinutes(1), RequireSignedTokens = true, ValidateIssuerSigningKey = true, IssuerSigningKeys = signingKeys, ValidateIssuer = true, ValidIssuer = oboConfiguration.AccessTokenAuthority, ValidateAudience = true, ValidAudience = oboConfiguration.AccessTokenAudience }; ISecurityTokenValidator tokenValidator = new JwtSecurityTokenHandler(); var claimsPrincipal = tokenValidator .ValidateToken(jwtToken, validationParameters, out var _); return (true, string.Empty, claimsPrincipal); } catch (Exception ex) { return (false, $"Access Token Authorization failed {ex.Message}", null); } }

Validate claims and authorize the access token

Now that the token is validated, the returned claimsPrincipal can be used to check and authorize the identity from the access token. The token must be validated as a delegated token and must contain a scp claim and an oid claim. The scp is what we added to use the service; we added an access_as_user scope. I would avoid roles as roles can be used for application tokens as well. I matched the name claim with the email of the identity in the second IAM system. Using the OID claim would be a more trusted way of doing this.

// get claims from aad token and re use in OpenIddict token var claimsPrincipal = accessTokenValidationResult.ClaimsPrincipal; var isDelegatedToken = ValidateOauthTokenExchangeRequestPayload .IsDelegatedAadAccessToken(claimsPrincipal); if (!isDelegatedToken) { return UnauthorizedValidationRequireDelegatedTokenFailed(); } var name = ValidateOauthTokenExchangeRequestPayload .GetPreferredUserName(claimsPrincipal); var isNameAndEmail = ValidateOauthTokenExchangeRequestPayload .IsEmailValid(name); if(!isNameAndEmail) { return UnauthorizedValidationPrefferedUserNameFailed(); } // validate user exists var user = await _userManager.FindByNameAsync(name); if (user == null) { return UnauthorizedValidationNoUserExistsFailed(); }

The delegated access token check uses the oid and the scp claims. Sometimes the claim types get renamed to the Microsoft namespaces, so I added a fallback check to validate both.

public static bool IsDelegatedAadAccessToken(ClaimsPrincipal claimsPrincipal) { // oid if magic MS namespaces not used var oid = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "http://schemas.microsoft.com/identity/claims/objectidentifier"); // scp if magic MS namespaces not added var scp = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "http://schemas.microsoft.com/identity/claims/scope"); if (oid != null && scp != null) { return true; } oid = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "oid"); scp = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "scp"); if (oid != null && scp != null) { return true; } return false; }

Generate new access token

A new access token is created using the same certificate as the default one used by OpenIddict. This makes it possible to validate the token using the well known endpoints.

// use data and return new access token var (ActiveCertificate, _) = await Startup.GetCertificates(_environment, _configuration); var tokenData = new CreateDelegatedAccessTokenPayloadModel { Sub = Guid.NewGuid().ToString(), ClaimsPrincipal = claimsPrincipal, SigningCredentials = ActiveCertificate, Scope = _oauthTokenExchangeConfigurationConfiguration.ScopeForNewAccessToken, Audience = _oauthTokenExchangeConfigurationConfiguration.AudienceForNewAccessToken, Issuer = _oauthTokenExchangeConfigurationConfiguration.IssuerForNewAccessToken, OriginalClientId = _oauthTokenExchangeConfigurationConfiguration.AccessTokenAudience }; var accessToken = CreateDelegatedAccessTokenPayload.GenerateJwtTokenAsync(tokenData); _logger.LogInformation("OBO new access token returned sub {sub}", tokenData.Sub); if(IdentityModelEventSource.ShowPII) { _logger.LogDebug("OBO new access token returned for sub {sub} for user {Username}", tokenData.Sub, ValidateOauthTokenExchangeRequestPayload.GetPreferredUserName(claimsPrincipal)); } return Ok(new OauthTokenExchangeSuccessResponse { expires_in = 60 * 60, access_token = accessToken, scope = oauthTokenExchangePayload.scope });

The claims are added like in the RFC specification.

public static string GenerateJwtTokenAsync(CreateDelegatedAccessTokenPayloadModel payload) { SigningCredentials signingCredentials = new X509SigningCredentials(payload.SigningCredentials); var alg = signingCredentials.Algorithm; //{ // "alg": "RS256", // "kid": "....", // "typ": "at+jwt", //} var subject = new ClaimsIdentity(new[] { new Claim("sub", payload.Sub), new Claim("scope", payload.Scope), new Claim("act", $"{{ \"sub\": \"{payload.OriginalClientId}\" }}", JsonClaimValueTypes.Json ) }); if(payload.ClaimsPrincipal != null) { var name = ValidateOauthTokenExchangeRequestPayload.GetPreferredUserName(payload.ClaimsPrincipal); var azp = ValidateOauthTokenExchangeRequestPayload.GetAzp(payload.ClaimsPrincipal); var azpacr = ValidateOauthTokenExchangeRequestPayload.GetAzpacr(payload.ClaimsPrincipal); if(!string.IsNullOrEmpty(name)) subject.AddClaim(new Claim("name", name)); if (!string.IsNullOrEmpty(name)) subject.AddClaim(new Claim("azp", azp)); if (!string.IsNullOrEmpty(name)) subject.AddClaim(new Claim("azpacr", azpacr)); } var tokenHandler = new JwtSecurityTokenHandler(); var tokenDescriptor = new SecurityTokenDescriptor { Subject = subject, Expires = DateTime.UtcNow.AddHours(1), IssuedAt = DateTime.UtcNow, Issuer = "https://localhost:44318/", Audience = payload.Audience, SigningCredentials = signingCredentials, TokenType = "at+jwt" }; tokenDescriptor.AdditionalHeaderClaims ??= new Dictionary<string, object>(); if (!tokenDescriptor.AdditionalHeaderClaims.ContainsKey("alg")) { tokenDescriptor.AdditionalHeaderClaims.Add("alg", alg); } var token = tokenHandler.CreateToken(tokenDescriptor); return tokenHandler.WriteToken(token); }

Start all the applications and, if everything is configured correctly with your Azure AD tenant, the data from the OpenIddict protected API can be used and displayed in the Azure AD protected UI.

Links

https://documentation.openiddict.com/configuration/application-permissions.html

https://datatracker.ietf.org/doc/html/rfc8693

https://www.youtube.com/watch?v=Ue8HKBGkIJY&t=

https://github.com/damienbod/OnBehalfFlowOidcDownstreamApi

https://www.rfc-editor.org/rfc/rfc6749#section-5.2

https://github.com/blowdart/idunno.Authentication/tree/dev/src/idunno.Authentication.Basic

Sunday, 08. January 2023

Heres Tom with the Weather

Correction: check_webfinger!

Mastodon is not the fediverse and in my check_webfinger! post, I’m afraid I made that assumption. In particular, I concluded So, from the perspective of mastodon, the domain component of your identifier you are known as is determined by which domain serves your actor document rather than the domain serving the original “well known” webfinger document. which is not necessarily true if

Mastodon is not the fediverse and in my check_webfinger! post, I’m afraid I made that assumption. In particular, I concluded

So, from the perspective of mastodon, the domain component of your identifier you are known as is determined by which domain serves your actor document rather than the domain serving the original “well known” webfinger document.

which is not necessarily true if you consider the fediverse outside of Mastodon.

Instead, it seems that I should have said that the domain component of your identifier is determined by the domain component of the subject field returned in the webfinger response from the domain that serves your actor document, when mastodon makes its 2nd webfinger request, which is done in the check_webfinger! method.

def check_webfinger!
  webfinger = webfinger!("acct:#{@username}@#{@domain}")
  confirmed_username, confirmed_domain = split_acct(webfinger.subject)

In the code above, the @domain passed to webfinger! is the domain of the server providing the activitypub actor document but the confirmed_domain can be different (e.g. your personal domain) if your original “well known” webfinger document was not pointing to a Mastodon server for providing the actor document.

Therefore, if you have a static personal website, it is not necessary to also host the actor document there as long as the fediverse node providing the actor document is smart enough to provide your personal domain in the subject when mastodon makes a webfinger call to it. A caveat is that such a fediverse node accommodating personal domains would not be able to distinguish between bob@a.com and bob@b.com when mastodon webfingers server.com for bob@server.com.
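To make the two lookups concrete, here is a TypeScript sketch with made-up domains (the user, the static site, and the fediverse node are all hypothetical); only the relevant WebFinger fields are modeled.

// Sketch of the two-step lookup described above (made-up domains, only the
// relevant WebFinger fields). Webfinger the personal domain, follow the "self"
// link to the host serving the actor document, webfinger that host, and take the
// acct: in its "subject" as the confirmed identifier.
type Webfinger = {
  subject: string; // e.g. "acct:bob@bobs-site.example"
  links: { rel: string; type?: string; href?: string }[];
};

async function webfinger(domain: string, acct: string): Promise<Webfinger> {
  const url = `https://${domain}/.well-known/webfinger?resource=${encodeURIComponent(`acct:${acct}`)}`;
  const res = await fetch(url, { headers: { accept: "application/jrd+json" } });
  return res.json();
}

export async function confirmedAcct(username: string, personalDomain: string): Promise<string> {
  // 1st webfinger: the static personal site points at an actor document hosted elsewhere.
  const first = await webfinger(personalDomain, `${username}@${personalDomain}`);
  const actorUrl = first.links.find((l) => l.rel === "self")?.href;
  if (!actorUrl) throw new Error("no actor document advertised");

  // 2nd webfinger: ask the host that actually serves the actor document who this is.
  const actorHost = new URL(actorUrl).host;
  const second = await webfinger(actorHost, `${username}@${actorHost}`);

  // The subject can still name the personal domain if that node accommodates it.
  return second.subject.replace(/^acct:/, "");
}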

Saturday, 07. January 2023

Identity Woman

Forbes Quotes me on Social Media’s Future considering Safety & Identity

I was cited in an article that was published in Forbes. The article was part of a series that was assessing the activities of 2022 on Twitter, the crazy policies of a new CEO, and the ramifications on the future of social media. The article’s central emphasis was on the question of whether or not, […] The post Forbes Quotes me on Social Media’s Future considering Safety & Identity appeared f

I was cited in an article that was published in Forbes. The article was part of a series that was assessing the activities of 2022 on Twitter, the crazy policies of a new CEO, and the ramifications on the future of social media. The article’s central emphasis was on the question of whether or not, […]

The post Forbes Quotes me on Social Media’s Future considering Safety & Identity appeared first on Identity Woman.



Friday, 06. January 2023

@_Nat Zone

Thinking through the risks of the Twitter 230-million-user data leak

In the early hours of January 6, 2023, a Bloomberg report…

In the early hours of January 6, 2023, the Bloomberg report “Twitter: data on more than 230 million users may have leaked”[1] came across my feed, so I would like to use it as an occasion to think a little about the risk side of this incident.

Table of contents

Overview of the incident
Possible abuses and risks
Risk 1: Main accounts being linked to sub-accounts and anonymous alt accounts via email address or phone number
Risk 2: The data being used for phishing
Risk 3: The data being used for profiling
Risk 4: Spam / DDoS attacks
The identifier problem

Overview of the incident

According to the English-language Bloomberg report of 5:17 JST on January 6, 2023, information on more than 230 million users believed to have leaked from Twitter was posted on the forum “BreachForum”. The data reportedly contains email addresses, Twitter handles (the name starting with @; in my case @_nat), and “full names” (whatever that means). While writing this entry it occurred to me to check piyolog, which turned out to have a much more detailed summary, “A look at the roughly 200 million records believed to have leaked from Twitter”[2]. According to it, the data contains:

Display name
Username
Follower count
Account creation date
Email address

Well done as always, piyokango. Bloomberg, on the other hand: write it up properly.

The data does not appear to have been newly extracted; rather, it was most likely extracted using a “vulnerability” in the Twitter API that existed from June 2021 to January 2022. The vulnerability itself was reported to Twitter in January 2022 and fixed immediately. According to Twitter’s report dated August 5, 2022[3], the issue was that “if someone submitted an email address or phone number to Twitter’s systems, Twitter’s systems would tell that person which Twitter account, if any, was associated with it”[4][5].

It was already known in July 2022 that this API vulnerability had actually been exploited. From the report it reads as though only the Twitter handle can be extracted this way, but once you know the handle you also know the “name” on the Twitter profile tied to it, so what was posted on BreachForum may be previously extracted data joined with the publicly available “name”. In fact, on December 23, 2022, a data set of roughly 400 million records that additionally included phone numbers was apparently being offered for “exclusive sale at USD 200,000, or to multiple buyers at USD 60,000 each”[6], so the posted data may be that set cleansed by removing duplicates and so on. For more detail, please see the piyolog article[7].

Possible abuses and risks

First of all, no passwords were leaked, so there is no direct risk of unauthorized logins from this. That said, it is a good opportunity to review how you log in and to set up a FIDO authenticator or one-time passwords (OTP); please do so. Twitter's August 5 report mentioned above recommends this as well. (Bloomberg, by contrast, leads with "you should change your password in the Account tab while logged in" before even mentioning stronger authentication. Why? That is exactly the kind of thing I mean.)

Now, on to the risks. First, let us consider the risks of abuse of the vulnerability disclosed in August 2022.

Risk 1: Linking the main account with sub-accounts8 and secret accounts9 via email address or phone number

What the risk is: For some individuals, having sub-accounts and secret accounts linked to their main account via an email address or phone number is probably the biggest risk. It could lead to people combing through a sub-account's past tweets and calling them out, or in some cases even blackmail. The original vulnerability report also cited this kind of linkage as the biggest risk.

Countermeasures: Hmm. For what has already happened, this is difficult. Deleting sub-accounts and secret accounts can help to some extent, provided nobody has archived them yet. On the other hand, you may then be berated for deleting tweets or accounts, on the assumption that you must have something to hide.

Going forward, if you create sub-accounts or secret accounts, use a different email address and a different mobile phone number for each. Incidentally, digging deeper into this area (see below) is the reason I wanted to write this entry in the first place, so please take a look at that part as well.

Risk 2: Phishing using this information

What the risk is: Even if the information is already public or leaked, being shown information about yourself that looks correct, together with a sense of urgency, makes it easy to fall for a scam. If an SMS arrived at your registered mobile number 080-0987-654311 saying something like "The password for your Twitter account @_nat10 (email address: nat@example.com), opened on January 5, 2015, has been leaked. Please go to the following site and reset your password immediately", you might just do it, right?

Countermeasures: Changing your password, as Bloomberg recommends, achieves nothing here. Two-factor authentication, as Twitter recommends, is almost certainly effective if you choose "security key". OTP via text message or an authenticator app is vulnerable to phishing and therefore does not mitigate this risk (although it does help against attacks that use previously leaked passwords).

Risk 3: Use for profiling

What the risk is: The leaked information, and whatever can be pulled in from it, is linked via email address or phone number to data a company already holds and used to build profiles for targeting.

Countermeasures: There are not many realistic, that is cost-effective, countermeasures. If you think of any, please write them in the comments.

Risk 4: Spam / DDoS attacks

What the risk is: For ordinary people, spam arriving at the leaked email address or phone number interrupts work and free time, or causes you to miss messages you must not miss. For well-known people, it can amount to a DDoS attack.

Countermeasures: This is unfortunate news for those who had disclosed their address or number only to important contacts and therefore never missed anything. You may have to whitelist the parties you currently share it with and drop everything else, or change the number. For phone numbers in particular, unless your carrier can handle this on their side, the number may become unusable.

There are surely more risks if you examine this closely, but that is it for now. If you think of others, I would appreciate a comment.

The identifier problem

When I started writing this post, the working title was "Thinking about the identifier problem using the Twitter 230-million-user leak as a case study", and I intended to dig into the identifier problem. But it has gotten quite long, it is almost 3 a.m., and I am tired, so I will stop here for now. What I had in mind to write about:

The problem of confusing identifiers with credentials: My impression was that the original "vulnerability" may actually have been a specification-level bug stemming from exactly this confusion, and writing about that was the original trigger for this entry (though I ran out of steam before getting there). You may think "surely not", but this confusion is fairly common, especially where email addresses or phone numbers are used for password resets. Well, I would like to believe that was not the case for Twitter. The cross-account linkage problem: mainly the points touched on in the risks section, plus backup authentication methods for pseudonymous accounts, and the problem of conflating backup authentication methods when reusable identifiers are used. Twitter's use of email addresses and phone numbers beyond their stated purpose: data collected as backup authentication methods was used for targeting. Details here → Twitter faces $250 million FTC fine for misusing emails and phone numbers (2020-08-04)

As for identifiers, I have started a series of explanations on the Japanese-language YouTube channel I launched at the end of last year (as of January 7, 2023, only the first episode has been uploaded).

In that first episode, titled "Classification of identifiers and their risks: Complete guide to identifiers, part 1"12, I talk about how identifiers can be classified and what risks they carry (embedded below). As you will see, both email addresses and phone numbers are fundamentally reusable identifiers. They are also omnidirectional identifiers, and in many cases persistent identifiers as well. Hardly anyone changes their email address and mobile phone number every three years, right? I had intended to draw out the lessons of this incident with these properties in mind, but that will have to wait for another occasion.

See you next time!

Thursday, 05. January 2023

Identity Woman

Quoted in IEEE article about Worldcoin and their shift to Digital ID.


I was asked to offer my perspective on the risks associated with Worldcoin's biometric data, which was included in an article IEEE Spectrum published. A cryptocurrency, Worldcoin, aspires to become the most globally and uniformly distributed cryptocurrency ever by allocating the same modest number of coins to every individual on the planet. The […]

The post Quoted in IEEE article about Worldcoin and their shift to Digital ID. appeared first on Identity Woman.

Wednesday, 04. January 2023

MyDigitalFootprint

Peak Paradox and #privacy

I have explored privacy and identity in a previous post, taken from the perspective of the individual. 

This post is from the perspective of the state/nation/law. I commented on Michael Becker's LinkedIn post about his list of words and definitions for the identity/privacy space, saying that everyone remains fixated on making their particular solution work for a problem the user does not have, namely "#privacy". Whilst every language and country has an ideal notion of privacy, the wide variety of cultures means there is no unified “concept of privacy”, even if privacy is explicitly named or defined in a specific language, law or culture.

I asked #chatGPT, the “AI” bot from OpenAI, the question, “how would a capitalist and a socialist view privacy?” 

“Capitalists would see privacy as an important aspect of individual liberty and autonomy and they view privacy as a property right that can be traded or sold, and they may support policies that allow companies to collect and use personal data as long as they have obtained the necessary consent.

A socialist would define privacy as the right to personal autonomy and control over one's own thoughts, feelings, and personal information. It is the ability to make decisions about how and with whom one shares personal information, without interference or coercion from external forces such as the government or capitalist institutions. In a socialist society, privacy would be protected as a fundamental human right and would be respected and upheld by the community as a whole.”

StJohn Deakins added to the comments by suggesting we move on from ideals of the political spectrum to examples.  

North Korea: dictatorship with a command economy: Kim owns you and owns your data, including your thoughts (rights to brain data will become a seriously scary "thing" in the next decade)

China: one-party state, ruling on behalf of 'the people’ with a state-controlled market economy: the party owns you and your data (on behalf of the Chinese people)

USA: individualist democracy with a free market (where companies have the same legal rights as individuals).  You own your data, and others own it, too (lots of them).

EU:  social democracy with a free market (where markets serve people, and people choose how). You own your data (as a human right). You, in theory, participate in choosing how.


It is remarkably common that concepts can be well defined in language and law, yet a wide variance remains in how they are understood and used. Words and language are dynamic and never stationary, and we should not try to own or control them; defining them for a context is what matters. Just consider love, truth, safety, honesty, work, trust, beauty and risk.

Plotting privacy on the Peak-Paradox framework makes me consider how the different interpretations are framed by the purposes presented in the mapping. It is evident that the interpretation of privacy must vary as we move around in our understanding of what we are optimising for. Within peaks, such as Peak Individual Purpose, dilemmas are created; for example, Mark Zuckerberg wants privacy for himself but not for any of his #meta users.

To add an additional layer to the complexity onion of privacy, it should be noted that the concept of privacy probably exists in each of us until there is an external demand or constraint.  As unpacked in this post, where I pick up on the work of two behavioural scientists, John Darley and Daniel Batson, who published in 1973 “From Jerusalem to Jericho: A study of Situational and Dispositional Variables in Helping Behavior.” It was an investigation into the psychology of prosocial behaviour. The research demonstrates that when we introduce time pressure, our dispositional factors (I like privacy) stand for nothing and that we will act differently because of the constraints. 

When you design for privacy, you need to understand what the user is optimising for and be aware that your design will change what privacy means to the users because it is a constraint.


Tuesday, 03. January 2023

MyDigitalFootprint

We can be very good at answering questions, but why don't we challenge them?


A problem (among many) with data is that many people ask questions that are easy.  How many and who clicked this button? These are easy to ask, occupy time, fill in KPI cards and are often easy to answer. Why do so few kick back to ask if it is the right question?  Why did they click the button? Oh, we don’t have that data!

But we can create constraints that mean we get biased data as we don’t understand human behaviour in context. 

----

In 1973 two behavioural scientists, John Darley and Daniel Batson published "From Jerusalem to Jericho: A study of Situational and Dispositional Variables in Helping Behavior." It was an investigation into the psychology of prosocial behaviour.

Darley and Batson picked students who were studying to be priests at the Princeton Theological Seminary to determine how situational factors influenced prosocial behaviour. Hypothesis: When someone is kind to another, is that because he or she has some innate qualities that lead to kindness—or because some situational factors simply determine and allow for kind behaviours.

The famous study was across three days in late fall; they had a bunch of seminary students come to a building, meet with a researcher, and fill out a bunch of surveys. The surveys partly addressed if the students were religious primarily for intrinsic or for extrinsic reasons (with “intrinsic reasons” being like “I am motivated to do good in the world” and “extrinsic reasons” being like “I really want to get into heaven.”). Then the participants were told that they needed to prepare a brief talk about The Good Samaritan — which is a story about how a hapless victim on the side of the road was just passed by holy individuals, whilst a non-holy Samaritan took the time to stop and help the fellow out. The story's context is significant as the participants were told that they needed to walk to a nearby building to meet up with another team member and then give their sermon. However, using random selection, the student was told that they:

Had plenty of time, and were early.

Were on time, but should head over now so as not to be late, or

Were running late, and really needed to go without delay - no excuse.

Obviously, the situation was rigged, and all participants found a fallen stranger (an actor) in a narrow alleyway who acted sick on the ground and in need of help. The narrow alleyway was selected as there was a choice: help this guy or step over him!

The “time constraint” crafted behavioural change as:

63% of participants in the “early” condition stopped to help the stranger.

45% of participants in the “on-time” condition stopped to help the stranger.

10% of participants in the “late” condition stopped to help the stranger.

Overall, 60% of the participants were unwilling to help the “victim.” This is ironic because the participants were Princeton students studying to be priests, about to give a talk on the lessons of the Good Samaritan, yet their behaviour was manipulated by the situational constraints.

A side note before the core finding is that dispositional factors (what you believed) had no bearing on helping behaviour. In other words, people who reported as religious for intrinsic reasons were no more likely than others to stop to help.

When it comes to human behaviour, we have a strong bias toward thinking that people do what they do because of internal traits that drive their behaviours (Dunning, Ross & Nisbett, 1990, “The Overconfidence Effect in Social Prediction”). Data shows us that dispositional factors are relatively weak predictors of what we do, whilst situational factors (which we cannot see or measure and often seem benign or inconsequential) play a powerful role in shaping our behaviours.


We can only answer the questions we have data for, but that does not mean the answer is right or that the data is a good predictor, because we don't understand the constraints. 

CEO Take Away

If data supports your decisions, who is accountable and responsible for ensuring it answers the question we want and is not just data without context? In the next board agenda, put an item under AOB: “Do we understand situational bias in our data?” If there is no debate, or only looks of bewilderment, perhaps it is time to ask better questions of those who think the data is accurate. 






Monday, 02. January 2023

Phil Windleys Technometria

Better Onboarding with Verifiable Credentials


Summary: Making onboarding easier would result in better customer experience, more customers, and reduced support costs.

Last week a friend referred me to a question on Guru.com about devices for connected cars. Since I used to do Fuse, he figured I might be able to help. I was happy to. Unfortunately, Guru wasn't so happy to let me.

You can't answer a question at Guru.com without registering, enrolling, and onboarding. Fair enough. So I started down the path. Here's their process:

1. Enter name and email on first screen.
2. Choose whether you're an employer or freelancer and set your password. Be sure to follow their password conventions. Then agree to the terms of service and agree to get emails (or not).
3. Enter the four-digit code that was sent to the email address you gave in (1).
4. Solve the captcha.
5. Choose whether to use 2FA or security questions to secure your account. I chose 2FA.
6. Verify your phone number using SMS or WhatsApp (they recommend WhatsApp). I chose SMS.
7. Enter the 4 digit code they send.
8. Continue with 2FA. I'm not sure why this screen shows up twice.
9. Logout and log back in.
10. Scan the QR code to set up a TOTP authenticator.
11. Enter the one-time code from the authenticator app.
12. Upload a photo and enter a mailing address (yes, they're required).

Congratulations! You've gone through Guru's twelve step program and you're registered! I went through all this just to discover I can't answer questions unless I pay them money. I bailed.

As I was going through this, I couldn't help thinking how much easier it could be using verifiable credentials.

1. Enter an email.
2. Scan the QR code they present using my smart wallet to establish a DID connection.
3. Verify information about myself that they ask for using verifiable credentials.

Credentials asserting your verified email and phone number would be easy enough to get if I don't already have them. And they're not verifying the address and photo anyway, so there's no need for anything but a self-asserted credential for those. Admittedly, if I've never used verifiable credentials before, they need to coach me on getting a wallet and the phone and email address credentials. But they're already doing that for the authenticator app in step 10 above.
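For the sake of illustration, here is a rough sketch (in Go) of what that three-step exchange could ask for. The PresentationRequest shape and field names are hypothetical, not Guru's API or any particular wallet protocol.

package main

import (
    "encoding/json"
    "fmt"
)

// PresentationRequest is a hypothetical, simplified stand-in for the request a
// site could send over the DID connection established by scanning the QR code.
type PresentationRequest struct {
    Challenge            string   `json:"challenge"`             // nonce that binds the wallet's response
    RequestedCredentials []string `json:"requested_credentials"` // verifiable credentials the site wants
    SelfAsserted         []string `json:"self_asserted"`         // fields the user may simply attest
}

func main() {
    req := PresentationRequest{
        Challenge:            "d9f7a3c1", // would be random per session
        RequestedCredentials: []string{"VerifiedEmail", "VerifiedPhoneNumber"},
        SelfAsserted:         []string{"photo", "mailing_address"},
    }
    out, _ := json.MarshalIndent(req, "", "  ")
    fmt.Println(string(out))
}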

Guru's registration process is one of the most arduous I have encountered. If I were them and unwilling to use verifiable credentials, I'd at least split it up and let people add their photo, address, and authenticator app after they're already on board. Guru.com (and lots of other web sites) have to be shedding potential customers at every step in their onboarding process. I wonder if they keep track of abandoned registrations and where it happens? Does anyone? I'd love to know the numbers.

Verifiable credentials could make the onboarding experience a breeze, get more customers in the door, and reduce the cost of customer support calls associated with it.

Photo Credit: Abandoned Road from Tim Emerich (CC0)

Tags: verifiable+credentials ssi identity web


Damien Bod

Azure AD Multi tenant Azure App registration consent


This article looks at Azure Active Directory and consent with multi-tenant Azure App registrations. Consent works differently depending on the user type, the tenant policies and the required permissions. It is sometimes hard to understand why a user cannot log in, or where consent has to be given for a specific Azure App registration which exposes permissions or business functions.

Code: https://github.com/damienbod/AspNetCoreTenantSelect

Setup

An Azure App registration was set up as a web client using a secret, and it uses some basic delegated permissions. The Azure App registration is a multi-tenant application. You can verify that the Azure App registration is a multi-tenant one by opening the manifest file and checking the signInAudience property.

"signInAudience": "AzureADandPersonalMicrosoftAccount",

You can also check this directly in the portal using the authentication blade.

When using a delegated multi-tenant Azure App registration, you do NOT give consent using the Azure App registration itself. Consent is given by a user or an administrator using the Enterprise application created in your tenant for the Azure App registration. Each multi-tenant Azure App registration can have multiple Enterprise applications, each of which would require consent depending on the user type, the tenant policies and the permissions required.

An ASP.NET Core application was implemented to authenticate using Azure AD, and because it is a multi-tenant application, any Azure AD user from any tenant could authenticate using the application. When using multi-tenant applications, you need to restrict access by validating the acr claims, the user, and other such authorization checks.
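As a minimal sketch of one such check (written in Go only to illustrate the idea; this is not how Microsoft.Identity.Web implements it), you could allow only tokens whose tid claim is on an allow-list. A real implementation must validate the token signature and issuer before trusting any claim, and the tenant id below is a made-up example.

package main

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "strings"
)

// allowedTenants lists the tenant ids permitted to use the application.
var allowedTenants = map[string]bool{
    "7ff95b15-dc21-4ba6-bc92-824856578fc1": true, // illustrative value only
}

// tenantAllowed decodes the JWT payload (signature validation omitted here,
// but mandatory in real code) and checks the "tid" claim.
func tenantAllowed(token string) (bool, error) {
    parts := strings.Split(token, ".")
    if len(parts) != 3 {
        return false, fmt.Errorf("not a JWT")
    }
    payload, err := base64.RawURLEncoding.DecodeString(parts[1])
    if err != nil {
        return false, err
    }
    var claims struct {
        Tid string `json:"tid"`
    }
    if err := json.Unmarshal(payload, &claims); err != nil {
        return false, err
    }
    return allowedTenants[claims.Tid], nil
}

func main() {
    ok, err := tenantAllowed("<paste an id token here>")
    fmt.Println(ok, err)
}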

Microsoft.Identity.Web is used to implement the ASP.NET Core authentication. This hides and renames the OpenID Connect standards which are used. These packages are optimized for ASP.NET Core and use MSAL.

services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

The Microsoft.Identity.Web package uses the AzureAd configuration section by default, which can be renamed if required. The TenantId is set to common for multi-tenant access.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", // Use common for multi tenant applications "TenantId": "common", "ClientId": "086233fd-a846-47e0-a057-7aa05b5a5479", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/SignoutCallbackOidc" // "ClientSecret": "add secret to the user secrets" }, Grant consent as an admin

When you authenticate from the application for the first time as an Azure administrator user, a consent screen which allows you to Consent on behalf of your organization is displayed. If you check the checkbox, all users and administrators can use the application. If you do not check this checkbox and give only your own consent, only you can use the application, although other administrator users can give consent as well. No other Azure AD user can access with this consent.

Grant consent as a user

When an Azure AD user (not an administrator) uses the application and no admin consent has already been given for the tenant, the consent screen is displayed for the user, but only if no admin-restricted delegated permissions are required and the tenant policy allows this. The user can then give user consent for their own account.

Normally the user requires an administrator to give admin consent for all users before a user of a specific tenant can access the application.

Enterprise application

If an administrator has already given a user consent for their own account, an Azure Enterprise application is created inside the tenant. You can view all the user consents already given in the permissions blade.

Grant consent for the whole tenant using the portal

You could give consent for all users using this application. This would be required if no consent has been given for all users but the IT admin has already consented for a single account.

Remove consent using the Enterprise application

It is really important to remove consent if no longer used. This can be done in the Enterprise application created from the Azure App registration for the specific tenant. This can be done in the properties blade.

Consent can be hard to set up on a specific tenant because it depends on several factors. The account type, the Azure AD policies and the required permissions all change how the consent is displayed, blocked or can be given. You should also verify the tenant where the multi-tenant Azure App registration is implemented.

Links:

https://learn.microsoft.com/en-us/azure/active-directory/develop/application-consent-experience

https://learn.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview

Saturday, 31. December 2022

Jon Udell

Instance-qualified Mastodon URLs


In Lists and people on Mastodon I showed how I added a list column to the following tab of the Mastodon browser I’m building. That was a step in the direction of easier and more powerful list management. It enables me to see whether the people I follow are assigned to lists, and to consider who should be on a list (or perhaps on a different list).

Today, as I began to use that new affordance in earnest, I discovered a new challenge. In order to assign someone to a list, or change a list assignment, I clicked the link in the account_url column to open that person’s profile in the Mastodon web app. That was fine for accounts on my home server, mastodon.social. An account URL like Shelley Powers’ https://mastodon.social/@burningbird brings me to Shelley’s profile on my home server where the list manager is available.

But if I’m following someone elsewhere, like Ward Cunningham at https://mastodon.radio/@k9ox, the account URL brings me to Ward’s profile on that server where the list manager isn’t available. In order to assign Ward to a list I had to capture his account URL, paste it into the search box in my home server’s web app, and then click the resulting link: https://mastodon.social/@k9ox@mastodon.radio.

That got old real fast, so I adjusted the following tab to display the latter flavor of URL which I’ll call an instance-qualified URL.

Steampipe provides a few ways to make that adjustment. As a user of the dashboard, you can use Postgres’ regular expression functions to do the transformation in the SQL query that drives the view. But you’d rather not have to. It’s much nicer if the plugin does that for you, so the SQL can just refer to a column called instance_qualified_url.

I chose the latter approach. As the author of a Steampipe plugin you want to make life as easy as possible for users of the plugin. When you’re the author of both the plugin and the dashboard, as I am in this case, you can enjoy a nice virtuous cycle. As the dashboard evolves you discover ways to improve the plugin, which leads to more use of the dashboard, which suggests further opportunities to improve the plugin. I have been greatly enjoying the coevolution of these two components!

Adding a new column to a Steampipe table

To make the change, I extended the structure that defines the columns of the tables mapped from Mastodon Account API. A Steampipe plugin defines columns using a list of structs like this.

...,
{
    Name:        "url",
    Type:        proto.ColumnType_STRING,
    Description: "URL for the account.",
},
...,

That struct says: “When the name of a top-level field in the API response is url, tell Steampipe to make a database column with that name and with the Postgres type text.”

You can also transform values in API responses to synthesize new columns that don’t appear in API responses. Here’s the struct I added for this case.

...,
{
    Name:        "instance_qualified_account_url",
    Type:        proto.ColumnType_STRING,
    Description: "Account URL prefixed with my instance.",
    Transform:   transform.FromValue().Transform(instanceQualifiedAccountUrl),
},
...

That one says: “Send the API response to the transform function instanceQualifiedAccountUrl, and use its result as the value of the column.”

Here’s the function.

func instanceQualifiedAccountUrl(ctx context.Context, input *transform.TransformData) (interface{}, error) {
    url := input.Value.(*mastodon.Status).Account.URL
    qualifiedUrl := qualifiedUrl(ctx, url)
    return qualifiedUrl, nil
}

It delegates the real work to another function.

func qualifiedUrl(ctx context.Context, url string) string {
    plugin.Logger(ctx).Debug("instanceQualifiedUrl", "server", homeServer, "url", url)
    re := regexp.MustCompile(`https://([^/]+)/@(.+)`)
    matches := re.FindStringSubmatch(url)
    if len(matches) == 0 {
        return url
    }
    server := matches[1]
    person := matches[2]
    qualifiedUrl := fmt.Sprintf("%s/@%s@%s", homeServer, person, server)
    plugin.Logger(ctx).Debug("instanceQualifiedUrl", "qualifiedUrl", qualifiedUrl)
    schemelessHomeServer := strings.ReplaceAll(homeServer, "https://", "")
    qualifiedUrl = strings.ReplaceAll(qualifiedUrl, "@"+schemelessHomeServer, "")
    plugin.Logger(ctx).Debug("qualifiedUrl", "qualifiedUrl", qualifiedUrl)
    return qualifiedUrl
}

Why? Two different sets of column definitions need the same transformation. instanceQualifiedAccountUrl works with responses from the Account API. But account URLs also appear in the Status API that drives timeline views. Those use a different transform function, instanceQualifiedStatusUrl, to do the same transformation for a different API response.
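For example, with the home server set to mastodon.social, the transformation behaves like this. The snippet below is a small standalone rework of the same regex logic, for illustration only; it is not the plugin code itself.

package main

import (
    "fmt"
    "regexp"
    "strings"
)

func qualify(homeServer, url string) string {
    re := regexp.MustCompile(`https://([^/]+)/@(.+)`)
    m := re.FindStringSubmatch(url)
    if len(m) == 0 {
        return url
    }
    q := fmt.Sprintf("%s/@%s@%s", homeServer, m[2], m[1])
    // Strip the redundant suffix for accounts already on the home server.
    return strings.ReplaceAll(q, "@"+strings.ReplaceAll(homeServer, "https://", ""), "")
}

func main() {
    home := "https://mastodon.social"
    fmt.Println(qualify(home, "https://mastodon.radio/@k9ox"))         // https://mastodon.social/@k9ox@mastodon.radio
    fmt.Println(qualify(home, "https://mastodon.social/@burningbird")) // https://mastodon.social/@burningbird
}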

From account URLs to status URLs

The instanceQualifiedAccountUrl column solved the original problem. I was able to remove my plugin-author hat, put on my dashboard-author hat, and refer to account urls as instance-qualified URLs in all the tabs that display them. Any such link now leads to a profile that I view through the lens of mastodon.social and that enables me to use the web app’s list manager directly, without the cumbersome copy/paste/search procedure.

My happy dance didn’t last long, though. Newly sensitized to that copy/paste/search friction, I realized it was still happening when I try to reply to items that appear in a timeline view. Here is a recent example: https://techpolicy.social/@mnot/109610641523489182.

That’s the URL displayed in the dashboard. When I click it I land on Mark’s server and can view the item, but if I try to reply I’m confronted with the dreaded copy/paste/search operation.

No problem! I’ll use a similar transform! Not so fast. I can form a URL like https://mastodon.social/@mnot@techpolicy.social/109610641523489182 but it doesn’t go anywhere.

If I do the copy/paste/search operation, I land on a similar-but-different URL: https://mastodon.social/@mnot@techpolicy.social/109610641692667630. It has the same structure but a different toot ID. This URL is also the one that appears in the web app’s home timeline, which is why I can reply directly from that view.

I’m out of my depth here so I’ll just end with an appeal for help. It makes sense that a home server will assign its own id to an item fetched from a foreign server, and that the web app will use that id. But I’m not seeing a way to acquire that id directly from the API. I suspect it’s possible to acquire it by way of search, but doing that for every item in a timeline will quickly exhaust the tight budget for API requests (just 300 every 5 minutes).

So, Lazy Mastodon, am I just stuck here or is there a way to transform foreign status URLs into instance-relative status URLs?

Update: Solved!

After chatting with Jari Pennanen I took another look and realized the needed ID was available in the API response after all, I just wasn’t using it (facepalm). And in fact there are two flavors of the ID — one for original toots, another for boosts. Columns for both cases are added here and the tweak to make the dashboard use them here.

Here is the result.

Thanks for being my rubber duck, Jari! The instance-qualified toot and reblog URLs make this dashboard massively more useful.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Mike Jones: self-issued

OAuth DPoP Specification Addressing Area Director Review Comments


This week Brian Campbell published an updated OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) draft addressing the Area Director review comments received. Thanks to Roman Danyliw for his useful review!

As Brian wrote, updates in this version of the specification were:

Updates from Roman Danyliw’s AD review
DPoP-Nonce now included in HTTP header field registration request
Fixed section reference to URI Scheme-Based Normalization
Attempt to better describe the rationale for SHA-256 only and expectations for how hash algorithm agility would be achieved if needed in the future
Elaborate on the use of multiple WWW-Authenticate challenges by protected resources
Fix access token request examples that were missing a client_id

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-12.html
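For readers new to the draft: a DPoP proof is a JWT carried in the DPoP request header and signed with the client's key. A rough sketch of its header and claims as I read the specification follows; the key values are placeholders and the signature step is omitted.

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

func main() {
    header := map[string]interface{}{
        "typ": "dpop+jwt",
        "alg": "ES256",
        // Public key only; the coordinates here are placeholders.
        "jwk": map[string]string{"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
    }
    claims := map[string]interface{}{
        "jti": "e1j3V_bKic8-LAEB",                 // unique id for this proof
        "htm": "POST",                             // HTTP method of the request it covers
        "htu": "https://server.example.com/token", // HTTP URI of that request
        "iat": time.Now().Unix(),
        // "nonce": "...", // echoed back when the server has issued a DPoP-Nonce challenge
    }
    h, _ := json.MarshalIndent(header, "", "  ")
    c, _ := json.MarshalIndent(claims, "", "  ")
    fmt.Printf("header:\n%s\nclaims:\n%s\n", h, c)
}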

Scott Maces IT Journal

One World 65 International Adapter: One versatile little brick


Recently I had a chance to test an evaluation unit of the OneWorld 65 International Adapter, made by OneAdaptr.com. This was only the second time I had ever traveled out of the country with tech in tow. The previous time, I was toting a MacBook laptop computer, and purchased an Apple-specific adapter kit basically built to plug into half of the 10-watt USB adapter. Not as versatile, and a great deal bulkier, but it did the job for me back in 2018.

The OneWorld 65 is a nice progression and compression of key functions of an international adapter, from the jigsaw-puzzle-looking box of Apple stuff to a simple brick. It can charge up to 6 devices simultaneously and is said to work in more than 200 countries. (I will never travel that much, so I'm taking OneAdaptr.com's word for it.)

In keeping with how charging interfaces are evolving, the OneWorld 65 has two standard USB-C charging ports, two USB-A charging ports, and a 65-watt USB-C charging port for the newer laptops that support USB-C.

In late November and early December, I traveled to Austria, eventually joining the American Beethoven Society on a tour of musical landmarks, including a week in Vienna, and a side trip to Salzburg. Pictured here is the One World 65 in action in my Vienna hotel. To use, I selected between European Union, UK, or U.S. plugs (in Austria, it was EU all the way) and found the OneWorld 65 to plug easily into all such power outlets in various hotel rooms. The USB ports are on the bottom of the unit, and one is on the right side. I didn't come close to using all six outlets at once, but it's great to have the capacity.

I had one issue, and it wasn't a deal-breaker. I attempted to plug some 110-volt U.S. plugs into the outlet provided on the top of the front as pictured, but was unable to do so. When I returned to the U.S., I contacted OneAdaptr.com, and they suggested I use more force to insert such plugs. I tried various 110-volt plugs at home, and while I did have success with one such plug, I could not insert other plugs I tried, no matter how much force I applied. The blue plastic inside acts as a protection guard, and perhaps with repeated use, I will find it easier to insert any such plug.

Despite this issue, I recommend the OneWorld 65 for international travel. It's great to see so much versatility in such a small package selling for less than $70. It travels well, and I will put it through its paces again in 2023.

Friday, 30. December 2022

Bill Wendels Real Estate Cafe

Compensation Revolution in Real Estate: Will 2023 deliver long-overdue reforms & BILLIONS in consumer savings?


“What if buyer and seller real estate commissions are decoupled?” That recent question on LinkedIn was first asked & answered 31 years ago when Steve…

The post Compensation Revolution in Real Estate: Will 2023 deliver long-overdue reforms & BILLIONS in consumer savings? first appeared on Real Estate Cafe.

Thursday, 29. December 2022

Jon Udell

How many people in my Mastodon feed also tweeted today?


I last tweeted on Dec 22. (It was, unsurprisingly, a link to a blog post about Mastodon.) Today I wondered what percentage of the people who appear in my Mastodon timeline today also appeared on Twitter today.

To start, I wrote this query which tries to match Twitter and Mastodon usernames. When it finds a match, it reports the day on which that person last tweeted.

with mastodon as (
  select
    substring(username from 1 for 15) as username,         -- twitter names are max 15 chars
    'from:' || substring(username from 1 for 15) as query  -- we will query twitter using, e.g., 'from:judell'
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 500
)
select
  m.username as mastodon_person,
  t.author->>'username' as twitter_person,
  max(to_char(t.created_at, 'YYYY-MM-DD')) as last_tweet_day
from
  mastodon m
  left join twitter_search_recent t -- see https://hub.steampipe.io/plugins/turbot/twitter/tables/twitter_search_recent
  on t.query = m.query
group by
  mastodon_person,
  twitter_person
order by
  last_tweet_day desc

This is my favorite kind of Steampipe query: two different APIs, each represented as a Postgres table, combined with a SQL JOIN.

The result looks like this, with nulls for failed matches.

+-----------------+-----------------+----------------+
| mastodon_person | twitter_person  | last_tweet_day |
+-----------------+-----------------+----------------+
| AlanSill        | null            | null           |
| Colarusso       | null            | null           |
| ...             |                 |                |
| williamgunn     | null            | null           |
| xian            | null            | null           |
| ...             |                 |                |
| futurebird      | futurebird      | 2022-12-29     |
| glynmoody       | glynmoody       | 2022-12-29     |
| ...             |                 |                |
| khinsen         | khinsen         | 2022-12-23     |
| blaine          | blaine          | 2022-12-23     |
+-----------------+-----------------+----------------+

Next I created a table from the above query.

create table public.mastodon_twitter as -- sql as above

And then ran this query.

select
  last_tweet_day,
  count(*)
from
  mastodon_twitter
where
  last_tweet_day is not null
group by
  last_tweet_day
order by
  last_tweet_day desc

Here’s the result.

+----------------+-------+
| last_tweet_day | count |
+----------------+-------+
| 2022-12-29     | 36    |
| 2022-12-28     | 6     |
| 2022-12-27     | 1     |
| 2022-12-26     | 1     |
| 2022-12-25     | 2     |
| 2022-12-23     | 2     |
+----------------+-------+

The 500 toots represented here were created by 93 people who tooted today.

select count(*) from mastodon_twitter

+-------+
| count |
+-------+
| 93    |
+-------+

Of those 93 people, 48 have matching usernames.

select count(*) from mastodon_twitter where last_tweet_day is not null

+-------+
| count |
+-------+
| 48    |
+-------+

Of the 48 with matching usernames, 36 also tweeted today.

So there’s my answer: 75% of the people who appeared in my Mastodon home timeline (when I sampled it just now) also appeared on Twitter today.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Orie Steele

DID MEME


DID Meme Updates: Using a Joke Project to Experiment with Decentralized Identifiers Inside of Images

As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry. These articles are self-chosen topics an employee feels passionate about, and give you an inside look at the limitless creativity and skill the Transmute team applies to our work.

Why DID Meme?

https://didme.me

https://github.com/OR13/didme.me

I’ve made a number of updates to a joke project we’ve been using to explore experimental concepts.

https://twitter.com/OR13b/status/1594880701987430405

Benefits over Previous Versions

It builds on did:jwk which is much simpler than did:key.

See the method spec for more details:

https://github.com/quartzjer/did-jwk
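As I understand the method spec, a did:jwk is simply the public JWK itself, base64url-encoded into the method-specific identifier. A minimal sketch of that idea follows; the key material is illustrative, not a real key in use.

package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    // An example Ed25519 public JWK (illustrative values only).
    jwk := `{"kty":"OKP","crv":"Ed25519","x":"Jc5kIWh9TzkpCa7pGvvTDLYGz2ufx-rPdgFQ112QHQ8"}`
    did := "did:jwk:" + base64.RawURLEncoding.EncodeToString([]byte(jwk))
    fmt.Println(did)
}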

It uses a better steganography library that uses PNGs to transport hidden data:

https://github.com/paulmillr/steg

It uses an experimental library for post quantum cryptography, focused on representations for keys and signatures for Dilithium, Falcon and Sphincs:

https://github.com/transmute-industries/did-jwk-pqc

It uses an older weekend project we built that provides an emoji alphabet for base64url encodings:

https://github.com/OR13/demojid

We’ve enjoyed hiding public keys in images that have been generated from transformer models that are so popular on Twitter.

Here’s an example:

Orie Steele, Transmute’s CTO and Co-Founder, has managed security concerns for startups and publicly traded companies, building secure web applications in Finance, Energy, and Healthcare.

Connect with Orie on LinkedIn, Twitter, and GitHub

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

DID MEME was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 28. December 2022

Foss & Crafts

54: Oops!


Everyone goofs sometimes. Today we talk accidents... some happy, some not!

Links:

Decaf coffee and history of penicillin, your pop-sci "accidents of history" stories of the day. Look, this is admittedly kind of a fluff episode. Have we linked to Worse is Better before? We did? In the lisp episode? And here's the Terminal Phase episode

Tuesday, 27. December 2022

Jon Udell

My belated introduction to Super Mario Brothers


On a hike today my friend noticed a clump of Amanita Muscaria and took away this glorious specimen. As we headed down the trail he said: “I can never see one of these without hearing the music from Super Mario Brothers — beep, boop.” He went on to explain that the spotted red mushroom was a coveted power-up that made Mario grow large.

I knew nothing about this. I have never played Super Mario Brothers, or any other iconic game, or really any computer-based game at all, except briefly in the early 1990s when I played Rogue during an office craze that lasted a few weeks, and around the same time played Putt-Putt with our young kids.

This seems unusual for someone like me. I have spent my adult life deeply engrossed in computer-based activity, and am often described to others, by friends and family, as a “computer guy.” That makes me a geek and/or nerd by definition. But I’ve never been comfortable with either of those terms.

Over the years I’ve known a great many others in the geek/nerd tribe who self-identify as gamers. It always puzzles me. I’m perfectly happy to spend long hours in front of a screen reading or writing or coding. But games? I simply lack that gene. It baffles me how other members of the tribe — especially adults who like me already put in long hours in front of screens — could then pile up more screen time playing games, when they could be outside hiking, or cycling, or gardening, or doing really anything else at all.

Perhaps this explains my reluctance to self-identify as a member of the geek/nerd tribe. If I really belonged I’d be a gamer. Since I’m not I must be an impostor. I wonder if there’s a word — in some other language than English probably — for when you share certain core attributes with a tribe but not others?

Friday, 23. December 2022

Heres Tom with the Weather

check_webfinger!


The notes I made in Mastodon Discovery skipped over a noteworthy step. In general, after mastodon fetches and parses the “well known” webfinger document (the so-called JSON Resource Descriptor), there is a 3 step process to learn about the actor referenced in that document.

fetch_resource check_webfinger! create_account

As mentioned previously, in the first step a very comprehensive JSON document for the actor is fetched, and in the third step an account is created for that actor if it does not already exist. However, between those two steps, Mastodon does another webfinger lookup since, for instance, the domain serving the actor document may be a different domain than the one that originally served the first “well known” webfinger document. Prior to this check, some instance variables are set:

@uri = @json['id']
@username = @json['preferredUsername']
@domain = Addressable::URI.parse(@uri).normalized_host

The @uri instance variable is the location of the actor document and the @domain instance variable is the domain that serves the actor document. After these variables are set, the check is performed:

check_webfinger! unless only_key

This check enforces that the domain component of your identifier is the domain that serves your actor document. (It inspects the subject of the “well known” document and if the username and domain of the subject match the instance variables above, the ‘self’ resource link is required to be the same as the @uri instance variable. If the subject does not match, one more webfinger lookup for the redirection is allowed.)
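A rough sketch of that rule as I read it (in Go for illustration; this is not Mastodon's actual Ruby, and the account and hostnames are made up):

package main

import (
    "fmt"
    "strings"
)

// checkWebfinger mirrors the idea: the acct: subject returned by the domain
// serving the actor document must match the username/domain derived from that
// document, and the "self" link must point back at the same actor URI.
func checkWebfinger(subject, selfLink, username, domain, actorURI string) bool {
    parts := strings.SplitN(strings.TrimPrefix(subject, "acct:"), "@", 2)
    if len(parts) != 2 {
        return false
    }
    if parts[0] == username && parts[1] == domain {
        return selfLink == actorURI
    }
    // (Mastodon allows one more webfinger hop for redirection here; omitted.)
    return false
}

func main() {
    fmt.Println(checkWebfinger(
        "acct:bob@fedi.example.net",          // subject of the second webfinger response
        "https://fedi.example.net/users/bob", // its "self" link
        "bob", "fedi.example.net",            // from the actor document
        "https://fedi.example.net/users/bob", // the @uri instance variable
    ))
}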

So, from the perspective of Mastodon, the domain component of the identifier you are known by is determined by which domain serves your actor document, rather than by the domain serving the original “well known” webfinger document. It seems that if your domain is a static site and you want to be known by an identifier associated with your domain, your domain needs to serve the actor document in addition to the “well known” webfinger document.

Thursday, 22. December 2022

Heres Tom with the Weather

IndieWeb and Mastodon


Thanks to EFF-Austin for the opportunity to talk along with Jon Lebkowsky about the relationship between Indieweb and the Fediverse. (Slides and video)

The meetup Indieweb and Mastodon: The Time is Now was coincidentally in the same room as IndieWebCamp Austin 2020.


Aaron Parecki

How to create a keyboard shortcut to export the current slide in Keynote


Lately I've been using Apple Keynote to create graphics for using in videos and blog posts. It's a quick way to arrange things on a page, copying and pasting most things just works, and there are enough built in shapes and tools to get the point across. However, after spending a full day creating graphics for a video, I found myself frustrated by the number of clicks required to export a single slide at a time.

I posted a quick note about this, and immediately got a lot of helpful replies! A lot of people suggested it should be possible to automate with KeyboardMaestro, a tool I was not previously familiar with. It was also suggested that I ask on the KeyboardMaestro forums, so I did, and also quickly received a very detailed reply!

In the course of reading through these replies, I realized that the thing they all had in common was AppleScript! One reply in particular from Chris contained a pretty detailed AppleScript that appears to have been adapted from the iworkautomation website. So I took that as a starting point and wanted to see how I could make it work without also using KeyboardMaestro.

Create a Quick Action in Automator

First, open Automator and create a new Quick Action.

In the new action that is created, set the "Workflow receives" option to no input, and choose Keynote.app in the application list.

Then search for the AppleScript action and drag it into the empty grey area.

Next, replace the (* Your script goes here *) text with the following AppleScript.

AppleScript to export the current slide as a PNG in Keynote

set the defaultDestinationFolder to (path to downloads folder)

tell application "Keynote"
    activate
    try
        set currentSlideNumber to the slide number of (get the current slide of the front document)

        --- skip all but the current slide
        tell the front document
            set skipped of ¬
                (every slide where the slide number is not equal to currentSlideNumber) to true
        end tell

        --- capture the document name
        set documentName to the name of the front document
        if documentName ends with ".key" then ¬
            set documentName to text 1 thru -5 of documentName

        --- create an empty folder to temporarily store the image
        tell application "Finder"
            set newFolderName to documentName & " slide " & currentSlideNumber
            set incrementIndex to 1
            repeat until not (exists folder newFolderName of defaultDestinationFolder)
                set newFolderName to documentName & "-" & (incrementIndex as string)
                set incrementIndex to incrementIndex + 1
            end repeat
            set the targetFolder to ¬
                make new folder at defaultDestinationFolder with properties ¬
                    {name:newFolderName}
            set the targetFolderHFSPath to targetFolder as string
        end tell

        --- tell Keynote to export the current slide
        export the front document as slide images to file targetFolderHFSPath with properties ¬
            {image format:PNG, skipped slides:false}

        --- set all the slides back to unskipped
        tell the front document
            set skipped of every slide to false
        end tell

        --- move the file to the destination folder and delete the temporary folder
        tell application "Finder"
            set folderContents to every item of targetFolder
            repeat with theFile in folderContents
                set fileExtension to name extension of theFile
                set fileName to the name of theFile as string
                set splitCharacters to ".001"
                set ATID to AppleScript's text item delimiters
                set AppleScript's text item delimiters to splitCharacters
                set newFileName to text item 1 of fileName
                set AppleScript's text item delimiters to ATID
                set newFileName to newFileName & "." & fileExtension
                set name of theFile to newFileName
                -- display dialog quoted form of (name of theFile as string)
            end repeat
            move files of entire contents of targetFolder to defaultDestinationFolder with replacing
            delete targetFolder

            --- open the destination folder
            tell application "Finder"
                set thePath to file (defaultDestinationFolder & newFileName as string)
                activate
                reveal the thePath
            end tell
        end tell
    on error errorMessage number errorNumber
        display alert "EXPORT PROBLEM" message errorMessage
        error number -128
    end try
end tell

Setting up a keyboard shortcut

In the Automator app, save this quick action with a name like ExportCurrentSlide. You can quit Automator now.

Now open System Preferences, open the Keyboard options, and go to the Shortcuts tab, then click on Services. Your new action should appear at the bottom of the list.

Click on the "none" at the right side of the action and you can record a keyboard shortcut for the action. I used "Shift Command 9" for mine.

Close this, and go to Keynote to test it out!

Choose the slide you want to export and press your keyboard shortcut! You should see a quick sequence of hiding all the slides, un-hiding them, and then the Downloads folder should pop up with your file name including the Keynote file name and slide number!

There you go! A pure AppleScript solution, no third party apps needed! I just finished setting this up this morning and I'm already so much happier exporting slides now!


Jon Udell

Lists and people on Mastodon


I hadn’t thought to use Mastodon lists until I read the Frustration with lists chapter of Martin Fowler’s Exploring Mastodon, in which he writes:

I like lists because they allow me to divide up my timeline to topics I want to read about at different times. They are frustrating because the tools to manage them in Twitter are very limited, so it’s more hassle to set up the kind of environment I’d like. Mastodon also has lists, sadly its current management tools are equally bad.

This seemed like a good challenge for Steampipe. To tackle it, I first needed to add some new tables to the plugin to encapsulate the list APIs: mastodon_list and mastodon_list_account. I’ll save that story for another time. Here I’ll just show that together they enable queries like this.

select
  l.title as list,
  array_agg(a.username order by a.username) as people
from
  mastodon_list l
join
  mastodon_list_account a
on
  l.id = a.list_id
group by
  l.title

+--------------+--------------------------------------+
| list         | people                               |
+--------------+--------------------------------------+
| Academic     | ____, ______, ____, ___              |
| Education    | ___, ______ ___, ______              |
| Energy       | ___, ______, ____ __                 |
| Fediverse    | ____ __,                             |
| Humor        | ____, ____ __, ____ __               |
| Journalism   | ___ __, ___ ____, ___, ______        |
| Library      | __                                   |
| Net          | ___ __, _____, ___ __, __ __, ____   |
| Science      | __, ____ __, ______                  |
| Software     | ____ __, ______, ____ __             |
+--------------+--------------------------------------+

That’s a useful view, and I’ve now included it, but it didn’t address Martin’s specific need.

To manage these lists, I really need a display that shows every account that I follow in a table with its lists. That way I can easily see which list each account is on, and spot any accounts that aren’t on a list.

For that I needed to add a list column to the Following tab.

This was the original query.

select
  url,
  case when display_name = '' then username else display_name end as person,
  to_char(created_at, 'YYYY-MM-DD') as since,
  followers_count as followers,
  following_count as following,
  statuses_count as toots,
  note
from
  mastodon_following
order by
  person

The new version captures the above join of mastodon_list and mastodon_list_account, and joins that to the mastodon_following (people I follow) table. It’s a left join which means I’ll always get all the people I follow. If you’re not on a list, your list column will be null.

with data as (
  select
    l.title as list,
    a.*
  from
    mastodon_list l
  join
    mastodon_list_account a
  on
    l.id = a.list_id
),
combined as (
  select
    d.list,
    f.url,
    case when f.display_name = '' then f.username else f.display_name end as person,
    to_char(f.created_at, 'YYYY-MM-DD') as since,
    f.followers_count as followers,
    f.following_count as following,
    f.statuses_count as toots,
    f.note
  from
    mastodon_following f
  left join
    data d
  on
    f.id = d.id
)
select * from combined order by person

That query drives the new version of the Following tab.

It’s pretty sparse; I’ve only just begun adding people to lists. And honestly I’m not sure I’ll want to keep doing this curation: it’s the kind of thing that can become a burden, and I need to play around some more before I commit. Meanwhile, the default sort puts unlisted people first so they’re easy to find.

To provide a better way to find people who are on lists, I expanded the List tab in a couple of ways. It had included a dropdown of lists by which to filter the home timeline. Now that dropdown has counts of people on each list.

input "list" { type = "select" width = 2 sql = <<EOQ with list_account as ( select l.title from mastodon_list l join mastodon_list_account a on l.id = a.list_id ), counted as ( select title, count(*) from list_account group by title order by title ) select title || ' (' || count || ')' as label, title as value from counted order by title EOQ }

I also used this query to expand the List tab.

select
  l.title as list,
  array_to_string(
    array_agg(
      lower(a.username) order by lower(a.username)),
    ', ') as people
from
  mastodon_list l
join
  mastodon_list_account a on l.id = a.list_id
group by
  l.title

The result is the list / people table on the right.

I know that some won’t cotton to this SQL-forward programming model. But for others who will, I wanted to show a few detailed examples to give you a sense of what’s possible at the intersection of Mastodon and Steampipe.

If you’re not tuned into SQL (like I wasn’t for a very long time), here’s your takeaway: as SQL goes, this stuff is not too scary. Yes there are joins, yes there’s an array_agg that transposes a column into a list. It’s not beginner SQL. But lots of people know how to use join and array_agg in these ways, lots more could easily learn how, and with SQL ascendant nowadays these are skills worth having.
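If you want to see the array_agg move in isolation, here's a tiny self-contained example (the values are made up) that you can paste into any Postgres session. It collapses one row per (list, person) into one row per list holding an array of people, which is exactly what the list/people views above do.

select
  list,
  array_agg(person order by person) as people
from (
  values
    ('Humor',   'alice'),
    ('Science', 'bob'),
    ('Science', 'carol')
) as t(list, person)
group by list
order by list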


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/

Wednesday, 21. December 2022

@_Nat Zone

Classifying Identifiers and Their Risks: The Complete Guide to Identifiers, Part 1 ~ Now You Too Can Be an Identifier Pro!

A new video is up on my new Youtube channel, "Digital ID and Priva…

I've published a new video on my new Youtube channel, "デジタルIDとプライバシー" (Digital ID and Privacy).

It's "Classifying Identifiers and Their Risks: The Complete Guide to Identifiers, Part 1 ~ Now You Too Can Be an Identifier Pro!" It seems to have been very well received by those who have watched it.

I didn't expect that identifiers alone could be classified and systematized to this extent, so, with no flattery or favoritism, it was really good. Academic grounding matters. https://t.co/w0eVt3EqeM

— ROCA (@rocaz) December 21, 2022

The content was of course hugely educational, but the chance to ask nat-san questions in real time was beyond precious.
Also, it makes me desperately want to read Les Misérables. https://t.co/xKlwO6eqr4

— てらら (@a_terarara) December 21, 2022

I was surprised that Les Misérables depicts (mistaken) identity matching from fragmentary information. So that's what the story was about. https://t.co/w0eVt3VtgM

— ROCA (@rocaz) December 21, 2022

Please enjoy it below ↓

<Notes on the production setup>

This time I made all of the text slides in mmhmm Studio; only the ones that needed graphics were made in Google Slides, exported to Powerpoint, and then imported into mmhmm Studio. I exported to Powerpoint rather than JPEG because a JPEG can't be freely resized once it has been imported into mmhmm Studio.

The camera is the MacBook Pro's built-in camera. The microphone is an Audio Technica AT2040 and the audio interface is a Yamaha AG03 mk2. I applied no compressor or EQ. Because the mic's output level is low I had to set the gain to maximum, which introduces noise, so I ran the signal through Krisp before feeding it into mmhmm Studio.

I exported the resulting video to MP4 and loaded it into Vrew (software that lets you edit video the way you edit text) to create the subtitles. Vrew also detects silent sections, so I cut those out in one sweep; I overdid it a little and there are a few spots where the audio drops out, but it still gets the point across, so I let it be. The subtitles are AI-generated, and thanks to my poor enunciation there are quite a few mis-transcriptions. Pure-hearted viewers should still get the meaning!

I then exported the edited result from Vrew to Adobe Premiere Pro and produced the final MP4 there. Premiere Pro can also create subtitles, so that would have worked too, but I think Vrew is easier for removing silent sections.

I uploaded this with Youtube Studio and scheduled the premiere. At that point you set the countdown duration and the video that plays during it. I scheduled the premiere for 22:00 and, figuring people would start showing up about five minutes early, set the countdown to five minutes; it turns out the countdown starts at the scheduled 22:00. That's not really a countdown, is it? A trap.

A feature of a premiere1 is that while it is playing, viewers can add comments in the chat, and the chat can be played back afterwards. Once the premiere is over, no more chat can be added.

This chat playback did not appear with my default settings. After some digging, I found that it won't show up unless "Allow all comments" is turned on in the video's advanced settings; once I turned that on, it appeared.

<How to view the chat replay>

That said, even after doing this the chat replay does not immediately expand on screen. As in the following screenshot, it is shown collapsed.

(Figure) Chat replay is shown collapsed

To make it visible, you have to click "Show chat replay." Once you do, it flows alongside the video, like this.

(Figure) Chat replay, now flowing.

<Editing the video makes the chat replay disappear>

There is one more important point about getting the chat replay to show.

If you edit the video, for example by trimming the extra time at the beginning and end, the chat replay will no longer be available.

Monday, 19. December 2022

Jon Udell

Create your own Mastodon UX

I’ve been discussing Mastodon UX wishlists with some new acquaintances there. This excerpt from A Bloomberg terminal for Mastodon concludes with part of my own wishlist. In a Mastodon timeline, a chatty person can dominate what you see at a glance. When we participate in social media we are always making bids for one another’s … Continue reading Create your own Mastodon UX

I’ve been discussing Mastodon UX wishlists with some new acquaintances there. This excerpt from A Bloomberg terminal for Mastodon concludes with part of my own wishlist.

In a Mastodon timeline, a chatty person can dominate what you see at a glance. When we participate in social media we are always making bids for one another’s attention. As publishers of feeds it’s wise to consider how a flurry of items can overwhelm a reader’s experience. But it’s also useful to consider ways that feed readers can filter a chatty source. Steampipe’s SQL foundation affords an easy and natural way to do that. Here’s part of the query that drives the list view.

select distinct on (list, person, hour) -- only one per list/user/hour
  person,
  url,
  hour,
  toot
from data
order by hour desc, list, person

It was easy to implement a rule that limits each person to at most one toot per hour. Next steps here will be to apply this rule to other views, show the number of collapsed toots, and enable such rules on a per-person basis.

As a warmup exercise, I decided to first add a simple control for boosts that enables me to see my home timeline with or without boosts. To give technically-inclined readers a sense of what’s involved in doing this kind of thing with Steampipe, I’ll describe the changes here. I’m obviously biased but I find this programming environment to be accessible and productive. If it seems that way to you as well, you might want to try out some of the items on your own UX wishlist. And if you do, let me know how it goes!

Here are the original versions of the two files that I changed to add the new feature. First there’s home.sp which defines the dashboard for the home timeline.

dashboard "Home" { tags = { service = "Mastodon" } container { // a text widget with the HTML links that define the menu of dashboards } container { text { // a block that displays the HTML links that form a menu of dashboards } card { // a block that reports the name of my server } input "limit" { width = 2 title = "limit" sql = <<EOQ with limits(label) as ( values ( '50' ), ( '100' ), ( '200' ), ( '500' ) ) select label, label::int as value from limits EOQ } } container { table { title = "home: recent toots" query = query.timeline args = [ "home", self.input.limit ] column "person" { wrap = "all" } column "toot" { wrap = "all" } column "url" { wrap = "all" } } } }

And here’s the new version. It adds an input block called boosts, and passes its value to the referenced query.

dashboard "Home" { tags = { service = "Mastodon" } container { // a text widget with the HTML links that define the menu of dashboards } container { text { // a block that displays the HTML links that form a menu of dashboards } card { // a block that reports the name of my server } input "limit" { // as above } input "boosts" { width = 2 title = "boosts" sql = <<EOQ with boosts(label, value) as ( values ( 'include', 'include' ), ( 'hide', ' ' ), ( 'only', ' 🢁 ' ) ) select label, value from boosts EOQ } } container { table { // as above args = [ "home", self.input.limit, self.input.boosts ] } } }

Steampipe dashboards are built with two languages. HCL (Hashicorp configuration language) defines the UX widgets, and SQL fills them with data. In this case we’re selecting static values for the boosts input. But any Steampipe query can run there! For example, here is the input block I use on the dashboard that filters the timeline by the list to which I’ve assigned people.

input "list" { type = "select" width = 2 title = "search home timeline" sql = <<EOQ select title as label, title as value from mastodon_list order by title EOQ }

Now here is the referenced query, query.timeline, from the file query.sp which contains queries used by all the dashboards.

query "timeline" { sql = <<EOQ with toots as ( select account_url as account, case when display_name = '' then user_name else display_name end as person, case when reblog -> 'url' is null then content else reblog_content end as toot, to_char(created_at, 'MM-DD HH24:MI') as created_at, case when reblog -> 'url' is not null then '🢁' else '' end as boosted, case when in_reply_to_account_id is not null then ' 🢂 ' || ( select acct from mastodon_account where id = in_reply_to_account_id ) else '' end as in_reply_to, case when reblog -> 'url' is not null then reblog ->> 'url' else url end as url from mastodon_toot where timeline = $1 limit $2 ) select account, person || case when in_reply_to is null then '' else in_reply_to end as person, boosted || ' ' || toot as toot, url from toots order by created_at desc EOQ param "timeline" {} param "limit" {} }

And here is the new version of that query.

query "timeline" { sql = <<EOQ with toots as ( // as above ), boosted as ( select $3 as boost, boosted, account, in_reply_to, person, toot, url from toots order by created_at desc ) select account, person || case when in_reply_to is null then '' else in_reply_to end as person, boosted || ' ' || toot as toot, url from boosted where boost = boosted or boost = 'include' or boost = 'n/a' EOQ param "timeline" {} param "limit" {} param "boost" {} }

The original version uses a single CTE (aka common table expression, aka WITH clause), toots, to marshal data for the concluding SELECT. The new version inserts another CTE, boosted, into the pipeline. It uses $3 to reference param "boost" {}, which maps to the self.input.boosts passed from home.sp.

The SQL code is all standard. Postgres is the engine inside Steampipe, and I sometimes use Postgres-specific idioms, but I don’t think any of those are happening here.

The HCL code may be unfamiliar. Steampipe uses HCL because its core audience is DevSecOps pros who are familiar with Terraform, which is HCL-based. But it's a pretty simple language that can be used to describe all kinds of resources. Here the resources are widgets that appear on dashboards.

The other thing to know, if you want to roll up your sleeves and try building your own dashboards, is that the developer experience is — again in my biased opinion! — pretty great because if you’re using an autosaving editor you’ll see your changes (to both HCL and SQL code) reflected in realtime.

To illustrate that, here’s the screencast we included in our blog post introducing the dashboard system.

Not shown there, because we wanted to focus on the happy path, is realtime feedback when your SQL queries provoke Postgres errors. The experience feels very much like the one Bret Victor champions in Inventing on Principle. The core principle: “Creators need an immediate connection to what they’re creating.”

Here’s the wrong way that too often constrains us.

If there’s anything wrong with the scene, or if I go and make changes, or if I have further ideas, I have to go back to the code, and I edit the code, compile and run, see what it looks like. Anything wrong, I go back to the code. Most of my time is spent working in the code, working in a text editor blindly, without an immediate connection to this thing, which is what I’m actually trying to make.

And here is the right way.

I’ve got this picture on the side, and the code on the side, and this part draws the sky and this draws the mountains and this draws the tree, and when I make any change to the code, the picture changes immediately. So the code and the picture are always in sync; there is no compile and run. I just change things in the code and I see things change in the picture.

We want to work the right way wherever we can. The experience isn’t available everywhere, yet, but it is available in Steampipe where it powerfully enables the experimentation and prototyping that many of us are inspired to do as we delve into Mastodon.

If you want to try this for yourself, please check out the setup instructions for the plugin that maps Mastodon APIs to Postgres tables, and the dashboards that use those tables, and ping me (on Mastodon if you like!) with any questions you may have.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/

Sunday, 18. December 2022

Jon Udell

A Bloomberg terminal for Mastodon

As I mentioned last time, the Steampipe dashboard for Mastodon has evolved in unexpected ways. I imagined that the components — a

As I mentioned last time, the Steampipe dashboard for Mastodon has evolved in unexpected ways. I imagined that the components — a plugin that maps Mastodon APIs to Postgres foreign tables, and a suite of views that query the APIs — would combine to enable a broad overview of activity in the Fediverse. That didn’t pan out for two reasons.

First, I learned that the Mastodon community didn’t appreciate the kind of surveillance required for such analysis. That was the original community, I should stress, and things have changed dramatically, but I want to respect the original ethos. Plenty of people will, nevertheless, crawl and index the Fediverse, but I don’t need to put my shoulder to that wheel. And if I did I’d be pushing Steampipe out of its sweet spot: realtime acquisition, querying, and visualization of API-sourced data.

Second, Mastodon’s API allows 300 requests every 5 minutes. You can use Steampipe in batch mode to defeat that limit, and you can store data permanently in its Postgres database, but that cuts across the grain with respect to both Steampipe and Mastodon. All Mastodon clients are subject to the same API rate limit. If you use the web app, or one of the phone apps, you will likely never have seen a message announcing that you’ve hit the limit and need to wait a few minutes. I never saw that message until I started querying the API with Steampipe while also using the web app.

So if Mastodon culture and tech resist deep data mining, and the system is optimized for clients that live within an API budget of 300 requests every 5 minutes, what kind of Mastodon client could Steampipe enable? It wouldn’t be a conventional client because Steampipe is a read-only system. The path forward would be some kind of reader, or browser, that augments the interactive apps.

The outcome, so far, is a suite of dashboards that display tabular views (along with some charts) of the home, local, and federated timelines, of my toot history and my favorites, of my follows and followers, of my notifications, of searches for terms, people, and hashtags, and of the timelines formed by the lists to which I’ve assigned people I follow. These are all HTML tables rendered by Steampipe’s dashboard server. The columns are all sortable, and the cells of the tables can contain only links or plain text.

Given that the toot content returned from the Mastodon API is HTML, the plain-text-only constraint felt, initially, like a blocker. No images? No links in toot content? What good is that?

Some constraints are worth embracing, though, and that may prove true here. The views created this way put a lot of information onto the screen. Here’s my default view in the stock client.

At a glance I can see three items on the home timeline, and if I want to scroll through 100 items I can only do so awkwardly in small gulps.

Here’s my home timeline in the Steampipe dashboard. I can see a dozen items at a glance, and can easily scan 100 items in gulps of that size.

When I described this effect to Greg Wilson he gave me the title for this post: “That sounds like the Bloomberg terminal for Mastodon.” I’ve never used one, and I’m aware that its design is often derided as a UX disaster, but as I understand it the product is built to enable traders to scan fast-moving data feeds from many different sources. In that sense I do think it’s an interesting and useful comparison.

The underlying principle is one I’ve learned from Edward Tufte: present information at maximum density. Our brains are built to take in a lot of information at a glance, and if it’s organized well we can do that very effectively. It feels like that’s happening for me when I scan these densely-packed views of Mastodon activity.

To enhance the effect, I’ve begun to apply filters. In a Mastodon timeline, for example, a chatty person can dominate what you see at a glance. When we participate in social media we are always making bids for one another’s attention. As publishers of feeds it’s wise to consider how a flurry of items can overwhelm a reader’s experience. But it’s also useful to consider ways that feed readers can filter a chatty source. Steampipe’s SQL foundation affords an easy and natural way to do that. Here’s part of the query that drives the list view.

select distinct on (list, user_name, person, hour) -- only one per list/user/hour
  person,
  url,
  hour,
  toot
from data
order by hour desc, list, person

It was easy to implement a rule that limits each person to at most one toot per hour. Next steps here will be to apply this rule to other views, show the number of collapsed toots, and enable such rules on a per-person basis.
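One possible way to show the number of collapsed toots is a window function that ranks each person's toots within the hour, keeps the first, and counts the rest. This is only a sketch over made-up inline data, not the dashboard's actual query:

with data(list, person, url, hour, toot) as (
  values
    ('Science', 'alice', 'https://example.org/1', '2022-12-17 09:00', 'first toot'),
    ('Science', 'alice', 'https://example.org/2', '2022-12-17 09:00', 'second toot'),
    ('Science', 'bob',   'https://example.org/3', '2022-12-17 09:00', 'another toot')
),
ranked as (
  select
    *,
    row_number() over (partition by list, person, hour order by url) as rn,
    count(*)     over (partition by list, person, hour)              as toots_this_hour
  from data
)
select
  list,
  person,
  url,
  hour,
  toot,
  toots_this_hour - 1 as collapsed  -- how many toots were hidden for this list/person/hour
from ranked
where rn = 1
order by hour desc, list, person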

There are always links into the Mastodon web app, and I follow them when I want to view images, boost someone, or reply to someone. The dashboards help me scan a lot of Mastodon activity quickly, and decide which items I want to interact with. Your 500-character toot is all you’ve got to grab my attention, and I’ll only see it as an unformatted chunk of plain text. That’s a pretty severe constraint, and not everyone will want to embrace it, but it’s working pretty well for me so far.

I expect that our dashboard system will support formatted text and images in cells of HTML tables. When it does I’d like to make it an option you can turn on or off in Mastodon dashboards. What should the default be? I suspect I’ll want plain text and no images, especially if image captions can appear along with the text of toots. Some of the original Mastodon cultural norms aren’t surviving the onslaught of new people, but writing descriptions of images is one that’s held up so far, and it’s a wonderful thing. So write a short thoughtful post, write a caption for your image if you include one, and if you capture my attention I’ll click through to view and interact.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/

Friday, 16. December 2022

Bill Wendels Real Estate Cafe

Boston Realty Party: What should be thrown overboard to reform the real estate cartel?

OPEN LETTER to Consumer Advocacy Community: (Excerpt from email to legal aid organization) As #PriceCorrrection2022 unfolds, the magnitude of consumer harm is growing and regulatory… The post Boston Realty Party: What should be thrown overboard to reform the real estate cartel? first appeared on Real Estate Cafe.

OPEN LETTER to Consumer Advocacy Community: (Excerpt from email to legal aid organization) As #PriceCorrrection2022 unfolds, the magnitude of consumer harm is growing and regulatory…

The post Boston Realty Party: What should be thrown overboard to reform the real estate cartel? first appeared on Real Estate Cafe.

Phil Windleys Technometria

Wallets and Agents

Summary: This post is adapted from my forthcoming book, Learning Digital Identity, from O'Reilly Media. Our physical wallets are, historically, for holding currency. But that may be the least interesting use case for wallets. Many of the things people put in their wallets represent relationships they have and authorizations they hold. Most people don't often leave home without thei

Summary: This post is adapted from my forthcoming book, Learning Digital Identity, from O'Reilly Media.

Our physical wallets are, historically, for holding currency. But that may be the least interesting use case for wallets. Many of the things people put in their wallets represent relationships they have and authorizations they hold. Most people don't often leave home without their wallet.

But the analogy to a physical wallet can only take us so far, because as physical beings, our natural capabilities are manifold. In the digital world, we need tools to accomplish almost anything useful. The name wallet1 for the software we use to interact digitally doesn't do the tool justice.

A digital identity wallet is a secure, encrypted database that collects and holds keys, identifiers, and verifiable credentials (VCs). The wallet is also a digital address book, collecting and maintaining its controller's many relationships. The wallet is coupled with a software agent that speaks the protocols necessary to engage with others.

Wallets and agents are not the same thing, even though they're often conflated. Agents are tools for taking action. Wallets are where stuff is stored. Still, most people just say "wallet," even when they mean "wallet and agent." For this post, when I say "wallet" I mean wallet and when I say "agent" I mean agent.

Identity agents are software services that manage all the stuff in the wallet. Agents store, update, retrieve, and delete all the artifacts that a wallet holds. Beyond managing the wallet, agents perform many other important tasks:

Sending and receiving messages with other agents
Requesting that the wallet generate cryptographic key pairs
Managing encrypted data interactions with the wallet
Performing cryptographic functions like signing and verifying signatures
Backing up and retrieving data in the wallet
Maintaining relationships by communicating with other agents when DID documents are updated
Routing messages to other agents

The relationship between identity wallets and agents

This figure shows the relationship between an agent, a wallet, and the underlying operating system. While most current implementations pair a single agent with a single wallet, the presence of an API means that it's possible for one agent to use several wallets, or for multiple agents to access one wallet. Some specialized agents might not even need a wallet, such as those that just perform routing, although most will at least need to store their own keys.

The key-management functions in the wallet include actions on cryptographic keys like generation, storage, rotation, and deletion. Key management is performed in cooperation with the operating system and underlying hardware. Ideally, the operating system and hardware provide a secure enclave for key storage and a trusted execution environment for performing key-management functions.

The basic functions shown in the diagram might not seem to have much to do with identity. Identity-related activities like authentication and credential exchange are built on top of these basic functions. The agent can issue, request, and accept VCs. The agent also presents and verifies credentials. Specialized messages perform these activities.

Agents and Credential Exchange

Agents speak a protocol called DIDComm (DID-based communication) that provides a secure communications layer for the exchange of identity information via verifiable credentials (VCs). Agents speak DIDComm to each other without a third-party intermediary (i.e., they're peer-to-peer). Because of DIDComm's flexibility and the ability to define protocols on top of DIDComm messaging, it promises to be as important as the identity layer it enables. The DIDComm protocol is governed by the DIDComm specification, hosted at the Decentralized Identity Foundation. The current ratified version is 2.0.

The specification's opening sentence states that "the purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs." Note that the specification describes DIDComm as a communications methodology. This means that DIDComm is more than just a way to send a message or chat with someone else. DIDComm messaging allows individual messages to be composed into application-level protocols and workflows. This makes DIDComm messaging a foundational technology for performing different kinds of interactions within the framework of trust that a DID-based relationship implies.

To enable the exchange of verifiable credentials, the agent, using the wallet as secure storage, performs three primary activities:

Exchanging DIDs with other agents
Requesting and issuing credentials
Requesting and presenting credential proofs

The agent does these activities using protocols that run on top of DIDComm. DIDComm's job is to create a secure, mutually authenticated channel for exchanging DIDComm messages. The protocols that operate inside of it carry out specific activities.

Exchanging DIDs

Agents take care of the tedious and tricky job of exchanging DIDs between parties who want to communicate, so that people don't have to get entangled in the details of how DIDs work: how they're created, stored, and validated, or the work that's necessary when one of the parties needs to rotate keys. The DIDComm v2 spec is capable of exchanging DIDs without a separate protocol, so the process can be automated by smart identity agents working on behalf of the various parties.

Requesting and Issuing Credentials

Requesting and issuing credentials is defined in Aries RFC 0036: Issue Credential Protocol 1.0. The protocol "formalizes messages used to issue a credential." The protocol describes four primary messages: propose-credential, offer-credential, request-credential, and issue-credential. The protocol also defines the state machine that the agent operates in response to these messages. These messages, combined with the state machine, allow the credential issuer and the credential holder to engage in the ceremonies necessary for the issuer to issue a credential to the holder.

Requesting and Presenting Credential Proofs

Requesting and presenting credential proofs is defined in Aries RFC 0037: Present Proof Protocol 1.0. The protocol formalizes and generalizes message formats used for presenting a proof of the attributes in a credential. The protocol describes three primary messages: propose-proof, request-proof, and present-proof. The protocol also defines the state machine that the agent operates in response to these messages. These messages and state machine allow the credential holder and the credential verifier to engage in the ceremonies necessary for the holder to present a credential proof to the verifier.

The Nature of Wallets and Agents

Agents and wallets, working together, perform the work necessary for people, businesses, and devices to create mutually-authenticated, secure connections and use those connections to exchange verifiable credentials. People, businesses, and devices all have different needs and so they'll use different agents and wallets.

People will generally use agents and wallets running on smart phones, laptops, or other personal devices. Your Amazon Alexa, for example, could have an agent/wallet pair installed on it to act on your behalf. Most people will have agents on every device. Most of these will have wallets associated with them. Wallets will use device secure enclaves to store sensitive cryptographic information. People will also have agents and wallets in the cloud. All of the agents and wallets under a person's control will interoperate with each other and perform different roles. For example, cloud-based agents are needed to route DIDComm messages to devices that may not have a routable IP address.

Businesses will use enterprise agents that are integrated with other enterprise systems like CRM, ERP, and IAM systems. The wallets associated with these will be more sophisticated than personal wallets since they have to manage DIDs and their associated keys that various employees, departments, and processes use. The ability to delegate authority and permission actions will be more rigorous than is needed in a personal wallet. A large business might operate thousands of enterprise agents for various business purposes.

Devices will use agents with associated wallets to create relationships and perform credential exchange with the device owner, other devices, their manufacturer, and other people or companies. How they operate and their sophistication depend in great measure on the nature of the device and its expected function. I wrote about the reasons for using agents as part of IoT devices in The Self-Sovereign Internet of Things.

Despite differences that these agents exhibit, they all run the same protocols and use DIDComm messaging. There are no intermediaries—the connections are all peer-to-peer. Every agent works on behalf of the entity who controls it. To get a feel for how they might interoperate, see Operationalizing Digital Relationships and SSI Interaction Patterns.

DIDComm-capable agents can be used to create sophisticated relationship networks that include people, institutions, and things. The relationships in that network are rich and varied—just like relationships in the real world. Smart agents allow people, businesses, and devices to create, manage, and utilize secure, trustworthy communications channels with anyone online without reliance on any third party. The agent serves as a flexible digital tool that people can use to manage their digital life.

Notes

1. I've heard various people object to the term wallet, but so far, no one has come up with anything else that has stuck, so for now, wallet is the word the industry uses.

Tags: ssi identity wallets agents verifiable+credentials didcomm

Thursday, 15. December 2022

Altmode

Civil Rights Journey Day 10: Montgomery to Home

Thursday, October 27, 2022 With the other members of our group having already departed, we joined our friends Liz and Peter to explore a few more places in Montgomery. We started our day at the Civil Rights Memorial Center, part of the Southern Poverty Law Center headquarters. The Memorial Center wasn’t large, but had some […]

Thursday, October 27, 2022

With the other members of our group having already departed, we joined our friends Liz and Peter to explore a few more places in Montgomery. We started our day at the Civil Rights Memorial Center, part of the Southern Poverty Law Center headquarters. The Memorial Center wasn’t large, but had some excellent multimedia presentations on the civil rights struggle, both historical and ongoing.

From there we walked over to the former Greyhound bus station associated with the Freedom Riders. The station is now a small museum, the Freedom Rides Museum, and has signs on the outside with a detailed history. While there, a Black woman drove up to take some video of the bus station and we had a somewhat surprising conversation with her. She was not entirely pleased with the placement of all of the civil rights museums and displays in Montgomery — she thought that the Black community needed to strengthen themselves and didn’t need to be constantly reminded of their struggles.

Freedom Rides Museum

From there we walked to the Rosa Parks Museum a few blocks away. The museum tour consisted of a couple of multimedia presentations describing the background and events of the day she refused to give up her seat on a city bus, which gave us a much more detailed understanding of the events of that day and the questionable legality of her arrest. This was followed by a short self-guided tour of artifacts relating to the tension between the Black community and the police.

It was then about time to make our way to the airport for our flights home. We returned to our hotel to retrieve our bags and got a Lyft ride to the airport. The Montgomery Regional Airport is fairly small, with about five gates, and is located only about 15 minutes from downtown. TSA inspection was quick and we had a small lunch in the airport’s one restaurant. Co-located with the airport is an Alabama Air National Guard base, which surprised us as several pairs of fighter planes — F-16s I think — took off with considerable noise and speed.

This article is the final installment in a series about our recent travels to the US South. To see the introductory article in the series, click here.

Wednesday, 14. December 2022

Altmode

Civil Rights Journey Day 9: Montgomery

Wednesday, October 26, 2022 Today was the last day of the official tour (we are staying a day longer) and a very powerful day. After breakfast at the hotel, we went to the Legacy Museum, a large and modern museum established by the Equal Justice Initiative to tell the story of Black experience from enslavement […]

Wednesday, October 26, 2022

Today was the last day of the official tour (we are staying a day longer) and a very powerful day. After breakfast at the hotel, we went to the Legacy Museum, a large and modern museum established by the Equal Justice Initiative to tell the story of Black experience from enslavement to mass incarceration (the current situation). The museum proceeded roughly chronologically from the slave trade, slavery, the Civil War, reconstruction, Jim Crow segregation, and lynchings, to the current disproportionate sentencing and incarceration of Black people. The images and words used in the museum were very strong and intentionally uncomfortable for many. We were somewhat prepared by our recent experience, but the story was very uncomfortable nonetheless.

Following the museum, we traveled to another of Equal Justice Initiative’s projects, the National Memorial for Peace and Justice. The memorial, which is outdoors, includes hundreds of large corroded iron rectangular blocks that, by county, list the names and dates of lynching victims up to 1950. There was also a section to recognize counties and states that had put up markers recognizing the deaths of these victims. I was struck by the number of markers and names of people who had been lynched; this presentation of the names helps one appreciate the magnitude of the lynching problem.

Memorial blocks at the National Memorial for Peace and Justice
Sculptures at Michelle Browder’s studio

We then traveled to the studio (for lack of a better name) of Michelle Browder, who led us on our tour yesterday, for lunch and discussion. After lunch, Michelle showed us her artwork, which included large sculptures and an old car that had been intricately decorated by use of a plasma cutter on its body. Her artwork and advocacy were recently highlighted in People Magazine.

We then sat down for a final debrief on our memories of the week before the first of our co-travelers had to leave for the airport.

Afterward, our friends Peter and Liz joined us in visiting the Museum of Alabama, located in the state archives building near the capitol. The museum was well presented, but we were now better equipped to read between the lines of many of the descriptions. For example, “commerce” often meant the slave trade.

After returning to the hotel to clean up, we went to a nearby brewpub, the Tower Taproom. It had quite good pub food, and an interesting array of self-service taps that were enabled using a card they issued at the cash register. The taps measured beer (a good selection, plus a few wines and ciders) by the ounce, and you were charged for the amount you poured.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.


@_Nat Zone

I've started a Japanese-language Youtube channel, "#デジタルIDとプライバシー" (Digital ID and Privacy)

After a long stretch of promising to do it and never delivering, the Japanese-language Youtube chann…

After a long stretch of promising to do it and never delivering, I've finally opened my Japanese-language Youtube channel. Please subscribe. For now the channel name is "デジタルIDとプライバシー" (Digital ID and Privacy). Apparently it's better to include an action word in the name, something like "How to master ..." or "Understanding ...", but I couldn't come up with a good one, so this is it for now. Suggestions are welcome.

https://www.youtube.com/@NatSakimura100

Producing fully polished videos is a heavy lift, so I'm thinking of a style where I livestream and then cut clips out of that, but having no videos up at all felt lonely, so I've posted one. It traces identity from its Latin etymology all the way to digital identity, in about ten minutes. Enjoy it over your lunch break.

As for livestream topics, I think answering questions would be the easiest format… I'm taking suggestions for those too, so please don't hold back.

The Twitter hashtag is #デジタルIDとプライバシー. For now, at least.


reb00ted

The Push-Pull Publish-Subscribe Pattern (PuPuPubSub)

(Updated Dec 14, 2022 with clarifications and a subscriber implementation note.) Preface The British government clearly has more tolerance for humor when naming important things than the W3C does. Continuing in the original fashion, thus this name. The Problem The publish-subscribe pattern is well known, but in some circumstances, it suffers from two important problems: When a subscr

(Updated Dec 14, 2022 with clarifications and a subscriber implementation note.)

Preface

The British government clearly has more tolerance for humor when naming important things than the W3C does. Continuing in the original fashion, thus this name.

The Problem

The publish-subscribe pattern is well known, but in some circumstances, it suffers from two important problems:

When a subscriber is temporarily not present, or cannot be reached, sent events are often lost. This can happen, for example, if the subscriber computer reboots, falls off the network, goes to sleep, has DNS problems and the like. Once the subscriber recovers, it is generally not clear what needs to happen for the subscriber to catch up to the events it may have missed. It is not even clear whether it has missed any. Similarly, it is unclear for how long the publisher needs to retry to send a message; it may be that the subscriber has permanently gone away.

Subscriptions are often set up as part of the following pattern:

A resource on the Web is accessed. For example, a user reads an article on a website, or a software agent fetches a document.
Based on the content of the obtained resource, a decision is made to subscribe to updates to that resource. For example, the user may decide that they are interested in updates to the article on the website they just read.
There is a time lag between the time the resource has been accessed, and when the subscription becomes active, creating a race condition during which update events may be missed.

While these two problems are not always significant, there are important circumstances in which they are, and this proposal addresses those circumstances.

Approach to the solution

We augment the publish-subscribe pattern in the following way:

All events, as well as the content of the resource whose changes are supposed to be tracked, are time-stamped. Also, each event identifies the event that directly precedes it (that way, the subscriber can detect if it missed something). Alternatively, a monotonically increasing sequence number could be used.

The publisher stores the history of events emitted so far. For efficiency reasons, this may be shortened to some time window reaching to the present, as appropriate for the application; for example, all events in the last month. (Similar to how RSS/Atom feeds are commonly implemented.)

The publisher provides the subscriber with a query interface to that history, with a “since” time parameter, so the subscriber can obtain the sequence of events emitted since a certain time. (“Since” here means strictly after the provided time, not including the provided time itself.)
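To make the shape of that history and query interface concrete, here is a minimal SQL sketch; the table and column names are purely illustrative and not part of the proposal:

-- hypothetical publisher-side event history
create table if not exists event_history (
  seq         bigserial   primary key,  -- monotonically increasing sequence number
  resource_id text        not null,     -- which resource the event belongs to
  emitted_at  timestamptz not null,     -- the event's time stamp
  payload     jsonb       not null      -- the event content
);

-- the "since" query: events for resource R emitted strictly after the provided time
select seq, emitted_at, payload
from event_history
where resource_id = 'R'
  and emitted_at > '2021-04-23 23:00:00+00'
order by emitted_at;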

When subscribing, in addition to the callback address, the subscriber provides to the publisher:

a time stamp, and a subscription id.

Further, the actual sending of an event from the publisher to the subscriber is considered to be a performance optimization, rather than core to the functionality. This means that if the event cannot be successfully conveyed (see requirements above), it is only an inconvenience and inefficiency rather than a cause of lost data.

Details

About the race condition

The future subscriber accesses resource R and finds time stamp T0. For example, a human reads a web page whose publication date is April 23, 2021, 23:00:00 UTC.

After some time passes, the subscriber decides to subscribe. It does this with the well-known subscription pattern, but in addition to providing a callback address, it also provides time stamp T0 and a unique (can be random) subscription id. For example, a human’s hypothetical news syndication app may provide an event update endpoint to the news website, and time T0.

The publisher sets up the subscription, and immediately checks whether any events should have been sent between (after) T0 and the present. (It can do that because it stores the update history.) If so, it emits those events to the subscriber, in sequence, before continuing with regular operations. As a result, there is no more race condition between subscription and event.

When sending an event, the publisher also sends the subscription id.

About temporary unavailability of the subscriber

After a subscription is active, assume the subscriber disappears and new events cannot be delivered. The publisher may continue to attempt to deliver events for as long as it likes, or stop immediately.

When the subscriber re-appears, it finds the time of the last event it had received from the publisher, say time T5. It queries the event history published by the publisher with parameter T5 to find out what events it missed. It processes those events and then re-subscribes with a later starting time stamp corresponding to the last event it received (say T10). When it re-subscribes, it uses a different subscription id and cancels the old subscription.

After the subscriber has re-appeared, it ignores/rejects all incoming events with the old subscription id.

Subscriber implementation notes

The subscriber receives events exclusively through a single queue for incoming events. This makes implementing an incoming-event handler very simple, as it can simply process events in order.

The event queue maintains the timestamp of the last event it successfully added. When a new event arrives, the queue accepts this event but only if the new event is the direct follower of the last event it successfully added. If it is not, the incoming event is discarded. (This covers both repeatedly received events and when some events were missed.)

The subscriber also maintains a timer with a countdown from the last time an event was successfully added to the incoming queue. (The time constant of the timer is application-specific, and may be adaptive.) When the timeout occurs, the subscriber queries the publisher, providing the last successful timestamp. If no updates are found, nothing happens. If updates are found, it is fair to consider the existing subscription to have failed. Then:

The subscriber itself inserts the obtained “missed” events into its own incoming event queue from where they are processed.
The subscriber cancels the existing subscription.
The subscriber creates a new subscription, with the timestamp of the most recent successfully-inserted event.

Observations

Publishers do not need to remember subscriber-specific state. (Thanks, Kafka, for showing us!) That makes it easy to implement the publisher side.

From the perspective of the publisher, delivery of events to subscribers that can receive callbacks, and those that need to poll, both work. (It sort of emulates RSS, except that a starting time parameter is provided by the client instead of a uniform window decided on by the publisher as in RSS.)

Subscribers only need to keep a time stamp as state, something they probably have already anyway.

Subscribers can implement a polling or push strategy, or dynamically change between those, without the risk of losing data.

Publishers are not required to push out events at all. If they don’t, this protocol basically falls back to polling. This is inefficient but much better than the alternative and can also be used in places where, for example, firewalls prevent event pushing.

Feedback?

Would love your thoughts!

Tuesday, 13. December 2022

Altmode

Civil Rights Journey Day 8: Selma to Montgomery

Tuesday, October 25, 2022 After getting up and packing for today’s trip to Montgomery, we loaded up the bus and made the very short trip to the Selma Center for Nonviolence, Truth, and Reconciliation (SCNTR). We had breakfast at the Center, followed by training, focusing on the meaning and application of nonviolence, from the Center’s […]

Tuesday, October 25, 2022

After getting up and packing for today’s trip to Montgomery, we loaded up the bus and made the very short trip to the Selma Center for Nonviolence, Truth, and Reconciliation (SCNTR). We had breakfast at the Center, followed by training, focusing on the meaning and application of nonviolence, from the Center’s director, Ainka Jackson. One of the major themes was the difference between nonviolence, which is an active role, and the passive role of non-violence (or avoidance of violence).

Lunch was also at SCNTR and featured a powerful presentation from Callie Greer on nonviolence and forgiveness. Many years ago, her son was shot and killed in an argument with one of his peers. When the perpetrator was put on trial, she asked in her victim statement that he not be sentenced to death or to life in prison, but rather be given a minimum sentence. The judge, stunned, complied. She met the perpetrator after he had served his prison time and asked for his mother to contact her. Callie and the perpetrator’s mother continue talking to this day.

Another situation that Callie related was that her daughter found a breast lump. Due to the lack of appropriate healthcare, her cancer was not diagnosed until it had progressed too far and the daughter died. She is finding it difficult to forgive the healthcare system (specifically, the lack of Medicaid in Alabama) for this.

Alabama River and Pettus Bridge seen from SCNTR
Dexter Avenue King Memorial Baptist Church

After the lunch discussion, we rode the bus across the Pettus bridge and on to Montgomery, Alabama. Downtown Montgomery is a considerably more attractive capital than downtown Jackson, probably due to their focus on attracting conventions and other travelers. When we arrived, we met Michelle Browder, who in addition to being our tour guide is an entrepreneur. She is focused on telling a more complete story about the role of Black women as test subjects in the development of gynecological treatment, and how they were victimized in undergoing this experimentation.

Michelle led us on a tour that began at the waterfront of the Alabama River, and gave us a lesson on reading between the lines of the historical markers in town. She pointed out in particular three markers that highlighted the slave trade and Montgomery’s role in it. There were to be more of these markers, but of course the city stopped the project.

After the tour, we returned to check into the hotel (the Renaissance, probably the nicest hotel in town), and gathered at Central, a nearby restaurant that is also among the best. This was the final group dinner of the trip, because most of the group will be leaving tomorrow afternoon.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Monday, 12. December 2022

Altmode

Civil Rights Journey Day 7: Jackson to Selma

Monday, October 24, 2022 After a buffet breakfast at our hotel, we had an hour or so with Mike Espy, former Secretary of Agriculture, Congressman, and Senate candidate. Secretary Espy discussed the political climate in Mississippi in quite some detail, and made it yet more clear that Mississippi politics are more complicated than one might […]

Monday, October 24, 2022

After a buffet breakfast at our hotel, we had an hour or so with Mike Espy, former Secretary of Agriculture, Congressman, and Senate candidate. Secretary Espy discussed the political climate in Mississippi in quite some detail, and made it yet more clear that Mississippi politics are more complicated than one might think, and civil rights is not a lost cause.

We then boarded our bus for the three-hour bus ride to Selma, Alabama. Selma today is a somewhat depressed town of about 18,000 having one of the highest murder rates in the state. In the 1960s, Selma was an important site for the civil rights movement. In 1965, attempts were made to march from here to the state capitol in Montgomery to demand voting rights. Initially this Freedom March was blocked by police and resulted in considerable injury to the protesters. Eventually a successful march was held following issuance of a court order.

Upon arrival, we visited the By the River Center for Humanity, a mixed-use community facility. They served us a delicious lunch which was followed by a talk by JoAnne Bland, one of the participants in the Freedom March. She shared with us some of her vision for Selma, and led us in a bit of singing and dancing as well.

JoAnne Bland, talking about Selma
Newly restored St. James Hotel

We then walked across the famous Edmund Pettus Bridge, and visited the National Voting Rights Museum just across the river from Selma. The museum had artifacts and pictures from the Freedom March, as well as considerable material on subsequent visits to Selma by presidents, particularly Bill Clinton and Barack Obama.

After a short break at our hotel (the St. James hotel, recently renovated by Hilton), we took a short walk to The Revolt Selma, a new restaurant opened by a Black entrepreneur, for a buffet-style steak dinner.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Sunday, 11. December 2022

Werdmüller on Medium

Things I’ve learned about parenting

It’s the hardest thing I’ve ever done. Continue reading on Medium »

It’s the hardest thing I’ve ever done.

Continue reading on Medium »


Altmode

Civil Rights Journey Day 6: Jackson, Mississippi

Sunday, October 23, 2022 We started earlier than usual today in order to catch the 8 am service at New Horizons Church, a majority-Black church in a former strip mall in southern Jackson. The facility was beautifully adapted to its use as a church, and we were warmly welcomed. Most of the music was presented […]

Sunday, October 23, 2022

We started earlier than usual today in order to catch the 8 am service at New Horizons Church, a majority-Black church in a former strip mall in southern Jackson. The facility was beautifully adapted to its use as a church, and we were warmly welcomed. Most of the music was presented by a children’s group, which was very talented.

After the service, we met with the Senior Pastor, Bishop Ronnie Crudup. He described the formation of the church and also went into considerable detail about the political climate in Jackson and more generally in Mississippi. One story that struck me was that the Governor, rather than distribute federal aid from the American Recovery Act, sent at least some of the money back to the federal government saying that it was not needed. This struck me as simply cruel. In any case it is organizations like New Horizons that are developing the community for future leadership.

We then went to a local restaurant for brunch with some local women leaders (and teenage leaders) who are working in the area of voting rights and getting the Black community, and Black women in particular, to participate in the political process and specifically to vote.

After brunch we took our bus to the Mississippi Civil Rights Museum in downtown Jackson. The museum is, perhaps surprisingly, funded by the State of Mississippi. We started out with a short talk by Hezekiah Watkins, one of the original Freedom Riders. We then had some time — although not enough — to explore the museum, which took us from the days of slavery to the present, with an emphasis on the mid-20th century civil rights movement.

Medgar Evers home
Mississippi Civil Rights Museum

We then took the bus to the Medgar Evers home. Evers, the field secretary for the Mississippi NAACP, was shot and killed there in 1963. The house has recently been designated as a national monument, but unfortunately wasn’t open for tours when we were there.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Saturday, 10. December 2022

Jon Udell

Browsing the Fediverse

A month ago, when the Great Discontinuity happened, I started working on a Steampipe plugin to enable SQL queries against the Mastodon API, along with a companion Steampipe “mod” (suite of dashboards) to display and chart the results of those queries. I expect these dashboards will soon be available in Steampipe Cloud, where it will … Continue reading Browsing the Fediverse

A month ago, when the Great Discontinuity happened, I started working on a Steampipe plugin to enable SQL queries against the Mastodon API, along with a companion Steampipe “mod” (suite of dashboards) to display and chart the results of those queries.

I expect these dashboards will soon be available in Steampipe Cloud, where it will take just a few seconds to pop in your Mastodon access token (from, e.g., https://mastodon.social/settings/applications/new) and begin using the dashboards.

Meanwhile, if you’re so inclined, you can find the plugin here and the dashboards here. If you’re reasonably technical you can pretty quickly and easily install Steampipe, clone these repos, build the plugin, and start using the dashboards.

Why would you want to? My own motivation, originally, was to do Mastodon analytics. I thought Steampipe’s SQLification of the API would be a handy way to discern and monitor activity trends during a period of extraordinary flux. And that’s proven to be true, to a limited extent. Here’s a snapshot of the dashboard that uses the instance activity API.

I’m watching this chart with great interest. Where does it go from here? I’m not going to hazard a guess. Everything’s up in the air right now, and anything could happen.

But as I added tables to the plugin to encapsulate more of the Mastodon API, and added dashboards to visualize those tables, my focus shifted. I began to see the suite of dashboards as a Mastodon reader/browser that complements the web and phone clients, and that’s how I mainly use them now.

I think the key benefit is one of Edward Tufte’s core principles: information density. Each of these dashboards shows more activity than you can see at a glance in the web or phone interfaces. I find this very helpful for searching and browsing. When I see items of interest that I want to interact with, I click through to the web app in order to boost, reply, or favorite.

Will this way of browsing Mastodon appeal to you? To get a feel for what it’s like, here are snapshots of some of the dashboards I’ve built so far.

dashboard.Favorites

dashboard.Following

dashboard.Home

dashboard.List

dashboard.Me

dashboard.Notification

dashboard.PeopleSearch

dashboard.StatusSearch

dashboard.TagSearch

For me, at least, this approach has become an effective way to browse the fediverse, find interesting people, read what they boost, and keep track of my own activity.

If you are dev-minded, by the way, please note that these dashboards are just one way to skin the results of queries against the plugin. Any SQL client can connect to Steampipe’s Postgres endpoint. You could use dashboarding tools like Metabase or Grafana, or you could embed Steampipe as a component in an app.
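
For instance, here is the kind of query any connected client could send. It uses the plugin's mastodon_search_hashtag table, which shows up again elsewhere in this series, so read it as a sketch of the general idea rather than a recipe.

-- Any Postgres client pointed at Steampipe's endpoint could run this
select
  name,
  url
from
  mastodon_search_hashtag
where
  query = 'steampipe'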


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Doc Searls Weblog

How the species killing the planet can save it

We live in the Anthropocene, a geological epoch defined by the influence of one species over everything else, including the planet itself. That species is ours, and we are a pestilential one, altering, consuming, and wasting everything we can. Specifically, our civilizations have advanced on the planet like a cancer, parasitically metabolizing materials we call […]

We live in the Anthropocene, a geological epoch defined by the influence of one species over everything else, including the planet itself. That species is ours, and we are a pestilential one, altering, consuming, and wasting everything we can.

Specifically, our civilizations have advanced on the planet like a cancer, parasitically metabolizing materials we call “resources” (without their permission) as if their sums were not finite. Oil and coal will be gone in a few hundred years. Uranium, titanium, tungsten, helium, lithium and other members of the periodic table may be gone far sooner, thanks to our boundless appetites. And yes, we can raise crops of corn and other plants to make fuel for cars and jets, but only at the many costs that monoculture imposes on the biodiversity required to sustain every form of life.

I bring all this up because we’ll be talking about it on Monday at this month’s Ostrom Workshop salon at Indiana University and live on the Web. Our speaker will be Vinay Gupta (@leashless), inventor of the Hexayurt refugee shelter, founder and CEO of Mattereum, a progenitor of Ethereum, and source of wisdom on all that and much else. The title of his talk is the headline above. His case is that we have to get our per-person environmental consumption down by about 10x. Or else.

It helps that there are people and regions in the world providing living examples of how that is done. Vinay is deeply familiar with those and will share what he knows in ways that help us co-think and work to save the planet’s butt, along with our own.

The salon is at 2 PM Eastern time. It is also free, but you need to register first, here.

If this follows the pattern of our prior salons—all with the theme Beyond the Web—the presentation and discussion will be lively, informative, and productive. See you there.


Altmode

Civil Rights Journey Day 5: Incarceration Justice

Saturday, October 22, 2022 Today we are off to Jackson, Mississippi. We are also changing focus from history (enslavement) to contemporary civil rights issues. After breakfast we had a visit from three community members involved in trying to improve the criminal justice system in Louisiana. First we heard from Will Snowden of the Vera Institute. […]

Saturday, October 22, 2022

Today we are off to Jackson, Mississippi. We are also changing focus from history (enslavement) to contemporary civil rights issues. After breakfast we had a visit from three community members involved in trying to improve the criminal justice system in Louisiana. First we heard from Will Snowden of the Vera Institute. He spoke about their initiative to reduce the population in Orleans Parish jails following Hurricane Katrina from about 7200 to 1000, largely by observing the distinction between jails (typically short-term detainees awaiting trial) and prisons (long-term convicted detainees). This distinction gives long-term detainees the potential to have access to education and training programs to help them develop. Vera Institute is working more broadly to improve equity in the criminal legal system in Louisiana as well.

We then heard from Everett “Buck” Ofray and Louis Gibson, both former long-term detainees who had been convicted of second-degree murder. They described their journey from their arrival at Louisiana’s notorious Angola Prison, where they were forced to labor initially without pay and later for only 20 cents an hour. Held without parole, they had to challenge the system to obtain access to education and vocational opportunities. Eventually they worked their way into more and more responsible jobs like maintaining aircraft for the State Police. Changes in law allowed them to eventually be paroled. They described their process of re-integration into society; both are now working to assist other recent detainees. Louis is also working as a paralegal. This was truly an inspirational talk.

We then loaded into our bus for the three-hour drive to Jackson, Mississippi, for the next phase of our journey. We stopped along the way for a picnic lunch and to discuss what we had learned so far.

This is also homecoming weekend, a big deal in Jackson, so we expect everything to be crowded. We had dinner this evening at the Char Restaurant, again a very enjoyable meal. Tomorrow will be an early and full day, so we are turning in early.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Friday, 09. December 2022

Altmode

Civil Rights Journey Day 4: Bayou

Friday, October 21, 2022 After breakfast at our hotel, we got on the bus and went east to visit one of the bayous. Along the way, we stopped and were introduced to a native American couple who explained the many ways that the oil industry in the area had impacted their lives. In addition to […]

Friday, October 21, 2022

After breakfast at our hotel, we got on the bus and went east to visit one of the bayous. Along the way, we stopped and were introduced to a Native American couple who explained the many ways that the oil industry in the area had impacted their lives. In addition to needing to relocate, they were severely affected by the Gulf of Mexico oil well disaster, and continue to endure health threats from toxic emissions from the refineries, many of which have been made legal through lobbying efforts on behalf of the oil companies.

We continued east through St. Bernard Parish, and as we did, we saw many skeletons of dead oak trees. These have apparently died due to increased salinity of the underground aquifer. We saw many homes propped up on platforms, typically 20 feet above ground. Many of these homes were owned not by locals but by sport fishermen as second homes or as rentals. As we looked out into the waterway, it was fairly obvious which boats belonged to the visiting sport fishermen and which were owned by locals. As our hosts explained, the waterway is always in a state of transition. There have been efforts to open new channels from the Mississippi River to the Gulf of Mexico, and other projects that would change the salinity of that portion of the delta. Nobody seems to be sure how long this will last.

Returning to New Orleans, we had lunch at a very pleasant restaurant, Carmo. Our next stop was at a youth development organization called Son of a Saint. Housed in a newly renovated former ice house, Son of a Saint provides mentorship services to boys who have lost their fathers. Engagement with them begins in middle school and extends through their educational career and beyond. They have a very impressive track record with the boys they have mentored.

Photos: Son of a Saint; portion of StudioBE interior

We then traveled to StudioBE, a warehouse art gallery space featuring the art of “BMike”, a local urban artist. As you might expect, BMike’s preferred medium is spray paint, but the amount of expression and detail he is able to impart is truly impressive. But spray paint is not his only medium: there were exhibits in many forms, even simulated video games. Along with the art were words that were in many cases quite profound. This visit definitely broadened my art tastes.

After a short break at the hotel, the group went to dinner at the Windsor Court Hotel, where we had stayed on our visit to New Orleans about 10 years ago. It was a wonderful dinner, definitely too much food but all of it delicious.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.


Jon Udell

When your database is an HTTP client

Here are three things I once hoped — but no longer expect — to outlive: 1. PDF files 2. passwords 3. The occasional need to scrape data from web pages PDF files and passwords are topics for another day, but web scraping is timely. Today I was asked to corral data from the Steampipe blog, … Continue reading When your database is an HTTP client

Here are three things I once hoped — but no longer expect — to outlive:

1. PDF files

2. passwords

3. The occasional need to scrape data from web pages

PDF files and passwords are topics for another day, but web scraping is timely. Today I was asked to corral data from the Steampipe blog, and the path of least resistance was (of course!) to extract it from the site.

I was all warmed up for the exercise because we’d just published a post dramatically entitled “Why build an HTTP client into a database? So you can ingest web data directly!” In that post I show three solutions enabled by the Net plugin’s net_http_request table.

Since the dawn of the web, scraping has worked this way: Use a script to fetch the data, then save it for retrieval and analysis. You might use the script language to query the data, or the query language of a database.

A couple of years ago I found a way to unify those ingredients: Run the script inside the database. You can do a lot with Postgres’ built-in procedural language, and even more if you activate Python inside Postgres. I went deeply into both and explained why in an earlier episode.

PL/Python was great for advanced uses, and I used it for a bit of web scraping too. It all worked fine, and I never thought to ask this question: “What if the database is the HTTP client, and SQL the common way to reason over data coming through that pipe?”

The examples in the post show what that’s like. In its simplest form you write a query like this.

select
  response_status_code,
  jsonb_pretty(response_headers) as headers
from
  net_http_request
where
  url = 'https://steampipe.io'

The result is like this.

+----------------------+-------------------------------------------------------+
| response_status_code | headers                                               |
+----------------------+-------------------------------------------------------+
| 200                  | {                                                     |
|                      |     "Age": [                                          |
|                      |         "45557"                                       |
|                      |     ],                                                |
|                      |     "Date": [                                         |
|                      |         "Fri, 09 Dec 2022 06:46:40 GMT"               |
|                      |     ],                                                |
|                      |     "Etag": [                                         |
|                      |         "W/\"614a142998557b388e053bfa4408cf70\""      |

The response_status_code is a regular Postgres column, the headers column is a JSONB column that you can index into (e.g. headers ->> 'etag'). If you also select the response_body column of the net_http_request table you’ll get another regular Postgres column containing the text of the web page. If it’s HTML text, you can use regular expressions to match patterns in it. If it’s JSON text you can use Postgres’ JSON functions to query and transform it.
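
As a minimal sketch of those two ideas together, the query below reaches into the JSONB headers (whose values are arrays in the output above, hence the -> 'Etag' ->> 0 step) and uses a regular expression to pull the page title out of the response body. Treat it as an illustration of the pattern, not a recipe.

select
  url,
  response_status_code,
  -- header values are arrays, so take the first element as text
  response_headers -> 'Etag' ->> 0 as etag,
  -- naive regex match; fine for a quick look, not for real HTML parsing
  (regexp_match(response_body, '<title>([^<]*)</title>'))[1] as page_title
from
  net_http_request
where
  url = 'https://steampipe.io'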

You can join fields nested in the JSON with other Postgres columns. And those other columns can belong to tables populated in any of the ways Steampipe plugins populate tables: from JSON or GraphQL API endpoints, from CSV or Terraform or Yaml files, from anywhere really. As a developer writing Steampipe queries (and flowing results into dashboards) you see all of these sources as tables, you query them individually in all the ways Postgres can, and you join across diverse sources in a common way.

Of course web pages are structured in ways that regular expressions can’t easily grok. It’s easy to match links, but parsing HTML tag names and attributes is a job for a real parser. I’d made a start on an HTML plugin for Steampipe. There were already two tables: one to extract links from a web page, one to transform HTML tables to CSV format. So today, when tasked with tabulating blog metadata, I added a third table to enable these queries.

-- find the title
select
  page,
  tag_name,
  tag_content
from
  html_tag
where
  page = 'https://steampipe.io/blog/selective-select'
  and tag_name = 'title'

-- list the meta tags
select
  page,
  tag_name,
  tag_attrs
from
  html_tag
where
  page = 'https://steampipe.io/blog/selective-select'
  and tag_name = 'meta'

That’s a powerful way to reason over HTML data! It was easy for me to extend the HTML plugin in this way, and I assure you that I’m no 10x programmer. The Steampipe plugin SDK and the wonderful goquery package are doing all the heavy lifting. I just had to stitch the components together, and if you’re any kind of programmer, with or without Go experience, you could pretty readily do the same.
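
To connect this back to the blog-metadata task, here is the sort of query I have in mind. It assumes tag_attrs is a JSONB map of attribute names to values, which may not match the table's actual shape, so read it as a sketch.

-- Pull Open Graph metadata for one post, assuming tag_attrs is a JSONB map
select
  page,
  tag_attrs ->> 'property' as property,
  tag_attrs ->> 'content' as content
from
  html_tag
where
  page = 'https://steampipe.io/blog/selective-select'
  and tag_name = 'meta'
  and tag_attrs ->> 'property' like 'og:%'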

Thursday, 08. December 2022

Altmode

Civil Rights Journey Day 3: Whitney Plantation

Thursday, October 20, 2022 Our day began with a briefing in a conference room in our hotel. Our leaders spoke for almost two hours on the history of slavery in the US and specifically in New Orleans as background for our visit to Whitney Plantation, about an hour’s bus ride from town. We made it […]

Thursday, October 20, 2022

Our day began with a briefing in a conference room in our hotel. Our leaders spoke for almost two hours on the history of slavery in the US and specifically in New Orleans as background for our visit to Whitney Plantation, about an hour’s bus ride from town. We made it to the plantation about noon, and had a picnic lunch there.

Whitney Plantation is different from most historical sites because it is presented from the viewpoint of the enslaved people, not the plantation owners. We were very fortunate that Yvonne, one of our tour leaders, had worked for Whitney Plantation until very recently and was able to tailor our tour to the theme of our journey. The tour included not only the Big House of the plantation, but also the areas where many of the enslaved people worked, such as the cooking areas, and their living quarters. We also were introduced to the evolution of the plantation, from early days farming indigo to sugar cane farming. There were memorial walls commemorating the many enslaved people who worked at the plantation, giving us an idea of the scale of the slavery at this one plantation.

Photos: Living quarters for enslaved people; memorial walls showing names of enslaved people who had worked at the plantation

While at the Whitney, Konda Mason, an entrepreneur who leads an organization called Jubilee Justice, spoke to us about the work they are doing. Jubilee Justice is working to improve the business climate for Black rice farmers in Louisiana and Arkansas. One of their main initiatives is to establish a cooperative rice mill in order to give the local farmers a more equitable and cost effective way to get their rice to market.

After returning from the Whitney, we took a short break and then had dinner at 9 Roses with several leaders from the local Vietnamese community. New Orleans has a significant Vietnamese community, dating from the 1975 evacuation during the Communist takeover of South Vietnam. Some are engaged in fishing and shrimping, which had been their occupations in Vietnam. Our dinner table included a leader from the real estate industry who was part of that 1975 migration and a Catholic priest who had relatively recently moved to the United States. It was informative to see the similarities and differences between these generations of immigrants. The food was excellent!

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Wednesday, 07. December 2022

Altmode

Civil Rights Journey Day 2: Exploring the French Quarter

Wednesday, October 19, 2022 With the official part of our tour beginning at 3 pm, we had much of the day to explore the French Quarter of New Orleans on our own. We met up with our friends Liz and Peter and started at Café du Monde for their traditional breakfast of beignets and coffee. […]

Wednesday, October 19, 2022

With the official part of our tour beginning at 3 pm, we had much of the day to explore the French Quarter of New Orleans on our own. We met up with our friends Liz and Peter and started at Café du Monde for their traditional breakfast of beignets and coffee. Beignets are fried choux pastries with piles, and I mean piles, of powdered sugar on top. While we were eating, a brass band assembled on the sidewalk next to the Café and started playing. They were very good, a classic New Orleans moment.

The four of us headed over to the nearby Jean Lafitte National Historical Park and Preserve Visitor Center. We got there in time for their 10 am talk. It began with an interesting discussion of the Acadian expulsion from the Canadian Maritimes, and subsequent arrival of many of them in the New Orleans area. This was followed by a talk we were less interested in, discussing medicinal properties of native plants. Afterwards we enjoyed a number of interesting exhibits in the visitor center.

While we were listening to the talks, Liz and Peter went through the Louisiana state museum in The Cabildo, a notable building next to the iconic St. Louis Cathedral. This is the location where the Louisiana Purchase was signed. We understand admission was $10, and not necessarily worth the cost.

We met up with Liz and Peter again for lunch. We had planned to get muffuletta sandwiches, but weren’t sure where to get them since the traditional place, Central Grocery and Deli, was closed due to damage from Hurricane Ida. We found a place in the French Market, Alberto’s, that had good muffulettas but perhaps not quite as rich in olives as those at Central Grocery.

After lunch, we all went to the Historic New Orleans Collection museum on Royal Street near our hotel. Admission was free, and there was plenty to see. Visiting this museum helped us better follow the series of events as control of New Orleans passed from France to Spain, back to France, and finally to the United States. Definitely worth the visit.

At 3 pm our tour officially started. We gathered in a conference room of the hotel for introductions and a brief orientation. We then walked north to Congo Square, just outside the French Quarter. Congo Square was the location where slaves who might be given a Sunday off (due to the strong Roman Catholic influence in the area) would gather, dance and sing, and trade wares. We met up with Luther Gray and Jamilah Peters-Muhammad of the Congo Square Preservation Society who told us of the considerable history of the Square. They taught us drumming and dancing in the traditional style, or at least as close as we could come to that.

We were then met by Erin, an official New Orleans tour guide, who led us on a whirlwind walking tour of some of the French Quarter. Erin had a lot of information, and talked fast to get as much of it to us as possible.

After a few minutes to get cleaned up, the group walked to Galatoire’s, a well known (and fancy) restaurant in the French Quarter. Our group had a private room and a selection of menu items to choose from. I had to try the turtle soup, which resembled a thin chili and was very tasty. My fish entrée and Kenna’s crab hollandaise were both quite good.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Wednesday, 07. December 2022

Identity Woman

Save the Date: APAC Digital Identity unConference, March 1-3, 2023

We are really thrilled to announce the first APAC Digital Identity unConference, March 1-3, Bangkok, Thailand. Registration is now LIVE! Fostering innovation and collaboration among emerging digital identity companies across the APAC region. Welcome reception in the evening of Wednesday, March 1; March 2-3 are full conference days. The event is inspired by the Internet Identity […]

We are really thrilled to announce the first APAC Digital Identity unConference, March 1-3, Bangkok, Thailand. Registration is now LIVE! Fostering innovation and collaboration among emerging digital identity companies across the APAC region. Welcome reception in the evening of Wednesday, March 1; March 2-3 are full conference days. The event is inspired by the Internet Identity […]

The post Save the Date: APAC Digital Identity unConference, March 1-3, 2023 appeared first on Identity Woman.

Wednesday, 07. December 2022

Doc Searls Weblog

When Clouds Crash

Rackspace is in a crater right now, on fire. So are many of its customers. I’m one of them. What happened, Rackspace says, was “the result of a ransomware incident.” Damaged, lost or destroyed is its Hosted Exchange business. On that cloud platform, companies and individuals around the world run their email and much else. It’s quite […]

Rackspace is in a crater right now, on fire.

So are many of its customers. I’m one of them.

What happened, Rackspace says, was “the result of a ransomware incident.” Damaged, lost or destroyed is its Hosted Exchange business. On that cloud platform, companies and individuals around the world run their email and much else.

It’s quite a saga.

The first report from Rackspace came at 11:49pm Pacific (where I am) on Thursday, posted on its Incidents page:

We are investigating an issue that is affecting our Hosted Exchange environments. More details will be posted as they become available.

Updates got more wordy as the outage continued. About a day later, a long posting said,

We proactively powered down and disconnected the Hosted Exchange environment while we triaged to understand the extent and the severity of the impact. After further analysis, we have determined that this is a security incident.

They also offered a lifeline of sorts:

At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice. To activate, please use the below link for instructions on how to set up your account and users.

For reasons not worth going into, this was useless to me. But I’m also just one guy, and almost all of Rackspace’s customers are businesses with more to lose.

Getting help from Rackspace quickly became difficult or impossible, while details about the situation were minimal, until this morning (6 December), when they said it was a ransomware incident. Which countless tweets had been suggesting from the start.

Here is what Rackspace said it was doing about it:

Alongside our internal security team, we have engaged a leading cyber defense firm to investigate. Our investigation is still in its early stages, and it is too early to say what, if any, data was affected. If we determine sensitive information was affected, we will notify customers as appropriate.

No FBI? CIA? NSA? DHS? Police? My guess is that at least two of those have been notified because it would make sense for them to be involved. But I’m sure abundant caution on Rackspace’s part is the reason we’re not hearing about any of that.

As for notifying customers, good luck. In a posting two days ago, Rackspace said,

For those who are finding the process challenging and are awaiting support, we ask for your patience as we increase staff to help every customer. Since our last update, we have mobilized roughly 1000 support Rackers to reduce wait times and address ticket queues. We will continue to accelerate and deploy even more resources to further help customers.

Search for Rackspace+wait on Twitter to see how that’s going.

Yesterday morning I chose the “we’ll call you back asap” option at the Rackspace support number, after calling them fruitlessly before that. Since then, crickets. Meanwhile, I’ve been working almost non-stop on moving my email hosting to Hover, which answers the phone quickly and is always helpful.

Of course, I haven’t been able to export anything from Rackspace, and I have growing doubts that I ever will. If the failure is total, many details of my life in the digital world will be lost forever.

One bit of good fortune is that my wife and sister, who both also have searls.com email addresses, were on Rackspace’s basic non-Exchange email service. Since that was still working today, we could export their mail archive from Rackspace as .mbox files, and start new accounts for them on Hover. (Ironically, I moved to Rackspace’s Exchange service because I punched through the basic service’s 25Gb limit on storage, and they sold me on the Exchange service’s better security.)

Ransomware is bad shit. If you’re Rackspace or one of its customers, there is plenty to fear.

But the larger story here isn’t about Rackspace or its customers. It’s about the cloud business, which is beyond massive.

I’ve been looking for examples of cloud failures that are bigger than this one. Here’s a list from five years ago. Here is one from July of this year. Here is one from August. Most list disruptions lasting hours. This one has been going on for five days with no end in sight.

So let’s imagine that Rackspace and its customers are stuck in that crater, and it just keeps burning. For years. What are the lessons from that?

[Later, on 10 December…] This report by Kevin Beaumont on the Rackspace catastrophe (which continues) is long on technical and administrative details that nobody else seems to be reporting, and is damning to Microsoft as well.


Jon Udell

Mastodon, Steampipe, and RSS

I was determined to write my Mastodon #introduction today. To get started I used the tag search in the dashboard I’m building. The idea was to look at a bunch of other #introduction posts to get a feel for how mine should go. When you search specifically for hashtags, the Mastodon search API returns this … Continue reading Mastodon, Steampipe, and RSS

I was determined to write my Mastodon #introduction today. To get started I used the tag search in the dashboard I’m building.

The idea was to look at a bunch of other #introduction posts to get a feel for how mine should go. When you search specifically for hashtags, the Mastodon search API returns this information.

"hashtags": [
  {
    "name": "introduction",
    "url": "https://mastodon.social/tags/introduction",
    "history": [
      {
        "day": "1574553600",
        "uses": "10",
        "accounts": "9"
      },
      // ...
    ]
  },

A first version of the dashboard, having only this data to work with, just listed the names of tags matching the search term along with corresponding URLs. Here was the initial query.

select name, url from mastodon_search_hashtag where query = 'introduction'

That produced a list of links, like https://mastodon.social/tags/introduction, to home pages for variants of the tag. These are useful links! Each goes to a page where you can see who is posting to the tag.

To make this view slightly more useful, I tapped the third element of the API response, history, in a revised query.

with data as (
  select
    name,
    url,
    ( jsonb_array_elements(history) ->> 'uses' )::int as uses
  from
    mastodon_search_hashtag
  where
    query = 'introduction'
)
select
  name,
  url,
  sum(uses)
from
  data
group by
  name,
  url
order by
  sum desc

These results help me decide which variant to use.

+-------------------+---------------------------------------------------+------+
| name              | url                                               | sum  |
+-------------------+---------------------------------------------------+------+
| introduction      | https://mastodon.social/tags/introduction         | 1816 |
| introductions     | https://mastodon.social/tags/introductions        | 218  |
| introductionpost  | https://mastodon.social/tags/introductionpost     | 19   |
| introductionfr    | https://mastodon.social/tags/introductionfr       | 6    |

But I still need to visit each link’s page to explore how it’s being used. It would be nice to surface more context in the dashboard, and I found a nifty way to do it, but first let’s dwell on the revised query for a minute. Postgres’ JSON features are powerful and it’s often a challenge (at least for me) to visualize how they work.

The Postgres jsonb_array_elements() function is what’s called a set-returning function. Here it unpacks Postgres’ JSON representation of the list of history structures returned from the Mastodon API. In its simplest form, the function call jsonb_array_elements(history) produces a temporary table with per-tag, per-day data.

select
  name,
  jsonb_array_elements(history) as history
from
  mastodon_search_hashtag
where
  query = 'introduction'

+--------------------------------+----------------------------------------------------+
| name                           | history                                            |
+--------------------------------+----------------------------------------------------+
| introduction                   | {"accounts":"16","day":"1670371200","uses":"19"}   |
| introduction                   | {"accounts":"250","day":"1670284800","uses":"269"} |
| introduction                   | {"accounts":"259","day":"1670198400","uses":"274"} |
| introduction                   | {"accounts":"253","day":"1670112000","uses":"270"} |
| introduction                   | {"accounts":"245","day":"1670025600","uses":"269"} |
| introduction                   | {"accounts":"345","day":"1669939200","uses":"383"} |
| introduction                   | {"accounts":"307","day":"1669852800","uses":"339"} |
| introductionsfr                | {"accounts":"0","day":"1670371200","uses":"0"}     |
| introductionsfr                | {"accounts":"0","day":"1670284800","uses":"0"}     |
| introductionsfr                | {"accounts":"0","day":"1670198400","uses":"0"}     |
| introductionsfr                | {"accounts":"0","day":"1670112000","uses":"0"}     |
| introductionsfr                | {"accounts":"0","day":"1670025600","uses":"0"}     |

history is a JSONB column that holds an object with three fields. The revised query uses Postgres’ JSON indexing operator ->> to reach into that object and hoist the number of daily uses into its own column, so it can be the target of a SQL SUM function.

OK, ready for the nifty solution? Recall that https://mastodon.social/tags/introduction is the home page for that variant of the tag. There you can see introduction posts from people using the tag. Those posts typically include other tags. In the dashboard shown above you can see that Kathy Nickels is using these: #Music #Art #Equestrian #Nature #Animals. The tags appear in her introduction post.

I didn’t immediately see how to capture them for use in the dashboard. Then I remembered that certain classes of Mastodon page have corresponding RSS feeds, and wondered if the tag pages are members of one such class. Sure enough they are, and https://mastodon.social/tags/introduction.rss is a thing. That link, formed by tacking .rss onto the base URL, provides the extra context I was looking for. Here’s the final version of the query.

with data as (
  select
    name,
    url,
    ( jsonb_array_elements(history) ->> 'uses' )::int as uses
  from
    mastodon_search_hashtag
  where
    query = 'introduction'
),
uses as (
  select
    name,
    url || '.rss' as feed_link,
    sum(uses) as recent_uses
  from
    data
  group by
    name,
    url
)
select
  u.name,
  r.guid as link,
  to_char(r.published, 'YYYY-MM-DD') as published,
  r.categories
from
  uses u
join
  rss_item r
on
  r.feed_link = u.feed_link
where
  recent_uses > 1
order by
  recent_uses desc,
  published desc

The new ingredients, courtesy of the RSS feed, are: guid which links to an individual introduction like Kathy’s, published which is the day the introduction appeared, and categories which has the tags used in the introduction post. Sweet! Now I can scan the dashboard to get a sense of which introductions I want to check out.
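
If you want to see just the RSS side of that join, a standalone query against the rss_item table looks roughly like this, using the same columns named above and the .rss URL formed earlier.

-- Preview one tag feed on its own via the RSS plugin
select
  guid,
  to_char(published, 'YYYY-MM-DD') as published,
  categories
from
  rss_item
where
  feed_link = 'https://mastodon.social/tags/introduction.rss'
order by
  published desc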

The first three queries use the Steampipe plugin for Mastodon, and in particular its mastodon_search_hashtag table which encapsulates the Mastodon API for searching tags. The final version joins that table with the rss_item table provided by the RSS plugin, using the common base URL as the basis of the join.

This delights me in so many ways. When the blogosphere first emerged in the early 2000s, some of us discovered that the RSS protocol was capable of far more than just delivering feeds to RSS readers. The other new hot protocol in that era was XML web services. As an InfoWorld analyst I was supposed to be cheering the latter as an enterprise-grade technology, but I couldn’t help noticing that RSS kept turning out to be a great way to move data between cooperating systems. That’s always been true, and I love how this example reminds us that it’s still true.

I’m equally delighted to show how Steampipe enables this modern exercise in RSS-powered integration. Steampipe was, originally, an engine for mapping results from JSON API endpoints to SQL tables. Over time, though, it has broadened its view of what constitutes an API. You can use Steampipe to query CSV files, or Terraform files, or — as we see here — RSS feeds. Data comes in all kinds of flavors. Steampipe abstracts those differences and brings all the flavors into a common space where you can reason over them using SQL.

And finally, it’s just wonderful to be at the intersection of Mastodon, Steampipe, and RSS in this remarkable moment. I’ll readily admit that nostalgia is a factor. But RSS did bust things wide open twenty years ago, Mastodon’s doing that now, and I love that RSS can help it happen again.

Now I need to write that #introduction!


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Aaron Parecki

OAuth for Browser-Based Apps Draft 12

I just published a revised version of OAuth for Browser-Based Apps based on the feedback and discussion at IETF 115 London!

I just published a revised version of OAuth for Browser-Based Apps based on the feedback and discussion at IETF 115 London!

https://www.ietf.org/archive/id/draft-ietf-oauth-browser-based-apps-12.html

The primary changes are:

Rephrased the architecture patterns to focus on token acquisition
Added a new section about the various options available for storing tokens
Added a section on sender-constrained tokens and a reference to DPoP
Added a section discussing why not to use the Cookie API to store tokens

At this point there are no open issues on GitHub, and I have nothing else I am planning on adding to the document. Please review if you are interested and let me know if you have any further suggestions!

Tuesday, 06. December 2022

Altmode

Civil Rights Journey Day 1: To New Orleans

Continuing with our tradition of publishing a journal of significant travels, this begins a journal of our recent trip to the US South, focused on Race, Equity, and American Identity. Posts will be added approximately daily, and are delayed 7 weeks from the actual events. Tuesday, October 18, 2022 Kenna and I left today on […]

Continuing with our tradition of publishing a journal of significant travels, this begins a journal of our recent trip to the US South, focused on Race, Equity, and American Identity. Posts will be added approximately daily, and are delayed 7 weeks from the actual events.

Tuesday, October 18, 2022

Kenna and I left today on what will be our first fully-guided land trip. This is a thematic trip entitled Race, Equity, & American Identity that was organized by SV2, a local philanthropic organization that Kenna and I are partners in. The trip was planned and will be led by Telos, an organization that is best described by their mission statement:

We form communities of American peacemakers across lines of difference, and equip them to help reconcile seemingly intractable conflicts at home and abroad.

Telos mission statement

We have had a couple of pre-trip meetings with our group. The group consists of about 16 people, which includes our friends Liz and Peter who expressed interest when Kenna told them about the trip. We decided to leave a day early to get acclimated to the time change, to have some time to re-explore New Orleans on our own, and to avoid missing any of the program on the actual first day.

Our travel today was wonderfully drama-free. The ride to the airport was right on time, we met up with Peter and Liz at SFO airport, and our flight was on-time and comfortable. We got to our hotel, the Omni Royal, in the late afternoon with enough time to walk around and explore. We walked around the French Quarter quite a bit, through Jackson Square, and down to the Mississippi River waterfront. The river seemed to be a little low but not as much as we expected.

While walking around, we read various menus to get ideas for dinner. We were in a touristy part of the French Quarter, so many places seemed the wrong style, too expensive or both. Consulting online resources, we decided on a high-rated barbecue place named Pig Out. It was very small and informal, and the food was excellent.

Having had just a few hours of sleep last night, we decided to turn in early, even considering the two hour time change.


Information Answers

GDER Anyone?

I was asked to give a talk last week on how I thought the ‘data intermediaries’ space would evolve over the next few years. I used […]

Damien Bod

Implement Feature Management in Blazor ASP.NET Core

The post shows how feature toggles or feature switches can be implemented in an ASP.NET Core application using Blazor. The Microsoft.FeatureManagement Nuget package is used to add the feature toggles. Code: https://github.com/damienbod/AspNetCoreFeatures.Toggles Setup The Blazor application is a simple ASP.NET Core hosted application using Azure AD for the identity provider. Both the Server and th

The post shows how feature toggles or feature switches can be implemented in an ASP.NET Core application using Blazor. The Microsoft.FeatureManagement Nuget package is used to add the feature toggles.

Code: https://github.com/damienbod/AspNetCoreFeatures.Toggles

Setup

The Blazor application is a simple ASP.NET core hosted application using Azure AD for the identity provider. Both the Server and the Client applications need to implement the feature toggle.

The Blazor Server project is an ASP.NET Core application. This uses the Microsoft.FeatureManagement.AspNetCore Nuget package. The Client application is a Blazor WASM project and uses the Microsoft.FeatureManagement package.

Blazor Server

The AddFeatureManagement method is used to add the feature management services. I added a feature-specific service which is only registered if the feature is activated. This is read from the server app settings.

services.AddFeatureManagement();

var featureXEnabled = configuration.GetValue<bool>("FeatureManagement:FeatureX");
if (featureXEnabled)
{
    services.AddScoped<FeatureXService>();
}

The FeatureXApiController is used to make the feature's business logic available to the Blazor UI. In this example, the IFeatureManager is used to check if the feature is enabled. If enabled, the business logic is called; otherwise a 404 is returned.

private IFeatureManager _featureManager;
private readonly FeatureXService _featureXService;

public FeatureXApiController(IFeatureManager featureManager,
    FeatureXService featureXService)
{
    _featureManager = featureManager;
    _featureXService = featureXService;
}

[HttpGet]
public async Task<IActionResult> GetAsync()
{
    var featureX = await _featureManager
        .IsEnabledAsync(Features.FEATUREX);

    if (featureX)
    {
        return Ok(new List<string>
        {
            "some data",
            "more data",
            _featureXService.GetFeatureString()
        });
    }

    return NotFound();
}

The ASP.NET Core Server application uses the default FeatureManagement app settings. Only one feature was added to this application.

{ "FeatureManagement": { "FeatureX": true } }

Blazor WASM Client

The AddFeatureManagement method is also used to add feature management to the Blazor WASM UI.

builder.Services.AddFeatureManagement();

The feature check can be used by injecting the IFeatureManager interface. The IsEnabledAsync can be used to check if the feature is enabled.

@page "/featurexapi"
@inject IAntiforgeryHttpClientFactory httpClientFactory
@inject IJSRuntime JSRuntime
@using Microsoft.FeatureManagement
@using AspNetCoreFeatures.Toggles.Shared;
@inject IFeatureManager _featureManager

<h1>Data from Feature X API</h1>

@if (apiData == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Data</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var data in apiData)
            {
                <tr>
                    <td>@data</td>
                </tr>
            }
        </tbody>
    </table>
}

@code {
    private string[]? apiData;
    public bool FeatureXEnabled { get; set; }

    protected override async Task OnInitializedAsync()
    {
        var featureX = await _featureManager.IsEnabledAsync(Features.FEATUREX);
        if (featureX)
        {
            var client = await httpClientFactory.CreateClientAsync();
            apiData = await client.GetFromJsonAsync<string[]>("api/FeatureXApi");
        }
    }
}

The Blazor WASM requires app settings with the feature definitions. This is deployed to the wwwroot folder in an ASP.NET Core hosted application.

{ "FeatureManagement": { "FeatureX": true } }

The feature switch can also be used in the Razor UI components. The component can use OnInitializedAsync to check if a feature is enabled and set a bool value which can be used in the UI.

@using Microsoft.FeatureManagement
@using AspNetCoreFeatures.Toggles.Shared
@inject IFeatureManager _featureManager

<div class="top-row pl-4 navbar navbar-dark">
    <a class="navbar-brand" href="">Blazor AAD BFF Cookies</a>
    <button class="navbar-toggler" @onclick="ToggleNavMenu">
        <span class="navbar-toggler-icon"></span>
    </button>
</div>

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <AuthorizeView>
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                        <span class="oi oi-home" aria-hidden="true"></span> Home
                    </NavLink>
                </li>
                @if (FeatureXEnabled)
                {
                    <li class="nav-item px-3">
                        <NavLink class="nav-link" href="featurexapi">
                            <span class="oi oi-list-rich" aria-hidden="true"></span> Feature X API call
                        </NavLink>
                    </li>
                }
            </Authorized>
            <NotAuthorized>
                <li class="nav-item px-3">
                    <p class="whiteColor">no access</p>
                </li>
            </NotAuthorized>
        </AuthorizeView>
    </ul>
</div>

@code {
    private bool collapseNavMenu = true;
    public bool FeatureXEnabled { get; set; }

    protected override async Task OnInitializedAsync()
    {
        FeatureXEnabled = await _featureManager.IsEnabledAsync(Features.FEATUREX);
    }

    private string? NavMenuCssClass => collapseNavMenu ? "collapse" : null;

    private void ToggleNavMenu()
    {
        collapseNavMenu = !collapseNavMenu;
    }
}

The feature can now be turned on or off per configuration.

The Microsoft.FeatureManagement Nuget package provides many more feature management helpers and is really helpful when you need to support feature toggles or feature switches in a software solution or product.

Links

https://dev.to/karthikchintala/the-comprehensive-guide-to-feature-management-in-aspnet-core-23d1

https://learn.microsoft.com/en-us/azure/azure-app-configuration/quickstart-feature-flag-aspnet-core?tabs=core6x

https://learn.microsoft.com/en-us/dotnet/api/microsoft.featuremanagement?view=azure-dotnet-preview

https://learn.microsoft.com/en-us/azure/azure-app-configuration/use-feature-flags-dotnet-core?tabs=core5x

Monday, 05. December 2022

Werdmüller on Medium

Work I’m proud of

Building a more diverse news media Continue reading on Medium »

Building a more diverse news media

Continue reading on Medium »


MyDigitalFootprint

What is the toughest position? A CEO or a goalie?

Which one of the roles is the most challenging, as both are isolated and unforgiving? In either position, the opposition will not like you when you're doing well.  When doing badly, your supporters are not on your side.  Irrespective of if it is a good or bad day in the office, everyone else knows they can do your job better than you can, and your backers are never happy, always wanting

Which one of the roles is the most challenging, as both are isolated and unforgiving? In either position, the opposition will not like you when you're doing well.  When doing badly, your supporters are not on your side.  Irrespective of whether it is a good or bad day in the office, everyone else knows they can do your job better than you can, and your backers are never happy, always wanting more.  So what is the toughest position? 

Being ahead If you're thinking three, four, or five plays ahead, you're not suitable for this role.

There is a reality to both roles that you have to be ahead, to be in the right position at the right time to ensure success, but everyone else just thinks you are lucky.  Anyone can do the job, but those who can see signals and read the play will excel.  

Blame No matter how many great things you do, everyone only remembers you for the one you missed

In either position, you have to accept that you’re the last person in the line of defence, and you will regularly take the brunt of the criticism when it goes wrong. But these abilities, to remain humble and stay positive combined with power, agility, and quickness, make those in these positions the strongest leaders.

Touchline criticism They need an element of insanity, as there is no hiding place.

Criticisms from the sideline can be heard when you stand in one spot; their views are loud and clear.  Abuse is particularly annoying when you know the media and crowd have zero experience with your role and do not know the context. 

The bottom line is that you have to ignore all those distractions. People on the sidelines are fickle. Anything you do — good or bad — will alter their opinions on you pretty quickly. Stick to what you're good at, and you’ll prove them wrong.


Preparation Success depends upon previous preparation; without such preparation, there will surely be failure.

At lower levels, particularly in SMEs and youth football, there’s a tendency to stick the CEO in position or the keeper in goal and pelt multiple objects their way from point-blank range. It’s one of the worst ways for a leader to prepare.  A leader will take charge and create proper warm-up routines, which others may feel do not warm them up, and this creates tension and conflict.

Messy So there’s an enormous blue frog with your name on it.

The CEO deals with people and all the messy complications people bring to any team.  A goalie has to deal with a muddy puddle that is cold and wet.  Those who lead deal with how they will approach it before they play and not during.


Training Coaching is not the filling of a pot but the lighting of a fire

It’s 10 of them for every 1 of you. Unfortunately, CEOs and goalkeepers are regularly neglected in team coaching and training sessions. With so many players to work on, leaders and keepers don’t receive the attention they need as they are unique and special; however, they are the ones who can lift everyone else.  


Second chance Is it unprecedented for two competing CEOs or goalkeepers to both enjoy the result?

Being a CEO or goalkeeper is the most unforgiving position and offers little opportunity for redemption. Sales directors and strikers, for example, can make several poor efforts at a goal and then “get into” the game. If they land the deal or score, those poor efforts are entirely forgotten. 

When you step out, you have to be ready because redemption is an uphill struggle. If you make a mistake, you may not get another chance to put it right. You can only keep your head up, stay focused, and learn from it.

Alone on a bad day  Only a CEO can understand the challenges faced by a goalkeeper.

There’s nothing worse than the journey home when you’re kicking yourself over a mistake you made. It’s those key moments that ruin an evening or weekend for everyone. CEOs and goalkeepers truly carry the burden of their errors. No other position holds itself singularly responsible. With that in mind, it’s unsurprising that some professionals have to quit for a lifestyle with a little less stress. Mental strength, a positive attitude and the determination to go again are required. Wounds will heal. CEOs and keepers are easily victimised by their high level of responsibility. It takes guts to be one.

With all that said Only an outstanding leader can make their team victorious.

Saturday, 03. December 2022

@_Nat Zone

Having ChatGPT explain ID tokens and access tokens in the style of the King James Bible

People in the identity community have been having fun getting it to explain all sorts of things, and among…

People in the identity community have been having fun getting it to explain all sorts of things, and Sarah Cecchetti, who heads AWS Cognito, got it to produce an especially entertaining piece.

A. Maze. Ing. This one is for @vibronet pic.twitter.com/MQ0vjFqbdk

— Sarah Cecchetti, CIDPRO (@Sarah_Cecc) December 2, 2022
The prompt was:

Write a biblical parable in the style of the King James Bible explaining why one should only use access tokens and never ID tokens for access management

(Source) @Sarah_Cecc

In other words:

Explain, in the style of the King James Bible, why one should use only access tokens, and never ID tokens, for access management.

The result is really amusing. The explanation of the ID token is wrong, but it sounds very much like the King James Bible. Here it is, via DeepL translation.

And it came to pass that a wise king in the land of Webdom sought to protect his kingdom from those who would do it harm. And the king commanded his servants to put in place an access management system, so that only those with rightful access could pass through the gates of the kingdom.

But there were two kinds of tokens the king's servants could use: access tokens and ID tokens. The servants, however, knew little of security and did not know which to use.

So the king summoned his advisor and asked him to explain the difference between the two tokens.

"Access tokens are temporary, and are used to access particular resources within the kingdom. ID tokens, on the other hand, are permanent and contain sensitive information about their bearer. Should an ID token fall into the hands of the wicked, it could grant access to the entire kingdom."

The king was greatly troubled by this, and commanded his servants to use only access tokens for access management, lest the security of the kingdom be put at risk.

"It is better," the king declared, "to grant limited access to the few than to endanger the kingdom by granting unrestricted access to all."

The servants of the kingdom heeded the king's words, and the kingdom remained secure.

(Source) From the ChatGPT output generated by @Sarah_Cecc 1

Incidentally, when I threw the same question at it myself, I got back a different version. Being an AI, the results are not stable. Although it is at least consistent in that this "wise king" makes the same mistake here too.

A wise king decreed that all who would enter his kingdom must first obtain an access token, for the kingdom was a place of great treasure and wisdom, and the king wished to protect it from those who would do it harm.

But there were some who did not heed the king's command and sought to enter the kingdom by means other than an access token. These were fools who, thinking themselves clever, tried to use ID tokens instead.

The king, however, was wise and had foreseen this. He posted guards at the gates of the kingdom so that only those bearing a proper access token could enter. The fools who thought themselves clever were turned away at the gate, and their ID tokens availed them nothing.

The wise took the king's command to heart and sought only to obtain access tokens to enter the kingdom, for the access token alone was the true way in, and all other means were in vain.

And the king rejoiced, for his kingdom was safe and secure, and only the worthy could enter. And all who sought to enter the kingdom by other means were left outside, unable to reach the treasure and wisdom within.

Therefore, let all who would enter the wise king's kingdom heed this parable and obtain an access token. And let all who would enter by other means be warned, for they shall be turned away at the gate, and their ID tokens shall avail them nothing.

(Source) Generated by ChatGPT 2

Shall I try writing a continuation of this story?

One day, a man stood before the gates of the kingdom. He was a man of an enemy nation, but the gatekeeper did not know this. The man carried a proper access token he had obtained from somewhere. The gatekeeper verified it and let the man into the kingdom. And thus all the treasure and wisdom of the kingdom passed to the enemy nation, and before long the kingdom fell.

(Source) The author

There are many scenarios you can imagine for how that could come about. Please write yours in the comments or reply on Twitter!

Having ChatGPT explain ID tokens and access tokens in the style of the King James Bible https://t.co/rejJ7uj2V7 via @_nat

— Nat Sakimura/崎村夏彦 (@_nat) December 3, 2022

Friday, 02. December 2022

Aaron Parecki

The Laws of OAuth

The first law of OAuth states that the total number of authorized access tokens must remain constant in an isolated system.

The first law of OAuth states that the total number of authorized access tokens must remain constant in an isolated system.

The second law of OAuth states that the overall security of the system will always increase or remain constant over time.

The third law of OAuth states that as the security of the system approaches absolute, the ability to grant authorized access approaches zero.

The zeroth law of OAuth states that if two systems are in authorization equilibrium with a third system, they are also in authorization equilibrium with each other. In other words, if two systems have the same access rights and permissions, they are considered to be in a state of authorization equilibrium.

Written by AI.


Heres Tom with the Weather


ian glazers tuesdaynight

Controls Intelligence in the Greater Whole

I was talking to a long-time competitor/colleague/client/friend this week about identity governance and a variety of other identity topics. We were commenting that in some regards access certification and access policies have been stuck in a bubble of amber: not a lot of innovation save the addition of some cluster analysis (marketed as AI.) In … Continue reading Controls Intelligence in the Greater

I was talking to a long-time competitor/colleague/client/friend this week about identity governance and a variety of other identity topics. We were commenting that in some regards access certification and access policies have been stuck in a bubble of amber: not a lot of innovation save the addition of some cluster analysis (marketed as AI). In the course of the conversation I remembered that a long time ago I had written a piece on the use of negative policy spaces for access governance. My buddy thought it would be fun to dig it up and repost it. So off I went to find this…

What’s funny (at least to me) is that what follows is a writing sample I used as part of the interview process to get my first analyst job at Burton Group. And that brought back a lot of memories…

So without further ado, straight out of 2008, I bring you:

Controls Intelligence in the Greater Whole – Using Negative Authorizations to satisfy Audit Requirements and strengthen Positive Authorization Policies

Executive Summary

Whether conscious of it or not, no enterprise embarks on a controls exercise, be it controls definition, management, monitoring, or rationalization, unless that exercise addresses audit requirements.   Auditors and regulators have defined the backdrop against which a variety of corporate stakeholders must perform an ever-changing array of maneuvers to prove compliance.  Within this context, controls intelligence platforms and processes have developed to directly satisfy audit requirements.  In contrast, identity management technologies and other “compliance” tools are not truly aware of the constraints and requirements that auditors inflict upon organizations and are fundamentally not designed to meet those needs.  This piece will contrast the difference between controls intelligence platforms and their associated negative authorization policies against identity management technologies and their positive authorization policies, illustrating the appropriate use of both in the eyes of the auditors as well as the enterprise.

The Auditors’ Rise to Power

It sometimes seems as if there were no audit regulations before Sarbanes-Oxley, but of course there were.  Various industries had their own set of regulations and subsequent audit requirements to meet.  SOX, however, does not discriminate among industries and with its introduction all manner of enterprise turned to their auditors for guidance.  The power and importance of the external auditor grew significantly; on one side, their clients were asking for guidance and on the other government and regulators were looking for results.  This put auditors in control and control they did; they defined stringent audit requirements and testing procedures.  They design the obstacle course that their clients have had to maneuver year after year.

Being Negative Never Seemed So Good

Simply put, auditors are interested in what people cannot do. They want to verify that people who can execute one task cannot execute another task that is in opposition to the first. For example, an auditor will examine business systems to validate that the collection of capabilities granted to an inventory clerk does not enable the clerk to see the pay grades of her coworkers, as well as authorize payment for goods she ordered. Auditors require that the analysis of these collections of capabilities must be executed at the lowest level of the audited system, ensuring that no convoluted permutation of capability components results in conflicts that cannot be detected at higher levels and thus cannot be mitigated. This analysis of complex systems and processes is expressed using negative authorizations – explicit statements about which capability components are in conflict with others.

Compliance with audit requirements is evidence that the organization has not allowed to transpire what, according to auditors, should not be allowed to happen.  This holds true for both automatable processes and manual ones.  The lack of evidence that the organization required all of its managers to attend sexual harassment training, and thus educated managers appropriately, can be sufficient to run afoul of audit and/or regulatory requirements.  Similarly, evidence that a business system allows the inventory clerk to see her coworkers’ pay grades as well as authorize payment for goods she ordered can be sufficient to violate an audit requirement.  Controls intelligence and controls documentation platforms are designed to discover the existence of such potential violations before the auditor discovers them.  These tools, from a testing and monitoring perspective, utilize negative authorizations to discover, on an ongoing as well as episodic basis, potential violations in both manual and automatable processes.

The Downside of being Positive

Identity Management technologies deal with entitlement: what people of a certain type are allowed, and subsequently enabled, to do.  This holds true for a variety of identity management technologies including user provisioning, bottom-up and top-down role management, as well as entitlement management.  All of these technologies use a form of an entitlement which groups people together and grants them a collection of capabilities.  The explicit granting nature of these entitlements is a form of positive authorization.

The primary goal of entitlement-based identity management systems is to realize operational efficiencies by granting and managing people’s granted abilities faster, with more automated control and finesse.  A secondary goal is the self-service enablement of users in order to reduce cost.  These tools are invaluable to IT and help reduce the overall cost of supporting the enterprise’s user population.  That being said, these tools cannot truly meet the needs of auditors on their own.

Auditors are not ignorant of positive authorization-based systems.  They are well aware that these systems help provide access to business systems and, at the same time, can pollute critical business systems with audit exceptions.  One of the inherent reasons that these positive authorization systems can so easily generate problems is that they concern themselves with collections of capabilities at the highest level.  This allows entitlements to be built and managed quickly, helping to realize operational efficiencies, but does not allow for the detailed analysis that auditors require and that negative authorization platforms provide.

Using Negative Authorizations to satisfy Audit Requirements

While the efficiencies gained from positive authorization systems, like user provisioning, are great from a bottom-line perspective, those efficiencies are irrelevant to auditors – they don’t care how quickly an enterprise grants access to business applications.  To satisfy audit requirements, an enterprise must be able to demonstrate how people cannot execute certain tasks and how their capabilities are not in conflict, and must be able to demonstrate this among the tiny pieces that comprise a capability.  To accomplish this, the enterprise must rely on negative authorizations and the depth of analysis that controls intelligence platforms provide.

Unaware of their choice, many enterprises choose to codify and modify negative authorizations through manual efforts.  They employ brute-force spreadsheet-based analysis of both manual processes and business systems to demonstrate that they have an enforced, auditable collection of negative authorization policies.  Effective to a point and costly to an extreme, these efforts have been tolerably sufficient in the past, but with changes to auditor guidance, such as AS5, and financial and competitive pressures, enterprises are turning to automated solutions, eschewing these manual efforts.  Enterprises that choose controls intelligence platforms, those that automate controls management and monitoring, find benefits in reduced audit preparation costs, easier and more predictable audits, and stronger deployments of their positive authorization systems.

Harmonizing of Negative and Positive Authorizations

As previously mentioned, positive authorization systems, like identity management technologies, provide a conduit along which potential exceptions and audit findings travel straight into business systems.  In order to intercept these potential audit exceptions and, in turn, meet audit requirements, enterprises must examine their positive authorization-based policies through the filter of negative authorizations.

There are two key points at which controls intelligence, negative authorizations, can filter positive authorization-based identity management technologies: runtime and design-time.  At runtime, proposed account changes, generated by user provisioning systems, derived from positive authorization-based entitlements, pass through the filter of a controls intelligence platform.  This negative authorization filter highlights potential exceptions that exist in the proposed account change and does so by using the language that the auditors speak at a level of detail that the auditors expect.  This process, known as compliant provisioning, provides a safety-net for critical business applications, preventing potential audit findings from finding their way into the business applications in the first place.
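
As a rough sketch of the runtime check just described, a proposed account change can be passed through the negative authorization filter before it ever reaches the business application. This builds on the hypothetical Conflict and NegativeAuthorizations types from the earlier sketch; the structure is an assumption for illustration, not a description of any particular controls intelligence product.

using System.Collections.Generic;
using System.Linq;

// A proposed account change: the user's existing capabilities plus the capabilities the
// provisioning system wants to add on top of them.
public record ProposedChange(string UserId, ISet<string> ExistingCapabilities, ISet<string> CapabilitiesToAdd);

public static class CompliantProvisioning
{
    // Approve only if the combined capability set violates no negative authorization rule;
    // otherwise return the violations so they can be remediated before provisioning.
    public static bool Approve(ProposedChange change, out IReadOnlyList<Conflict> violations)
    {
        var combined = new HashSet<string>(change.ExistingCapabilities);
        combined.UnionWith(change.CapabilitiesToAdd);

        violations = NegativeAuthorizations.FindViolations(combined).ToList();
        return violations.Count == 0;
    }
}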

Although compliant provisioning is a highly effective method of preventing potential exceptions, design-time application of negative authorization filters is a superior long-term approach.  This approach not only reduces potential audit exceptions, but also increases the value of enterprise identity management deployments.  Design-time application of negative authorizations works as follows: as entitlements are developed in positive authorization systems, they are passed through the filter of a controls intelligence platform.  The negative authorization analysis of these entitlements, episodically and/or continually, highlights flaws in these entitlements and helps organizations prevent these “bad” entitlements from generating fundamentally flawed role definitions, provisioning policies and account changes, and access entitlements.  Further, by relying on the filter of negative authorizations at design time, organizations can avoid costly and time-consuming entitlement exercises in which subject matter experts on critical business systems attempt to translate what they know an auditor wants to see (negative authorizations) into convoluted positive authorizations.

Both the runtime and design-time approaches harmonize positive and negative authorizations.  This harmonization satisfies audit requirements for complete, detailed negative authorization analysis and reduces potential audit findings.  It also retains all of the operational benefits of identity management systems, reduces the cost of deploying these systems, and shortens their time to value.

Recommendations

To satisfy both internal and external audit requirements, the enterprise must consolidate and rationalize its negative authorization policy stores.  This includes transitioning from manual-effort analysis and disjoint controls documentation to continuous, automated controls testing whose results flow into a consolidated controls documentation platform.  These efforts reduce enterprise risk and audit exposure and serve to directly meet audit requirements.

Meanwhile, the enterprise should continue deploying identity management technologies and their positive authorizations in order to realize operational gains.  That being said, the enterprise should not attempt to make these positive authorization systems behave like negative authorization systems, as the effort will be time-consuming and costly and will fundamentally not meet audit requirements.

Finally, the enterprise should tie their negative authorizations to their positive authorization systems.  At a minimum, the enterprise should deploy compliant provisioning, passing entitlement-generated account changes through the filter of a controls intelligence platform for negative authorization analysis.  Ideally, during creation and on-going maintenance, identity management entitlements, in their various forms, should be examined through negative authorization filters, weeding out potential audit findings long before they can find their way into business applications.

By using negative authorizations and thus gaining control intelligence, the enterprise can speak the language of the auditor and rise to the challenge of an ever-changing set of audit requirements.  By using negative and positive authorization in harmony, the enterprise can realize operational efficiencies without an increased risk of potential audit findings.

Thursday, 01. December 2022

Foss & Crafts

53: Fediverse reflections while the bird burns

Twitter is burning, and people are flocking to the fediverse. Is the fediverse ready though? How did we get here? Where should we be going? Since Christine is co-author of ActivityPub, the primary protocol used by the fediverse, Morgan decides it's time to get Christine's thoughts recorded and out there... so we hop in the car as we talk all about it!

Links:

ActivityPub, the protocol which wires the federated social web together, of which Christine is co-author! Be sure to check out the Overview section... it's actually fairly easy to understand!

Some of the implementations discussed (though there are many more):

Mastodon

Peertube

Pixelfed

Pleroma

A lot has been written about Elon Musk's takeover of Twitter. Here's a pretty decent timeline (though it's missing the transphobia stuff).

W3C Social Web Working Group is where ActivityPub was standardized

OcapPub (while not complete, it lays out a lot of the core problems with the way the fediverse has gone)

The Spritely Institute

Previous episodes on Spritely: What is Spritely?, Spritely Updates! (November 2021), and sorta kinda the Terminal Phase episode

The Presentation of Self on a Decentralized Web (PhD dissertation by ActivityPub co-author Amy Guy, partly covers its standardization)

SMTP and XMPP can be seen as decentralized "social networks" before that term took off

OStatus

pump.io is where the pump.io API came from, which is the direct predecessor to ActivityPub

StatusNet / GNU Social

Diaspora

MediaGoblin

APConf videos

Context Collapse

Early writeups from Christine on some of these ideas, but they are old:

ActivityPub: from decentralized to distributed social networks

magenc

crystal

golem


Orie Steele

Experiments in Interoperability with the DID Actor API

As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry. These articles are self-chosen topics an employee feels passionate about, and give you an inside look at the limitless creativity and skill the Transmute team applies to our work.

We Built a Demo Application for the DID Actor API

As part of our work to support the W3C CCG VC API, we built a demo application with some interesting experimental features to support testing the use of “did:key” with both linked data proof and JSON web token-based verifiable credentials.

The W3C CCG VC APIs are a set of RESTful API definitions conforming with the OpenAPI 3.0 Specification (formerly known as Swagger) for the roles of Issuer, Verifier, and Holder as described in the Verifiable Credential Data Model specification. These APIs provide a standard set of interfaces by which interoperability may be tested and verified by various parties who leverage Verifiable Credentials (VCs).

https://api.did.actor/

https://github.com/transmute-industries/api.did.actor
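
For orientation, the sketch below shows roughly what calling an issuer endpoint of such a VC API can look like from C#. The /credentials/issue path, the payload shape, and the option values are my assumptions based on a reading of the VC API drafts, not a documented contract of api.did.actor, so treat this purely as an illustration.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class VcApiIssueSketch
{
    public static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("https://api.did.actor") };

        // Minimal unsigned credential plus issuance options; field names follow the
        // VC data model, but the exact options an issuer accepts are assumptions here.
        var body = new
        {
            credential = new Dictionary<string, object>
            {
                ["@context"] = new[] { "https://www.w3.org/2018/credentials/v1" },
                ["type"] = new[] { "VerifiableCredential" },
                ["issuer"] = "did:key:z6Mk-example-issuer",          // placeholder DID
                ["issuanceDate"] = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"),
                ["credentialSubject"] = new Dictionary<string, object> { ["id"] = "did:key:z6Mk-example-subject" }
            },
            options = new { type = "JsonWebSignature2020" }
        };

        // Assumed issuer endpoint per the VC API drafts.
        var response = await http.PostAsJsonAsync("/api/credentials/issue", body);
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}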

We’ve used this project to test a number of things, and we wanted to share the experiments we found successful and some of the ones that didn’t work out so well.

DID Actor API Results

One of the original motivations for the demo was to be able to test interoperability with https://github.com/digitalbazaar libraries for linked data proof based verifiable credentials and “did:key”.

Unfortunately, due to some quirks with Vercel and Next.js and webpack we were only able to test interop locally.

However, we still found it very useful to be able to create an API facade between our implementations, where one small line of code could switch out the implementation used to produce and consume verifiable credentials.

This is particularly useful when debugging JSON-LD Canonicalization or Document Loader issues, which can arise from differences in implementation of Linked Data Proofs.

Another feature we have been testing is mnemonic and HD path support for key generation. For example, given a mnemonic, key type and HD path, there will be a corresponding public/private key pair. In the case of Ethereum and Bitcoin, these keys can be used to derive addresses that support cryptocurrencies.
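
A small self-contained sketch of the property being described: the same (mnemonic, key type, HD path) tuple always selects the same key material, which is what makes this pattern convenient for reproducible tests. Note this is deliberately not BIP-39/BIP-32; the hash-based derivation below is a stand-in to keep the example runnable.

using System;
using System.Security.Cryptography;
using System.Text;

public static class DeterministicKeySketch
{
    // Stand-in derivation: hashes the inputs together so the mapping is deterministic.
    // A real implementation would use BIP-39 (mnemonic -> seed) and BIP-32 (seed + path -> key).
    public static byte[] MockDeriveSeed(string mnemonic, string keyType, string hdPath) =>
        SHA256.HashData(Encoding.UTF8.GetBytes($"{mnemonic}|{keyType}|{hdPath}"));

    public static void Main()
    {
        var a = MockDeriveSeed("test test test junk sphere vague", "ed25519", "m/44'/0'/0'/0/0");
        var b = MockDeriveSeed("test test test junk sphere vague", "ed25519", "m/44'/0'/0'/0/0");

        // Same inputs always produce the same key material (and hence the same did:key or address).
        Console.WriteLine(Convert.ToHexString(a) == Convert.ToHexString(b)); // True
    }
}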

We have explored using these patterns to support issuing and verifying verifiable credentials.

In particular, using JsonWebSignature2020 to issue Linked Data Proofs and using VC-JWT to issue compact JWT based verifiable credentials from the same “did:key”. This proves that the key representations support both established and emerging cryptographic suites.

Finally, we have used the demo to communicate to implementers of the VC API one way to conform to the interoperability test suites, and we have added Postman integration tests that demonstrate interoperability.

Orie Steele, Transmute’s CTO and Co-Founder, has managed security concerns for startups and publicly traded companies, building secure web applications in Finance, Energy, and Healthcare.

Connect with Orie on LinkedIn, Twitter, and GitHub

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

Experiments in Interoperability with the DID Actor API was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Doc Searls Weblog

Remembering Bill Swindaman

That was Bill Swindaman on the last day I saw him: June 2nd of this year, at a gathering of friends from the best community I’ve ever known: a real one, of friends living in a place. The place was called Oxbow, and it was a collection of mismatched houses on a short dirt road that skirted a pond off Mt. Sinai Road, north of Chapel Hill, in North Carolina. I lived or hung out there, and with friends who called themselves Oxbovines, from 1974 until I moved to Silicon Valley in 1985. After that, we got together once a year at a beach house until the early ’90s. One thing that kept me coming back was a letter Bill wrote called “Where the hell is Searls?”

Since then we’ve all stayed good friends and in touch. And sometimes rogue planets in our little solar system, such as I, would come through town and we’d get together. That’s what happened in June. It was great to see everybody, but there was bummage in the house, because we all knew Bill had ALS: an awful and fatal disease, diagnosed six months earlier. It was a disease that had claimed David Hodskins, my business partner and a friend for nearly as long, just three months earlier. (I remember David, and some of our business adventures, here.)

At Oxbow, Bill and I would often play one-on-one basketball (he was bigger and better), and shoot the shit about everything. I remember one story he told about his dad, a family doctor in Toledo, Ohio. When his car caught fire on the road for no obvious reason, Doctor Swindaman calmly pulled over to the side, got out, lit a cigarette, and calmly watched the thing burn down. Bill too was known for his calm and love of irony. On one of his long cross-country trips alone, Bill sent me a postcard from Tijuana. All he wrote was “Where the liquor flowed, and the dice were hot.” (Those less elderly than Bill and I might not know the reference.)

As I recall, Bill went to Wittenberg College and got his masters in (I thought it was urban planning, but have heard it was something else) at UNC Chapel Hill. After that, he had a series of jobs that he used to accumulate savings for funding long trips. His last job, as I recall, was working for UNC doing something or other that doesn’t matter as much as the other vocation he took up in recent decades: nature photography. You can see his work at BillSwindamanPhotography.com. Here he is, on the job:

I recognize so many places when I look through his photographs—Death Valley, Comb Ridge, Monument Valley, Arches, Canyonlands—less because I’ve been there than because I’ve shot them from commercial flights zooming by overhead. I envied Bill’s ability to get out and explore these places, while I was too committed to other things. I also respected the quality of Bill’s work. It was, and remains, primo.

We did talk for a while about his maybe coming up to New York, from which we could go out to tidelands and photograph wildlife and other outdoor scenes. I lacked gear and skills to equal Bill’s, but it would have been fun. Alas, as John Lennon said, life is what happens when you’re busy making plans.

When I saw Bill in June, I asked if he was still in shape to keep shooting. He said no, and that he had already sold off all his gear. Yet he was still in good humor, considering the obvious fact that he was done with pretty much everything other than persisting at being his good self.

This morning came an email I hadn’t expected this soon. It was from Jackie Strouble, the wild dear with whom he hooked up back in our Oxbow days. With her permission, I’ll later add here what she wrote. Meantime I hope she doesn’t mind my sharing the photo above, which came with her letter.

And I just hope Bill’s memory for us Oxbovines will be a blessing to the rest of the world.

Wednesday, 30. November 2022

Virtual Democracy

The unreasonable effectiveness of shared null results:

or, if open science were Wordle we might usually get the answer on the first line. This is a blog that compares science (the open kind) to cheating at Wordle. But not in a bad way. This is a blog, so I’ll run the narrative first. I have included links to further readings from The Open … Continue reading The unreasonable effectiveness of shared null results:

Monday, 28. November 2022

Jon Udell

Autonomy, packet size, friction, fanout, and velocity

Nostalgia is a dangerous drug and it’s always risky to wallow in it. So those of us who fondly remember the early blogosphere, and now want to draw parallels to the fediverse, should do so carefully. But we do want to learn from history.

Here’s one way to compare five generations of social software along the five dimensions named in the title of this post.

             Autonomy   Packet Size   Friction   Fanout   Velocity
Usenet       medium     high          medium     medium   low
Blogosphere  high       high          high       low      low
Facebook     low        high          low        medium   high
Twitter      low        low           low        high     high
Fediverse    high       medium        high       medium   medium

These are squishy categories, but I think they surface key distinctions. Many of us who were active in the blogosphere of the early 2000s enjoyed a high level of autonomy. Our RSS readers were our Internet dashboards. We loaded them with a curated mix of official and individual voices. There were no limits on the size of packets exchanged in this network. You could write one short paragraph or a 10,000-word essay. Networking wasn’t frictionless because blog posts did mostly feel like essays, and because comments didn’t yet exist. To comment on my blog post you’d write your own blog post linking to it.

That friction limited the degree to which a post would fan out through the network, and the velocity of its propagation. The architecture of high friction, low fanout, and low velocity was a winning combination for a while. In that environment I felt connected but not over-connected, informed but not overloaded.

Twitter flipped things around completely. It wasn’t just the loss of autonomy as ads and algos took over. With packets capped at 140 characters, and tweets potentially seen immediately by everyone, friction went nearly to zero. The architecture of low friction created an addictive experience and enabled powerful effects. But it wasn’t conducive to healthy discourse.

The fediverse can, perhaps, strike a balance. Humans didn’t evolve to thrive in frictionless social networks with high fanout and velocity, and arguably we shouldn’t. We did evolve in networks governed by Dunbar’s number, and our online networks should respect that limit. We need less friction within communities of knowledge and practice, more friction between them. We want messages to fan out pervasively and rapidly within communities, but less so between them.

We’re at an extraordinary inflection point right now. Will the fediverse enable us to strike the right balance? I think it has the right architectural ingredients to land where I’ve (speculatively) placed it in that table. High autonomy. As little friction as necessary, but not too little. As much fanout and velocity as necessary, but not too much. Nobody knows how things will turn out, predictions are futile, behavior is emergent, but I am on the edge of my seat watching this all unfold.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/


Damien Bod

Sharing Microsoft Graph permissions and solution Azure App Registrations

This article looks at using Microsoft Graph permissions in Azure App registrations and whether you should use Graph in specific Azure App registrations types and if it is ok to expose these with other scopes and roles. Is it ok to expose Graph permissions in public Azure App registrations?

Using Graph with public applications

As a rule, I do not allow any Graph permissions to be assigned to an Azure App registration used by a public application, apart from the delegated User.Read permission or others like it. The problem with sharing a Graph permission in this way is that you allow the full permission to be shared and not just the specific use case from your application. For example, if I expose the User.ReadWrite.All permission in a public Azure App registration, anyone who acquires an access token for this permission can do everything that permission allows. My application might only need to allow a user to update the firstname and lastname properties. With this token, I could list out all users on this tenant and share this in an unfriendly way, or create and delete users.
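
To make the rule concrete, a public client (one with no secret or certificate) should only ever be requesting a narrow delegated scope such as User.Read. The following MSAL.NET sketch shows that shape; the client ID, tenant ID and redirect URI are placeholders. Anything broader, like User.ReadWrite.All, belongs behind a confidential client instead.

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class PublicClientSketch
{
    public static async Task Main()
    {
        // Public client: no secret or certificate, so keep the requested delegated scope minimal.
        var app = PublicClientApplicationBuilder
            .Create("<public-app-client-id>")                                // placeholder
            .WithAuthority("https://login.microsoftonline.com/<tenant-id>")  // placeholder
            .WithRedirectUri("http://localhost")
            .Build();

        var result = await app
            .AcquireTokenInteractive(new[] { "User.Read" })
            .ExecuteAsync();

        Console.WriteLine($"Delegated token acquired for: {string.Join(' ', result.Scopes)}");
    }
}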

If you find a public Azure App registration with Graph permissions other than the User.Read scope, you have probably found a possible security attack or an elevated-permissions problem.

A better setup is to separate the Graph permissions into a different Azure App registration and only allow this to be used in a confidential client.

Using Graph with application App Registrations

When using Graph application permissions, I create a specific Azure App registration to expose the permissions which require a secret or certificate to acquire the access token. You should not share this with an Azure App registration used to expose different APIs. If only using this inside a specific confidential Web client which is not used to expose further APIs, then it is ok to share the Graph permissions and the confidential client definitions in the same Azure App registration.
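
For illustration, this is the shape of the client credentials flow against such a Graph-only App registration: a secret (or, better, a certificate) is required, and the .default scope requests the application permissions granted to that registration. The IDs and secret below are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class GraphAppOnlyTokenSketch
{
    public static async Task Main()
    {
        // Confidential client tied to the dedicated Graph App registration.
        var app = ConfidentialClientApplicationBuilder
            .Create("<graph-app-registration-client-id>")                    // placeholder
            .WithClientSecret("<client-secret>")                             // placeholder
            .WithAuthority("https://login.microsoftonline.com/<tenant-id>")  // placeholder
            .Build();

        // .default resolves to the application permissions granted to this registration.
        var result = await app
            .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
            .ExecuteAsync();

        Console.WriteLine($"App-only Graph token acquired, expires {result.ExpiresOn}");
    }
}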

I never mix application permissions and delegated permissions in the same Azure App registration. If you find yourself doing this, it is probably the result of an architecture error and should be reviewed.

Separating Graph permissions and solution App registrations

When using an Azure App registration to use a specific API role or scope, do not use the same one to expose a Graph permission. This is because the intent of the Azure App registration is to do whatever is allowed in the API exposed using it. If you also allow Graph permissions, the App registration can be used for two different purposes. The client using it could execute an elevated privilege attack, as the same secret/certificate can be used to acquire both of the permissions, or even worse, no secret at all is required to get an access token for the Graph API.

You might not want to allow the client using the API to have full access to everything exposed with the Graph permission. You probably only want to expose a subset. This can be solved by using an API which validates the specific request in a trusted environment; the Graph permissions from a separate Azure App registration can then be used with a confidential client, and only the subset of the Graph features is exposed to the third party. A secret or certificate must be required to get the access token for the Graph permission. It is not possible for the third party application to get an access token for the Graph permission.
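
A minimal sketch of that facade idea: the third party never receives a Graph token at all, it calls your trusted API, and the API uses an app-only Graph token (acquired with the confidential client of the separate Graph App registration) to perform only the narrow update it is meant to allow. The service shape and parameter names are assumptions for illustration.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class UserNameUpdateService
{
    private readonly HttpClient _http;

    public UserNameUpdateService(HttpClient http) => _http = http;

    // Only givenName/surname are ever sent to Graph, even though the underlying
    // application permission (e.g. User.ReadWrite.All) would allow far more.
    public async Task UpdateNameAsync(string graphAccessToken, string userObjectId,
        string firstName, string lastName)
    {
        var request = new HttpRequestMessage(HttpMethod.Patch,
            $"https://graph.microsoft.com/v1.0/users/{userObjectId}")
        {
            Content = JsonContent.Create(new { givenName = firstName, surname = lastName })
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", graphAccessToken);

        var response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}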

What about sharing Graph permissions and third party applications

Sharing Graph permissions with third party clients is sharing trust. I would avoid this as much as possible and use a zero trust strategy. Only expose or share what is required. How secure are the secrets or certificates in the other solutions? How easy is it to rotate the secrets?

Links:

https://learn.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-6.0

https://learn.microsoft.com/en-us/graph/tutorials

https://learn.microsoft.com/en-us/training/modules/msgraph-dotnet-core-show-user-emails/

https://developer.microsoft.com/en-us/graph/

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://learn.microsoft.com/en-us/graph/permissions-reference

https://posts.specterops.io/azure-privilege-escalation-via-azure-api-permissions-abuse-74aee1006f48

https://learn.microsoft.com/en-us/azure/active-directory/develop/secure-least-privileged-access

Using Microsoft Graph API in ASP.NET Core

Wednesday, 23. November 2022

Rebecca Rachmany

L1s Are Doomed Unless….

Reflections following Cardano Summit 2022

With Ethereum now functionally a censored network, many have been asking what L1 they should build on. With that in mind (plus an invitation from the magical Catalyst4Climate group), I attended the two-day Cardano Summit in Lausanne, and asked everyone I could the key question: “Does it work?”

Cardano Community: Less Shill, More Goodwill

Cardano has taken a slower route towards development of its solution, with more than 160 academic papers and the establishment of a line of studies at Edinburgh University to create and maintain a Decentralization Index. They eschew the philosophy of “move fast and break things” for a more methodical and reliable approach to building software.

The community surrounding Cardano reflects this philosophy. The dress was business casual rather than cypherpunk haphazard. Shilling was at a minimum, with Impact-related projects having equal weight with DeFi and technical sessions. The Cardano crowd is intentional about not speaking about other protocols as well as avoiding discussions of the ADA price. In short, compared to other crypto communities, the Cardano crowd came off as a group of mature professionals rather than revolutionaries or slick marketers.

On the one hand, the more mature feel was welcome, reflected in several serious projects with large bodies such as the UN and government institutions. On the downside, it seemed there were fewer truly cutting edge innovations. While Cardano hosts a number of delightful NFT projects, the more practical teams took the floor most often. Self Sovereign Identity projects were abundant, which gives Cardano a huge boost compared to crypto projects who assert that NFTs will somehow suffice as Verifiable Credentials. Live demos included using DID and VC for swapping business contacts and a working PoS system based on a Raspberry Pi.

What, No Elephant?

Remarkably, in the middle of the coldest crypto winter, there was no mention of any of the difficulties that have befallen the rest of the market. With Ethereum becoming a censored network, DAOs under legal scrutiny, Solana failing to live up to its promises, and centralized exchanges crashing the market, now is the perfect time to be speaking about lessons learned.

Yet, there was almost complete silence about the news in the industry and how to avoid problems such as:

NFTs becoming “mutable” by OpenSea.
Majority of Ethereum blocks censoring private transactions.
Prosecution of DAOs under US regulations.
Under-collateralization of assets on exchanges.
Implications of the recent ruling against LBRY, declaring their tokens to be securities.

Of course, in the decentralized world, these are sticky problems. If you aren’t a centralized entity, it’s easy to argue that it’s not up to you who opens an NFT marketplace, where and how the validators choose to manage their operations, or how the regulators treat the projects built on your Layer 1.

Whether it’s up to you or not, however, these are real threats. Ignoring them won’t make them go away.

Regulation and Validators

Like every blockchain, Cardano does have a group working with regulators. They do have consortia of their Stake Pool Operators, and Delegators can stake on the nodes they consider most reliable. Unfortunately, this seems inadequate.

While 60% of the nodes in Cardano are on bare metal, 90% of the Relayers are hosted on the major cloud provider. Plus, it’s not clear what “bare metal” means in terms of jurisdiction. If someone has a fully-owned bare metal rack in their basement, and the government of their country says you can’t run Tornado Cash through your node, what do you do? We know the answer to that one.

Today, every L1 should be thinking very carefully about the companies hosting their nodes, as well as the jurisdictions where the servers are located.

Which brings us to regulation. It should be obvious at this point that the regulatory bodies are not friends of private blockchains. It should also be obvious why that is. Nation-states have no interest in a bunch of technologists offering an alternative to the nation’s sovereign monetary system.

You Are Not an Operating System

Cardano, like the other Layer 1s, sees itself as a kind of operating system. As such, L1s don’t govern what is built on them. They don’t decide on the direction of the community, how to prioritize traffic, or what projects to encourage on the platform. Except for the fact that, over time, it’s inevitable that L1s end up prioritizing DeFi because that’s where the money is. Also, individuals inevitably end up complying with laws, no matter how unjust they are, because the consequences of civil disobedience are too high for most people.

Why stand up for LBRY or Ooki? You’ve got your own problems and the regulators haven’t come after you yet. Why not let your nodes live on Alchemy or Infura? You’ll just annoy your community and limit who can be a host. Why speak out about centralized exchanges or disappearing NFTs on OpenSea? Where else will you get that volume of transaction fees?

All of these are examples of the trap of decentralization, free markets, and an industry based on individual game theory.

For any individual in the game, it’s easiest to host a node or relayer on Alchemy or Google. For any individual in the game, it’s easiest to manage their onboarding and offboarding in whatever legitimate or illegitimate way that works for them. For any individual project it’s easiest to call their token a “utility” or “governance” token based on whatever is legal at any given moment. For any individual in the game, it’s easiest to use the popular tools even if they aren’t as decentralized.

For the industry as a whole, these individual game theoretic choices lead to illegalization, censorship, and CBDCs. These individual choices are leading to a bunch of useless L1s and Bitcoin as the last chain standing.

The Bitcoin Maxis may have been right all along, but even the Bitcoin Maxis are hurting these days. The inability of L1s to deliver on their promises damages everyone.

Decentralized Governance: Can It Get Better

Governance is how individuals do things together. From that perspective, DAOs have been another huge failing of the Web3 industry. Hard Forks and Rage Quits are the opposite of democracy. Rather than a Web3 movement, what we have is a lot of noise and competition.

In the next few blogs, I’ll be exploring how Layer 1s and the industry as a whole might approach some of these problems. I mean. Hopefully. Hopefully, I’ll have some practical and constructive ideas over the next few weeks. At this point, I have a lot more questions than answers, and I have a lot more faith than evidence that we as an industry can resolve these issues.

To be perfectly honest, I’m about as happy as a Bitcoin Maxi that I’ve been right all along about these things. Right but Rekt is not a good look. We can do better.

Tuesday, 22. November 2022

Phil Windleys Technometria

A Healthcare Utopia of Rules

Summary: Verifiable credentials have a number of use cases in healthcare. Using them can reduce the administrative burden that people experience at the hands of the bureaucracies that inevitably develop.

I have a medical condition that requires that I get blood tests every three months. And, having recently changed jobs, my insurance, and thus the set of acceptable labs, changed recently. I know that this specific problem is very US-centric, but bear with me, I think the problems that I'll describe, and the architectures that lead to them, are more general than my specific situation.

My doctor sees me every 6 months, and so gives me two lab orders each time. Last week, I showed up at Revere Health's lab. They were happy to take my insurance, but not the lab order. They needed a date on it. So, I called my doctor and they said they'd fax over an order to the lab. We tried that three times but the lab never got it. So my doctor emailed it to me. The lab wouldn't take the electronic lab order from my phone, wouldn't let me email it to them (citing privacy issues with non-secure email), and couldn't let me print it there. I ended up driving to the UPS Store to print it, then return to the lab. Ugh.

This story is a perfect illustration of what David Graeber calls the Utopia of Rules. Designers of administrative systems do the imaginative work of defining processes, policies, and rules. But, as I wrote in Authentic Digital Relationships:

Because of the systematic imbalance of power that administrative ... systems create, administrators can afford to be lazy. To the administrator, everyone is structurally the same, being fit into the same schema. This is efficient because they can afford to ignore all the qualities that make people unique and concentrate on just their business. Meanwhile subjects are left to perform the "interpretive labor," as Graeber calls it, of understanding the system, what it allows or doesn't, and how it can be bent to accomplish their goals. Subjects have few tools for managing these relationships because each one is a little different from the others, not only technically, but procedurally as well. There is no common protocol or user experience [from one administrative system to the next].

The lab order format my doctor gave me was accepted just fine at Intermountain Health Care's labs. But Revere Health had different rules. I was forced to adapt to their rules, being subject to their administration.

Bureaucracies are often made functional by the people at the front line making exceptions or cutting corners. In my case no exceptions were made. They were polite, but ultimately uncaring and felt no responsibility to help me solve the problem. This is an example of the "interpretive labor" borne by the subjects of any administrative system.

Centralizing the system—such as having one national healthcare system—could solve my problem because the format for the order and the communication between entities could be streamlined. You can also solve the problem by defining cross-organization schema and protocols. My choice, as you might guess, would be a solution based on verifiable credentials—whether or not the healthcare system is centralized. Verifiable credentials offer a few benefits:

Verifiable credentials can solve the communication problem so that everyone in the system gets authentic data.
Because the credentials are issued to me, I can be a trustworthy conduit between the doctor and the lab.
Verifiable credentials allow an interoperable solution with several vendors.
The tools, software, and techniques for verifiable credentials are well understood.

Verifiable credentials don't solve the problem of the lab being able to understand the doctor's order or the order having all the right data. That is a governance problem outside the realm of technology. But because we've narrowed the problem to defining the schema for a given localized set of doctors, labs, pharmacies, and other health-care providers, it might be tractable.
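
To make that concrete, here is a sketch of what a lab-order credential might carry once a local set of doctors and labs agrees on a schema. The "LabOrderCredential" type and its fields are hypothetical, invented for illustration; a real credential would also carry an @context and a cryptographic proof.

using System;
using System.Text.Json;

public static class LabOrderCredentialSketch
{
    public static void Main()
    {
        // Hypothetical, unsigned lab-order credential body (field names are illustrative only).
        var credential = new
        {
            type = new[] { "VerifiableCredential", "LabOrderCredential" },
            issuer = "did:example:dr-jones-clinic",
            issuanceDate = DateTime.UtcNow.ToString("yyyy-MM-dd"),
            credentialSubject = new
            {
                id = "did:example:patient",
                test = "Comprehensive metabolic panel",
                orderedOn = "2022-11-18",
                validThrough = "2023-02-18"
            }
        };

        // The doctor issues and signs this, the patient carries it, and any lab can verify the
        // signature and read the same agreed-upon fields: no faxes or reprints required.
        Console.WriteLine(JsonSerializer.Serialize(credential,
            new JsonSerializerOptions { WriteIndented = true }));
    }
}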

Verifiable credentials are a no-brainer for solving problems in health care. Interestingly, many health care use cases already use the patient as the conduit for transferring data between providers. But they are stuck in a paper world because many of the solutions that have been proposed for solving it lead to central systems that require near-universal buy-in to work. Protocol-based solutions are the antidote to that and, fortunately, they're available now.

Photo Credit: Blood Draw from Linnaea Mallette (CC0 1.0)

Tags: identity agents ssi autonomic+identity


MyDigitalFootprint

Chaos and the abyss

This read describes the space between chaos and the abyss, where we find ourselves when we allow machines to make decisions without safeguarding collective criticism or realise they can change our minds.  

-----

There is a reality that we are not forced to recognise our collective ethical and own moral bias without others. However, these biases are the basis of our decision-making, so asking a machine to "take an unelected position of trust" and make a decision on our collective behalf creates a space we should explore as we move from human criticism to machine control.


Machines are making decisions.   

Automation is incredibly powerful and useful, and we continue to learn to reduce bias in automated decision-making by exploring data sets and understanding the outcomes by testing for bias.  As we continue testing, iterating and learning about using past data for future decisions, we expose many of our human frailties and faults.  


The decisions we ask machines to make today are easily compared to where we think we are going.  However, I cannot confirm or write down my cognitive biases today (out of the 180-plus available), and if asked which biases would be the same tomorrow, I would be unable to tell you.  Therefore, I am even less convinced that, as a team, we can agree on our team biases, as these will change as a new sun rises because we all have eaten our own choice of food, have different biology, chemistry and bacteria, and have had divergent experiences since the last sunrise.

AI and Hypocrisy

Hypocrisy is the practice of engaging in the same behaviour or activity for which one criticises another. Our past and present actions can be different, but because of our past, we have learnt, and change has happened, but that does not mean we should not be able to call out when someone is making the same mistakes.  


A defence used by those called out is to cry “hypocrisy”.  Human rights issues and football spring to mind. How can you judge when you did the same? As Brits, we are responsible for some of the worst abuses of power and wrong thinking, but we are changing; I agree that it is not fast or far enough. However, the point here is that humans learn and can call something out to other humans if they are making the same mistakes.  I accept we are not very good at either.


However, contemporary discourse is that if your past is flawed, you are not empowered to be critical of others.  However, if we ever believe that we are beyond criticism, fault or learning, surely we become delusional and unable to see the wrong we are doing, believing we are more moral or ethical.  But what about machines? When machines make a biased decision, who is there to be critical or will the AI call hypocrisy? 


I struggle with the idea that company values, purpose and culture are good predictors of the decision-making processes that we have in place, because of bias.  A good human culture can exist, but it is one of learning, and that does not mean the machine that powers the organisation is learning in the same direction.


This thinking about hypocrisy and culture creates gaps, voids and chasms filled with chaos between individuals' integrity, the integrity of the wider team/company, and what decisions we ask machines (automation) to make.  This is not new, and such gaps have been studied by many philosophers and political scientists since Aristotle.


So how do we enable a machine to make a decision based on data but then allow other machines to see the inconsistency and defend hypocrisy?  This is the space between chaos and the abyss.

So how do we enable a machine to make a decision based on data but then allow other machines to see the inconsistency and defend hypocrisy? 
Being explainable is not the problem.

Explainable is in fashion in AI; however, events of 2020 to 2022 have presented rich pickings, from COVID lockdowns, the cost of living crisis, football World Cup hosting and COP28, to say that explainable is not much use when decisions impact humans.  Equally, making an algorithm or the code behind it explainable does not solve the problem.  Neural networks are accurate but un-interpretable, whereas Decision Trees are interpretable but inaccurate.  I can explain an outcome, but that does not mean I can predict it.  We can explain mass shootings, but that is of little value or comfort to those who lost a loved one.

Jumping into the abyss.

Machines with bias in decision-making are not new, nor is explainable AI thinking.  However, when we (humans) are criticised or called out, we often become defensive and don't change. Will machines be different?  Calling out that someone is wrong does not persuade them to follow a different path. Calling out a decision made by a machine is not going to change the machine's decision-making process.


Here is the final jump. How to change someone's mind is a super article from Manfred F. R. Kets de Vries at Insead.   It sets down the Scheherazade method of seven steps to change a person’s mind.  Now, when a machine learns that it is easier to change a human mind by following these steps, are we in danger of ceding the last part of independent thinking to the machine? We will not see the problems, as our minds have been aligned to someone else's decision-making methods (calling out loyalty).  It is why we need a void between morals and ethics and should celebrate unethical morals and immoral ethics, as they show us the tensions and gaps in our thinking and allow us to learn.

 


This is a nod to Brian Cork, based on his comment on a previous article on Fear. Thank you.

Monday, 21. November 2022

Damien Bod

Use multiple Azure AD access tokens in an ASP.NET Core API

This article shows how to set up an ASP.NET Core application to authorize multiple access tokens from different Azure AD App registrations. Each endpoint can only accept a single AAD access token, and it is important that the other access tokens do not work on the incorrect API. ASP.NET Core Schemes and Policies are used to force the delegated authorization.

Code: https://github.com/damienbod/AspNetCoreApiAuthMultiIdentityProvider/tree/main/AadMultiApis

Setup

Azure Active Directory is used to implement the identity provider and is responsible for creating the access tokens. Two Azure App registrations are used to implement the APIs. One Azure App registration is created for application clients and accepts tokens from multiple tenants which have the correct roles claims. A secret is required to get an access token for the App registration. Any tenant could use this endpoint, so extra authorization may be required in case clients share the secrets, certificates or something like this. It would probably make sense to validate the tenant used to acquire the access token. The delegated Azure App registration is implemented as a single tenant and can only be used in the second API.

A test application implemented as a server rendered UI confidential client is used to send the API calls. The application can acquire both types of access tokens and send the tokens to the correct endpoints. (Or incorrect endpoints for testing)

Implement the API

The AddMicrosoftIdentityWebApi method from the Microsoft.Identity.Web NuGet package is used to implement the AAD MSAL clients. Separate ASP.NET Core schemes are used for the different access tokens. The different tokens use different configurations and also use separate ASP.NET Core policies for forcing the authorization and the specific claims.

services.AddAuthentication(Consts.AAD_MULTI_SCHEME)
    .AddMicrosoftIdentityWebApi(Configuration, "AzureADMultiApi", Consts.AAD_MULTI_SCHEME);

services.AddAuthentication(Consts.AAD_SINGLE_SCHEME)
    .AddMicrosoftIdentityWebApi(Configuration, "AzureADSingleApi", Consts.AAD_SINGLE_SCHEME);

The Azure AD configurations are added to the app settings and are specific for each client.

"AzureADMultiApi": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "967925d5-87ea-46e6-b0eb-1223c001fd77" }, "AzureADSingleApi": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "b2a09168-54e2-4bc4-af92-a710a64ef1fa" },

The AddAuthorization method is used to add the ASP.NET Core policies, which are specific to the schemes and not global. The policies validate the required claims. If using a multi-tenant App registration, you might need to validate the tenant used to acquire the access token as well. Access tokens for both clients must be acquired using a secret (a certificate would also be ok, type 2). This is important for multi-tenant App registrations if allowing any enterprise application to use this.

services.AddAuthorization(policies =>
{
    policies.AddPolicy(Consts.MUTLI_AAD_POLICY, p =>
    {
        // application access token
        // "roles": [
        //     "application-api-role"
        // ],
        // "azp": "967925d5-87ea-46e6-b0eb-1223c001fd77",
        p.RequireClaim("azp", "967925d5-87ea-46e6-b0eb-1223c001fd77");
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });

    policies.AddPolicy(Consts.SINGLE_AAD_POLICY, p =>
    {
        // delegated access token => "scp": "access_as_user",
        // "azp": "46d2f651-813a-4b5c-8a43-63abcb4f692c",
        p.RequireClaim("azp", "46d2f651-813a-4b5c-8a43-63abcb4f692c");
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });
});

An authorization filter is added to the AddControllers method which requires one of our defined schemes.

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .AddAuthenticationSchemes(
            Consts.AAD_MULTI_SCHEME,
            Consts.AAD_SINGLE_SCHEME)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

The middleware is set up like any ASP.NET Core application using authentication. You could add a RequireAuthorization method to the MapControllers method as well.

app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});

The Controllers used to expose the endpoints use both the scheme and the policy to validate the access token. It is important that the correct access token only works for the correct endpoint. Controllers support authorization using attributes in a developer-friendly way. You can develop secure endpoints really efficiently using this.

[Authorize(AuthenticationSchemes = Consts.AAD_MULTI_SCHEME, Policy = Consts.MUTLI_AAD_POLICY)]
[Route("api/[controller]")]
public class MultiController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "data 1 from the multi api", "data 2 from multi api" };
    }
}

Test Confidential Client

To test the API, I created an ASP.NET Core Razor page application which authenticates using a confidential OpenID Connect code flow client. The application acquires the different access tokens using services. The single tenant service gets a delegated access token to access the single tenant API.

using Microsoft.Identity.Web;
using System.Net.Http.Headers;

namespace RazorAzureAD;

public class SingleTenantApiService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IConfiguration _configuration;

    public SingleTenantApiService(IHttpClientFactory clientFactory,
        ITokenAcquisition tokenAcquisition,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
        _configuration = configuration;
    }

    public async Task<List<string>> GetApiDataAsync(bool testIncorrectMultiEndpoint = false)
    {
        var client = _clientFactory.CreateClient();

        var scope = _configuration["AzureADSingleApi:ScopeForAccessToken"];
        var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { scope });

        client.BaseAddress = new Uri(_configuration["AzureADSingleApi:ApiBaseAddress"]);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        HttpResponseMessage response;
        if (testIncorrectMultiEndpoint)
        {
            response = await client.GetAsync("api/Multi"); // must fail
        }
        else
        {
            response = await client.GetAsync("api/Single");
        }

        if (response.IsSuccessStatusCode)
        {
            var responseContent = await response.Content.ReadAsStringAsync();
            var data = System.Text.Json.JsonSerializer.Deserialize<List<string>>(responseContent);
            if (data != null)
                return data;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

The MultiTenantApplicationApiService class is used to get an application access token using the OAuth client credentials flow. This requires a secret (or certificate) and no user is involved in this flow.

using Microsoft.Identity.Client;
using System.Net.Http.Headers;

namespace RazorAzureAD;

public class MultiTenantApplicationApiService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly IConfiguration _configuration;

    public MultiTenantApplicationApiService(IHttpClientFactory clientFactory,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _configuration = configuration;
    }

    public async Task<List<string>> GetApiDataAsync(bool testIncorrectMultiEndpoint = false)
    {
        // 1. Create the client credentials client
        var app = ConfidentialClientApplicationBuilder
            .Create(_configuration["AzureADMultiApi:ClientId"])
            .WithClientSecret(_configuration["AzureADMultiApi:ClientSecret"])
            .WithAuthority(_configuration["AzureADMultiApi:Authority"])
            .Build();

        var scopes = new[] { _configuration["AzureADMultiApi:Scope"] }; // default scope

        // 2. Get access token
        var authResult = await app.AcquireTokenForClient(scopes)
            .ExecuteAsync();

        // 3. Use the access token to call the API
        var client = _clientFactory.CreateClient();

        client.BaseAddress = new Uri(_configuration["AzureADMultiApi:ApiBaseAddress"]);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
        client.DefaultRequestHeaders.Accept
            .Add(new MediaTypeWithQualityHeaderValue("application/json"));

        HttpResponseMessage response;
        if (testIncorrectMultiEndpoint)
        {
            response = await client.GetAsync("api/Single"); // must fail
        }
        else
        {
            response = await client.GetAsync("api/Multi");
        }

        if (response.IsSuccessStatusCode)
        {
            Console.WriteLine(await response.Content.ReadAsStringAsync());

            var responseContent = await response.Content.ReadAsStringAsync();
            var data = System.Text.Json.JsonSerializer.Deserialize<List<string>>(responseContent);
            if (data != null)
                return data;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

When the application is run, the APIs can be tested and validated. If running this locally, you need to set up your own Azure App registrations and change the configuration.
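For orientation, here is a minimal sketch of how the two services above might be wired up in Program.cs. The configuration section names ("AzureAd", "AzureADSingleApi") and the exact Microsoft.Identity.Web calls are assumptions for illustration; the repository linked below is the reference.

using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

var builder = WebApplication.CreateBuilder(args);

// Sign-in plus delegated token acquisition for the single tenant API (section names are assumptions)
builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAd")
    .EnableTokenAcquisitionToCallDownstreamApi(
        new[] { builder.Configuration["AzureADSingleApi:ScopeForAccessToken"] })
    .AddInMemoryTokenCaches();

// HttpClient factory and the two API services from this post
builder.Services.AddHttpClient();
builder.Services.AddScoped<SingleTenantApiService>();
builder.Services.AddScoped<MultiTenantApplicationApiService>();

builder.Services.AddRazorPages().AddMicrosoftIdentityUI();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();

app.Run();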

Links

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core?view=aspnetcore-6.0

Sunday, 20. November 2022

Werdmüller on Medium

What is a globalist?

An overloaded term often used as a racist dog-whistle. Continue reading on Medium »

An overloaded term often used as a racist dog-whistle.

Continue reading on Medium »

Friday, 18. November 2022

Jon Udell

Debuggable explanations

I’ve been reviewing Greg Wilson’s current book project, Software Design in Python. Like the earlier JavaScript-based Software Design by Example it’s a guided tour of tools, techniques, and components common to many software systems: testing frameworks, parsers, virtual machines, debuggers. Each chapter of each of these books shows how to build the simplest possible working … Continue reading Debugg

I’ve been reviewing Greg Wilson’s current book project, Software Design in Python. Like the earlier JavaScript-based Software Design by Example it’s a guided tour of tools, techniques, and components common to many software systems: testing frameworks, parsers, virtual machines, debuggers. Each chapter of each of these books shows how to build the simplest possible working version of one of these things.

Though I’ve used this stuff for most of my life, I’ve never studied it formally. How does an interpreter work? The chapter on interpreters explains the basic mechanism using a mixture of prose and code. When I read the chapter I can sort of understand what’s happening, but I’m not great at mental simulation of running code. I need to run the code in a debugger, set breakpoints, step through execution, and watch variables change. Then it sinks in.

The GitHub repo for the book includes all the text and all the code. I’d like to put them side-by-side, so that as I read the narrative I can run and debug the code that’s being described. Here’s how I’m doing that in VSCode.

This is pretty good! But it wasn’t dead simple to get there. In a clone of the repo, the steps included:

1. Find the HTML file for chapter 3.
2. Install a VSCode extension to preview HTML.
3. Find the code for chapter 3.
4. Adjust the code to not require command-line arguments.
5. Arrange the text and code in side-by-side panes.

Though it’s all doable, the activation threshold is high enough to thwart my best intention of repeating the steps for every chapter.

Whether in VSCode or another IDE or some other kind of app, what would be the best way to lower that activation threshold?

Thursday, 17. November 2022

MyDigitalFootprint

We need more unethical morals!

I explore ethics, morals and integrity in the context of decision-making. This piece explores the void between ethics and morals and why we need this place to exist because it allows us to explore the reason why unethical morals force us to new thinking. The difference in definition between Ethics and Morals Definition: Ethics are guiding principles of conduct of an individual or group. Defi
I explore ethics, morals and integrity in the context of decision-making. This piece explores the void between ethics and morals and why we need this place to exist because it allows us to explore the reason why unethical morals force us to new thinking.

The difference in definition between Ethics and Morals

Definition: Ethics are guiding principles of conduct of an individual or group.

Definition: Morals are principles on which one’s judgments of right and wrong are based.

Therefore an important difference between ethics and morals is that ethics are relatively uniform within a group, whereas morals are individual and heavily influenced by local culture and beliefs.


How to change someone's mind is a super article from Manfred F. R. Kets de Vries at Insead.  It is important because if we want more people in the moral group, we need those with different ethics to change. And if we want to update our morals, we need to be able to change our ethics.

In Manfred’s article, I believe that ethics and morals become mixed up between what a writer means and what the reader understands. It is very confusing when a narrative uses the ideals of ethics and morals but only applies one of the words.

What we are aware of is that there is a dynamic relationship between what an individual thinks and what a group thinks and how both an individual can affect group thinking and group thinking affects individuals. #Morals are based on our own #principles - which are influenced by our society's values. Those values create unique #ethics, which are "rules" a particular societal system gives to those in that place.

Because values are not principles, and rules are not values, it creates friction which we see as moral and ethical voids - spaces where you can have unethical morals. Rules, how we control ethical behaviour is always a laggard, which means those at the forefront of change see the void and gaps between the perceived new values and the old rules.

From a linear world viewpoint, we understand this dynamic relationship between ethics and morals as they both challenge and refine each other for the betterment (we aspire to) of both, but there is a time lag. 

However, simple language and diagrams create this rather naive viewpoint because what we witness is that our morals are challenged by society, yet we often need a movement before our collective ethics create better moral outcomes and new rules. 

Therefore we have a time-lagged gap or void which prevents the full alignment of morals and ethics.  I never realised how important this time lag is. Without a time lag, which creates separation, we would never improve but rather get trapped in negative and unproductive ways because we all accept this moral or ethical behaviour as the best we can do.  

 It is in this void we find all the complexity of modern society and politics. 

This gap represents the tensions that boards and leadership teams have to face as they find that they need to find a strategy and route between the new thinking and the old rules. 

COVID19, climate change, sustainability, and poverty are just some examples that have made us more aware of the gaps between different nations' rules, constraints and resources which provide their ethics and different mental model for a morally better society.  If it is or not is yet to be determined.

Therefore I believe we should be asking for more “unethical moral”, and more “immoral ethical” dilemmas; as this will focus our attention on the void between our existing rule set and the new rules we need if we are to make a world more inclusive, accepting and transparent and less biased, cruel and prejudiced. 

I repeat …. we need more unethical morals!


Wednesday, 16. November 2022

Mike Jones: self-issued

OpenID Presentations at November 2022 OpenID Workshop and IIW

I gave the following presentation at the Monday, November 14, 2022 OpenID Workshop at VISA: OpenID Connect Working Group (PowerPoint) (PDF) I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, November 15, 2022: Introduction to OpenID Connect (PowerPoint) (PDF)

I gave the following presentation at the Monday, November 14, 2022 OpenID Workshop at VISA:

OpenID Connect Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, November 15, 2022:

Introduction to OpenID Connect (PowerPoint) (PDF)

Tuesday, 15. November 2022

Heres Tom with the Weather

Mastodon Discovery

Making notes is helpful when reading and running unfamiliar code for the first time. I usually start with happy paths. Here’s some notes I made while learning about Mastodon account search and discovery. It’s really cool to poke around the code that so many people are using every day to find each other. When you search on an account identifier on Mastodon, your browser makes a request to you

Making notes is helpful when reading and running unfamiliar code for the first time. I usually start with happy paths. Here’s some notes I made while learning about Mastodon account search and discovery. It’s really cool to poke around the code that so many people are using every day to find each other.

When you search on an account identifier on Mastodon, your browser makes a request to your Mastodon instance:

/api/v2/search?q=%40herestomwiththeweather%40mastodon.social&resolve=true&limit=5

The resolve=true parameter tells your Mastodon instance to make a webfinger request to the target Mastodon instance if necessary. The search controller makes a call to the SearchService

def search_results
  SearchService.new.call(
    params[:q],
    current_account,
    limit_param(RESULTS_LIMIT),
    search_params.merge(resolve: truthy_param?(:resolve), exclude_unreviewed: truthy_param?(:exclude_unreviewed))
  )
end

and since resolve=true, SearchService makes a call to the ResolveAccountService

if options[:resolve]
  ResolveAccountService.new.call(query)

The purpose of ResolveAccountService is to “Find or create an account record for a remote user” and return an account object to the search controller. It includes WebfingerHelper which is a trivial module with just one one-line method named webfinger!()

module WebfingerHelper
  def webfinger!(uri)
    Webfinger.new(uri).perform
  end
end

This method returns a webfinger object. Rather than call it directly, ResolveAccountService invokes process_webfinger! which invokes it and then asks the returned webfinger object’s subject method for its username and domain and makes them instance variables of the service object.

def process_webfinger!(uri)
  @webfinger = webfinger!("acct:#{uri}")
  confirmed_username, confirmed_domain = split_acct(@webfinger.subject)

  if confirmed_username.casecmp(@username).zero? && confirmed_domain.casecmp(@domain).zero?
    @username = confirmed_username
    @domain = confirmed_domain
    return
  end

If the Mastodon instance does not already know about this account, ResolveAccountService invokes fetch_account! which calls the ActivityPub::FetchRemoteAccountService which inherits from ActivityPub::FetchRemoteActorService

@account = ActivityPub::FetchRemoteAccountService.new.call(actor_url, suppress_errors: @options[:suppress_errors])

The actor_url will look something like

https://mastodon.social/users/herestomwiththeweather

The ActivityPub::FetchRemoteActorService passes the actor_url parameter to fetch_resource to receive a json response for the remote account.

@json = begin
  if prefetched_body.nil?
    fetch_resource(uri, id)
  else

The response includes a lot of information including name, summary, publicKey, images and urls to fetch more information like followers and following.

Finally, the ActivityPub::FetchRemoteActorService calls the ActivityPub::ProcessAccountService, passing it the json response.

ActivityPub::ProcessAccountService.new.call(@username, @domain, @json, only_key: only_key, verified_webfinger: !only_key)

If the Mastodon instance does not know about the account, ActivityPub::ProcessAccountService invokes create_account and update_account to save the username, domain and all the associated urls from the json response to a new account record in the database.

create_account if @account.nil?
update_account

I have several questions about how following others works and will probably look at that soon. I may start out by reading A highly opinionated guide to learning about ActivityPub which I bookmarked a while ago.


Identity Woman

Identosphere

Infominer and I have been publishing the weekly Identosphere Newsletter and Summary of all that is happening Self-Sovereign and Decentralized Identity. These are ways you can contribute a one time end of the year contribution: Or subscribe with a contribution every month this button will take you to a page where you can pick a […] The post Identosphere appeared first on Identity Woman.

Infominer and I have been publishing the weekly Identosphere Newsletter and Summary of all that is happening Self-Sovereign and Decentralized Identity. These are ways you can contribute a one time end of the year contribution: Or subscribe with a contribution every month this button will take you to a page where you can pick a […]

The post Identosphere appeared first on Identity Woman.


Damien Bod

Create Azure App Registration for API using Powershell

This post shows how to setup an Azure App registration using Powershell for an application access token using an application role. In Azure roles are used for App only, scopes are used for delegated flows (Or roles for users). The Azure App registration uses OAuth2 with the client credentials flow. A secret and a client_id […]

This post shows how to set up an Azure App registration using Powershell for an application access token using an application role. In Azure, roles are used for App only, scopes are used for delegated flows (or roles for users). The Azure App registration uses OAuth2 with the client credentials flow. A secret and a client_id are used.

Code: https://github.com/damienbod/GrpcAzureAppServiceAppAuth

The AzureAD Powershell module is used to create a new Azure App registration. The New-AzureADApplication function creates a new Azure App registration with a secret on the defined tenant from the authentication flow.

$Guid = New-Guid
$startDate = Get-Date
$allowPassthroughUsers = $false

$PasswordCredential = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordCredential
$PasswordCredential.StartDate = $startDate
$PasswordCredential.EndDate = $startDate.AddYears(20)
$PasswordCredential.KeyId = $Guid
$PasswordCredential.Value = ([System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes(($Guid))))

if(!($myApp = Get-AzureADApplication -Filter "DisplayName eq '$($appName)'" -ErrorAction SilentlyContinue))
{
    $myApp = New-AzureADApplication -DisplayName $appName -PasswordCredentials $PasswordCredential -AllowPassthroughUsers $allowPassthroughUsers
    # Write-Host $myApp | Out-String | ConvertFrom-Json
}

We need an App Role and this is exposed in the access token. The App Role can be created using this function. See this link for the original:

https://stackoverflow.com/questions/51651889/how-to-add-app-roles-under-manifest-in-azure-active-directory-using-powershell-s

This Azure App registration is created for an Application client, ie no user. If creating this for delegated flows, the AllowedMemberTypes would need to be changed and no secret/certificate is required. A scope would probably be used as well instead of a Role, but this depends on the solution authorization architecture.

function CreateApplicationAppRole([string] $Name, [string] $Description)
{
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]
    $appRole.AllowedMemberTypes.Add("Application");
    $appRole.DisplayName = $Name
    $appRole.Id = New-Guid
    $appRole.IsEnabled = $true
    $appRole.Description = $Description
    $appRole.Value = $Name;

    return $appRole
}

The Set-AzureADApplication function adds the roles to the Azure App registration.

$appRoles = $myApp.AppRoles
$newRole = CreateApplicationAppRole -Name $appRoleName -Description $appRoleName
$appRoles.Add($newRole)

Set-AzureADApplication -ObjectId $myApp.ObjectId -AppRoles $appRoles
$appRoleId = $newRole.Id

The App Role can now be used and exposed in the access token. This is added using the RequiredResourceAccess

$req = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$acc1 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList $appRoleId,"Role"
$req.ResourceAccess = $acc1
$req.ResourceAppId = $myApp.AppId

Set-AzureADApplication -ObjectId $myApp.ObjectId -RequiredResourceAccess $req
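For context (not part of the scripts), an API protected with this App registration could then authorize callers using the roles claim in the application access token. A rough ASP.NET Core sketch, where the role name AccessAsApplication and the "AzureAd" configuration section are assumptions:

// Program.cs of the protected API (sketch only)
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Validate access tokens issued for the App registration created above
builder.Services.AddMicrosoftIdentityWebApiAuthentication(builder.Configuration, "AzureAd");

// Require the app role created by the script (role name is an assumption)
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AppAccess", policy => policy.RequireRole("AccessAsApplication"));
});

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

app.MapGet("/api/data", () => new[] { "data1", "data2" })
    .RequireAuthorization("AppAccess");

app.Run();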

For some unknown reason, the Powershell AzureAD module adds default Oauth2Permissions to the Azure App registration. This can be disabled. We have no scopes as this is an application client, i.e. AppOnly.

$Scopes = New-Object System.Collections.Generic.List[Microsoft.Open.AzureAD.Model.OAuth2Permission]
$Scope = $myApp.Oauth2Permissions | Where-Object { $_.Value -eq "user_impersonation" }
$Scope.IsEnabled = $false
$Scopes.Add($Scope)

Set-AzureADApplication -ObjectId $myApp.ObjectID -Oauth2Permissions $Scopes

The API IdentifierUris is added to the Azure App registration.

$apiUrl = "api://" + $myApp.AppId
$IdentifierUris = New-Object System.Collections.Generic.List[string]
$IdentifierUris.Add($apiUrl)

Set-AzureADApplication -ObjectId $myApp.ObjectID -IdentifierUris $IdentifierUris

A service principal can be created for the Azure App Registration. This can then be used in the enterprise applications.

$createdServicePrincipal = New-AzureADServicePrincipal -AccountEnabled $true -AppId $myApp.AppId -DisplayName $appName

Graph application roles can also be added to the Azure App Registration if required. I usually separate this into a different Azure App registration.

$req = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$acc1 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "62a82d76-70ea-41e2-9197-370581804d09","Role"
$acc2 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "5b567255-7703-4780-807c-7be8301ae99b","Role"
$acc3 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "9e3f62cf-ca93-4989-b6ce-bf83c28f9fe8","Role"
$acc4 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "741f803b-c850-494e-b5df-cde7c675a1ca","Role"
$acc5 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "df021288-bdef-4463-88db-98f22de89214","Role"
$acc6 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "7ab1d382-f21e-4acd-a863-ba3e13f7da61","Role"
$acc7 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "19dbc75e-c2e2-444c-a770-ec69d8559fc7","Role"

$req.ResourceAccess = $acc1,$acc2,$acc3,$acc4,$acc5,$acc6,$acc7
$req.ResourceAppId = "00000003-0000-0000-c000-000000000000"

Add the item to the Azure App registration.

##################################
### Create a RequiredResourceAccess list
##################################
$requiredResourceAccessItems = New-Object System.Collections.Generic.List[Microsoft.Open.AzureAD.Model.RequiredResourceAccess]
$requiredResourceAccessItems.Add($req)

Set-AzureADApplication -ObjectId $myApp.ObjectId -RequiredResourceAccess $requiredResourceAccessItems

Access tokens version 2 are used. This needs to be set in the manifest. This property is called accessTokenAcceptedVersion in the portal and requestedAccessTokenVersion in Graph. Set the accessTokenAcceptedVersion to version 2 access tokens.

$Body = @{
    api = @{
        requestedAccessTokenVersion = 2
    }
} | ConvertTo-Json -Compress | ConvertTo-Json

$null = az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/applications/$($appRegObjectId)" --body $Body --headers "Content-Type=application/json"

Running the scripts

Install the required Azure AD Powershell module:

Install-Module AzureAD -AllowClobber

Connect to the correct tenant using an account which has the privileges to create App registrations:

Connect-AzureAD -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244

Run the script replacing the tenantId and your Azure App Registration name:

.\app-reg-application-cc.ps1 -tenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appName AppRegTest

Login Azure CLI and Update access token version

az login --tenant 5698af84-5720-4ff0-bdc3-9d9195314244

You can read the id from the manifest (ObjectId): “id”: “ba62783f-fb6b-48a9-ba51-f56355e84926”

.\update-access-token-version2.ps1 -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appRegObjectId ba62783f-fb6b-48a9-ba51-f56355e84926

Create new secret

You can read the id from the manifest (ObjectId): “id”: “ba62783f-fb6b-48a9-ba51-f56355e84926”

.\app-new-secrets.ps1 -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appRegObjectId ba62783f-fb6b-48a9-ba51-f56355e84926

See the full scripts in the Github repository accompanying this blog.

Notes

We are using secrets in this demo. You can also update to using certificates instead of secrets, which then use client assertions on the access token request. I normally store the secret or certificate in an Azure Key Vault and use this directly from the application and services. I would normally add this to DevOps and create a single script for all infrastructure.
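As a rough illustration of the certificate variant, MSAL's confidential client can be built with a certificate instead of a secret, in which case it sends a signed client assertion on the token request. The certificate loading and the configuration values below are placeholders; in practice the certificate would come from Azure Key Vault.

using System.Security.Cryptography.X509Certificates;
using Microsoft.Identity.Client;

// Placeholder: load the certificate (normally retrieved from Azure Key Vault)
var certificate = new X509Certificate2("app-reg-client.pfx", "pfx-password");

// Client and API identifiers are assumptions matching the App registration created above
var app = ConfidentialClientApplicationBuilder
    .Create("client-id-of-the-app-registration")
    .WithCertificate(certificate) // MSAL builds the client assertion from this certificate
    .WithAuthority("https://login.microsoftonline.com/5698af84-5720-4ff0-bdc3-9d9195314244")
    .Build();

// Request an app-only access token using the .default scope of the API
var result = await app.AcquireTokenForClient(new[] { "api://your-api-client-id/.default" })
    .ExecuteAsync();

Console.WriteLine(result.AccessToken);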

After the App registrations have been created, you need to grant consent before these can be used.

Links:

https://docs.microsoft.com/en-us/powershell/module/azuread/new-azureadapplication?view=azureadps-2.0

https://stackoverflow.com/questions/42164581/how-to-configure-a-new-azure-ad-application-through-powershell

Monday, 14. November 2022

Aaron Parecki

How to Build a Restreaming Server with a Raspberry Pi

First of all, what is a restreaming server? Sometimes you want to livestream video from a device like an ATEM Mini or OBS to multiple destinations. Many devices and software like this will let you push video to just one RTMP destination at a time.

First of all, what is a restreaming server? Sometimes you want to livestream video from a device like an ATEM Mini or OBS to multiple destinations. Many devices and software like this will let you push video to just one RTMP destination at a time.

To stream to multiple destinations, you need to use a restream server so that the device can stream the one stream to the server, and the restream server pushes to multiple destinations.

There are paid services you can use to restream for you, restream.io being one of the most well-known ones. This is a great solution too, and if you're just looking for a quick way to restream to multiple platforms, this is the easiest way to go.

(Note: the YoloBox does let you publish to multiple destinations with no extra setup, so if you're using that device, you can just ignore this whole tutorial!)

But, sometimes you want to do this yourself, avoid paying third party services, or you might need to restream to local devices that something on the public internet can't reach. That's what the rest of this blog post is about. I'll show you how to set up a Raspberry Pi (or really any other Linux computer) to restream your livestreams to multiple destinations.

Getting Started

Before we get into the details, you'll need to start with a Raspberry Pi or an Ubuntu server that's already set up and running. That should be as easy as following the official setup guide for Raspberry Pi. Also note that if you're comfortable with SSH, you can install the Raspberry Pi OS "Lite" without the desktop environment.

Install nginx

The magic that makes this all work is the nginx web server with a custom module that supports RTMP.

Install nginx and the rtmp module by running the following commands on the command line, either over SSH or by opening the Terminal on the desktop.

sudo apt update
sudo apt install nginx libnginx-mod-rtmp

Configure your Restream Server

Now we need to set up an RTMP server in nginx. Edit the main nginx config file:

sudo nano /etc/nginx/nginx.conf

Scroll all the way to the bottom and copy the below text into the config file:

rtmp {
    server {
        listen 1935;

        application restream {
            # Enable livestreaming
            live on;

            # Disable recording
            record off;

            # Allow only this machine to play back the stream
            allow play 127.0.0.1;
            deny play all;

            # Push your stream to one or more RTMP destinations
            push rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX-XXXX;
            push rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX-XXXX;
            push rtmp://live-cdg.twitch.tv/app/live_XXXXXXXX;
        }
    }
}

Save this file by pressing ctrl+X, then Y, then enter.

To test the config file for errors, type:

sudo nginx -t

If that worked, you can reload nginx to make your changes take effect:

sudo nginx -s reload

Start Streaming

At this point the Raspberry Pi is ready! You can now stream to this box and it will send a copy to each configured destination! Any stream key will work, and you can stream using any sort of device or software like OBS. You'll need to find the IP address of the Raspberry Pi which you can do by typing:

hostname -I

To stream to the Raspberry Pi, use the RTMP URL: rtmp://YOUR_IP_ADDRESS/restream and anything as the stream key.

NOTE: The way this is set up, anyone can stream to this if they know the IP address since it will accept any stream key. If you want to restrict this, you can use a long random string in place of restream in the config. For example:

... application restream-ABCD-EFGH-IJKL-MNOP { ...

Now you are ready to stream! Start pushing an RTMP feed to your server and it will send a copy to each of your configured destinations!

If you want to stream to this from an ATEM Mini, you'll need to create a custom streaming config and load that in to the software control app. You can use this XML generator to create the configuration.

Fill out your server's IP address, and use either restream or restream-ABCD-EFGH-IJKL-MNOP as the path.

Further Reading

Now that you have the nginx RTMP module installed, there's a lot more things you can do! You can read the official documentation for a full list of other commands you can use. You can do things like:

- Record a local copy of anything your RTMP server receives
- Create multiple resolutions of your video and push different resolutions to different platforms
- Create a vertical cropped version of your feed and send it to another RTMP destination
- Notify external services when you start or stop streaming

Leave a comment below if you're interested in a tutorial on any of these other interesting features!

Sunday, 13. November 2022

Foss & Crafts

52: Terminal Phase: a space shooter that runs in your terminal!

Terminal Phase! A space shooter that runs in your terminal!!! Who wouldn't be excited about that? Not to mention that it shows off cool features of Spritely Goblins... like time travel: Well, Terminal Phase has been Christine's fun/downtime project for the last few years, and one of the bonuses you can get for the reward tiers of donating to this podcast! And yet we've never done an episode

Terminal Phase! A space shooter that runs in your terminal!!! Who wouldn't be excited about that?

Not to mention that it shows off cool features of Spritely Goblins... like time travel:

Well, Terminal Phase has been Christine's fun/downtime project for the last few years, and one of the bonuses you can get for the reward tiers of donating to this podcast! And yet we've never done an episode about it! Given that a brand new (and much easier to install) release of Terminal Phase is coming out really soon, we figured now's a good time to talk about it!

Links:

Terminal Phase!

Blogposts about Terminal Phase!

- Project announcement
- 1.0 announcement
- Time travel debugging in Spritely Goblins, previewed through Terminal Phase
- 1.1 announcement
- Terminal Phase was in a Polish "Linux magazine"!
- FOSS & Crafts' Patreon
- Spritely Goblins, a project of the Spritely Institute
- Blast off! A tour of Spritely Institute's tech
- Racket
- Guile
- Guix
- 8sync (Goblins predecessor). See also the Mudsync video, on that very page.
- Raart
- Spacewar! A bit about how Spacewar led to UNICS (later renamed Unix)

Saturday, 12. November 2022

Werdmüller on Medium

The fediverse is happening. Here’s how to take part

A guide to getting started with Mastodon Continue reading on Medium »

A guide to getting started with Mastodon

Continue reading on Medium »

Friday, 11. November 2022

Phil Windleys Technometria

Verifying Twitter

Summary: Elon has started a monthly $8 fee for verified twitter users. A verifiable credential-based solution would be a better way to increase trust in the platform by authenticating users as real people without attendant privacy concerns. This thread from Matthew Yglesias concerning Twitter's decision to charge for the blue verification checkmark got me thinking. Matthew makes some good

Summary: Elon has started a monthly $8 fee for verified twitter users. A verifiable credential-based solution would be a better way to increase trust in the platform by authenticating users as real people without attendant privacy concerns.

This thread from Matthew Yglesias concerning Twitter's decision to charge for the blue verification checkmark got me thinking. Matthew makes some good points:

- Pseudonymity has value and offers protection to people who might not otherwise feel free to post if Twitter required real names like Facebook tries to.
- Verification tells the reader that the account is run by a person.
- There's value to readers in knowing the real name and professional affiliation of some accounts.

Importantly, the primary value accrues to the reader, not the tweeter. So, charging the tweeter $20/month (now $8) is charging the wrong party. In fact, more than the reader, the platform itself realizes the most value from verification because it can make the platform more trustworthy. Twitter will make more money if the verification system can help people understand the provenance of tweets because ads will become more valuable.

Since no one asked me, I thought I'd offer a suggestion on how to do this right. You won't be surprised that my solution uses verifiable credentials.

First, Twitter needs to make being verified worthwhile to the largest number of users possible. Maybe that means that tweets from unverified accounts are delayed or limited in some way. There are lots of options and some A/B testing would probably show what incentives work best.

Second, pick a handful (five springs to mind) of initial credential issuers that Twitter will trust and define the credential schema they'd prefer. Companies like Onfido can already do this. It wouldn't be hard for others like Equifax, ID.me, and GLEIF to issue credentials based on the "real person" or "real company" verifications they're already doing. These credential issuers could charge whatever the market would bear. Twitter might get some of this money.

Last, Twitter allows anyone with a "real person" credential from one of these credential issuers to verify their profile. The base verification would be for the holder to use zero-knowledge proof to prove they are a person or legal entity. If they choose, the credential holder might want to prove their real name and professional affiliation, but that wouldn't be required. Verifying these credentials as part of the Twitter profile would be relatively easy for Twitter to implement.

Twitter would have to decide what to do about accounts that are not real people or legal entities. Some of these bots have value. Maybe there's a separate verification process for these that requires that the holder of the bot account prove who they are to Twitter so they can be held responsible for their bot's behavior.

You might be worried that the verified person would sell their verification or verify multiple accounts. There are a number of ways to mitigate this. I explained some of this in Transferable Accounts Putting Passengers at Risk.

Real person verification using verifiable credentials has a number of advantages.

First, Twitter never knows anyone's real name unless that person chooses to reveal it. This means that Twitter can't be forced to reveal it to someone else. They just know they're a real person. This saves Twitter from being put in that position and building infrastructure and teams to deal with it. Yes, the police, for example, could determine who issued the Twitter Real Person credential and subpoena them, but that's the business these companies are in, so presumably they already have processes for doing this.

Another nice perk from this is that Twitter jump starts an ecosystem for real person credentials that might have uses somewhere else. This has the side benefit of making fraud less likely, since the more a person relies on a credential the less likely they are to use it for fraudulent purposes.

A big advantage is that Twitter can now give people peace of mind that the accounts they're following are controlled by real people. Tools might let people adjust their feed accordingly so they see more tweets by real people. Twitter also can give advertisers comfort that their engagement numbers are closer to reality. Twitter makes more money.

Yglesias says:

Charging power users for features that most people don’t need or want makes perfect sense.

But verification isn’t a power user feature, it’s a terrible implementation of what’s supposed to be a feature for the everyday user. It should help newbies figure out what’s going on.

Verifiable credentials can help make Twitter a more trustworthy place by providing authentic data about people and companies creating accounts—and do it better than Twitter's current system. I'm pretty sure Twitter won't. Elon seems adamant that they are going to charge to get the blue checkmark. But, I can dream.

Bonus Link: John Bull's Twitter thread on Trust Thermoclines

Notes

Photo Credit: tree-nature-branch-bird-flower-wildlife-867763-pxhere.com from Unknown (CC0)

Tags: twitter identity verifiable+credentials

Wednesday, 09. November 2022

MyDigitalFootprint

Why does fear fill the gap?

In that moment of panic, we forget to reflect on what type of gap this is and why it has been filled with fear. Leadership is a recognition of the gaps, that not all gaps are the same and how to prevent fear being the first response. Image source: Susan David, Ph.D (love her work) Fear and Gaps  Fear is an unpleasant emotion caused by the immediate or expected threat of danger, pain, or
In that moment of panic, we forget to reflect on what type of gap this is and why it has been filled with fear. Leadership is a recognition of the gaps, that not all gaps are the same and how to prevent fear being the first response.

Image source: Susan David, Ph.D (love her work)

Fear and Gaps 

Fear is an unpleasant emotion caused by the immediate or expected threat of danger, pain, or harm, but it is also so much more. We know fear sells in terms of marketing. We understand FOMO (fear of missing out) and the fear of failure (FOF) are significant drivers. We are aware that fear produces a unique reaction in the body, driven from the gut ahead of the brain (Antonio Damasio's research). Fear is a stimulus, but it is subjective, and how fear is perceived is different for everyone. Different types of fear spread at different speeds. Brands and the media use fear to create headlines and force change. COP27 and the climate change agenda are not averse to utilising this insight.

We should be aware that fear drives many decisions we make.  Therefore, the interesting question becomes, “Why is it that fear fills the gaps between what we know/ believe and the unknown/ uncertain?” A further question on the link between fear and trust is worth exploring, but it is beyond this post. 

Why is it that fear tends to be the feeling that fills the gaps between what we know/ believe and the unknown/ uncertain?
Peak Human Purpose 

In the Peak Paradox framework, one of the peaks to optimise for is “Peak Human Purpose”. Each of the four purposes of the framework exists at the exclusion of anything else - purity at an extreme. At peak human purpose, we are here (humans on earth) to escape ultimate death by reproducing as much as possible with the broadest community we can. We also have to adapt as fast as possible. We have to meet our chemistry requirements to stay alive for as long as possible to adapt and reproduce at the expense of anything else. These form the most basic definitions of life with clarity and purity.

Whilst the purity of all the peak purposes might be controversial (even to myself), saying the purity of human purpose is chemistry/ biology does not go down very well; it is too simplistic. However, this is a model for framing thinking, so please go with it as it needs to be pure, and every other human purpose has conflicts with someone.  The point here is that when we realise that fear and/or anxiety fills gaps, we understand that we are optimising for something deeply human - life, survival, and thriving.   

The point here is that when we realise that fear and/or anxiety fills gaps, we understand that we are optimising for something deeply human.

I am often questioned why I put “Human Purpose” as one of the peaks, and it is because of some deeply human traits of life that influence our decisions and create conflicts and tensions within us and our groups. Fear and anxiety are some of these feelings. I am neither an expert, counsellor nor theorist in any human behaviour or psychology; however, that does not stop me from realising how much chemistry, biology and experience influence our decision-making, whether we want to realise it or not. These disciplines are currently undervalued, as is the fact that fear is baked into some systems of management, control and performance.

Different gaps have different fears

“Different gaps have different fears” sounds obvious, but it is not, as the only gap that fear is filling is the one in front of us right now. Fear steps in when there is a gap in our knowledge/information. We hear a noise we cannot explain; someone is walking behind us; or there is an imagined scenario. Fear is not limited to our personal lives and is an active component in the world of our daily commercial activities and actions. Geoffrey Moore's book “Crossing the Chasm” is a book that sells by creating fear. The book is much more important than that and is a fantastic insight into adoption - however, until you knew about the chasm, you did not fear it.

Without a doubt, life would be easier if only one gap and one fear existed. However, we have to be content with the fact that every moment we are dealing with different gaps (leadership, innovation, knowledge, information, experience) and different fears that come from the gaps we have right now and those we imagine in the future.

What do we imagine are the boundaries?

The image below illustrates two different gaps. The original thought was captured in an ESG session, so I am using this as an example. For some, ESG is a gap (think fear) between what is unknown and known and how we cross it (the gap on the left below). For others, ESG is a gap (think fear) between what is known and the action they need to take (the gap on the right below). At the recent Sibos conference*, where this thinking emerged, there was a debate about the role data has in ESG and whether data can ever be useful because there are two gaps. A good question to ask is: into which gap does your ESG data fall? This removes the ideas about for and against, and forces you to determine which camp your data represents!


The fear in each gap is real and, depending on the persona and team, determines how you will cross your fear-filled gap. However, this model, whilst “obvious”, might not actually be a good representation of the issue.

Just Fix It

We (humans) tend to have an obsession with fixing things.  The majority who will read this realise that we cannot fix wicked problems, usually because we cannot understand them. Even our systems thinking and explanation have limits because of the boundary interconnection problem.  (the unknown consequences of your action on another system and vice-versa).  

When you hear someone bark an order to “fix something”, we know it resembles the old order of control, dogma and hierarchy.  There was a belief that in a more simple time, leadership should/could just fix everything. However, not everything is easy to fix (humans, climate, economy, inflation) and not all problems have solutions (my favourite is pure maths), and the majority of what we face every day requires us to walk past the ideal of a “quick fix” and wrestle with the complexity of wicked systems.   

We should not ignore the power and pull of a “fix it” mental model.  We all tend to do it as a first untrained response when faced with a fear, gap or problem. Because of the “fix-it” mental model, our gaps are mostly filled with fear because there is no immediate fix within our experience. Our early experience in education and business teaches us we can “fix it” by defining the problem and building a solution. To do this, we have to accept we must ignore critical facts that add complexity to the actual problem or lack experience to see such layers. 

Management, leadership and MBA courses all spend a lot of time teaching us to ask, “what is the problem to be solved?” Usually, so we can determine if the pre-packed solution on offer aligns with the problem at hand.  When we know the problem, we can write a plan.  This fix-it provides a perception that we know how to cross the two big chasms filled with fears. This is not true because a “Fix it” mentality and language ignore struggles, dilemmas, compromises and paradoxes.   
We, humans and our environment, are not a problem to be fixed but something to be crafted, shaped and moulded over time.


We, humans, and our environment are not a problem to be fixed but something to be crafted, shaped and moulded over time.

The purpose of the Peak Paradox framework is to embrace “fix it” thinking for simple things but then build a model that allows us to picture and imagine many of the complications of a dynamic interactive, interdependent system of systems.  Wicked problems.  

A single independent system can be modelled and might be fixable. A system of systems cannot be modelled or fixed as there are unknowns at the boundaries between the systems presenting unknown effects, dependencies and consequences.  Not all humans have the same motivations, incentives or desires - a core identification in the peak paradox framework.

Moving on from the “fix-it” model

When we take out the “fix-it” thinking and redraw the two chasms, we observe that it is critical that executives are able to cope with leading in uncertainty, and that management remains flexible so it can continually adapt the plan. I would argue this is why reporting and compliance boards fail and don't work for any stakeholder, as they focus on the wrong model - “fix-it”.

How does fear align with the Peak Paradox framework and thinking on sustainability? 

“Fix-It” thinking defines problems and solves them, or ends up with gaps filled with fears - the information gap. I see too many executive boards fixated on reporting, gaps and compliance, translating leadership into an instruction to fix it as there is a divergence between the plan and the actual. Leadership is surely about bringing vision, belief and skills to help bridge the gap, not by barking instructions to fix it but by providing the next base camp on an uncharted map. Stakeholders trust management to be flexible and adaptable so they can cope with change; the plan is there to change, not to manage to. The delta (the gap between plan and actual) is not to be feared but embraced and understood. Dashboards are a leadership killer.

Dashboards are a leadership killer 

Humans and the earth (terra firma, water and climate) need to find a sustainable compromise and are the same in this respect. We don’t need to be fixed, and we don’t need fixing. What we do need is a map.  

COP27: “Fix-It” or map?

The obsession with 1.5 degrees is, to me, a problem. The earth will not end, but yes, it will definitely become far more difficult for humans on earth to thrive rather than just survive. The changes in temperature will affect some humans in some regions far more. Our favoured economic model is also likely to be tested to breaking point.

I am a massive supporter of the SDGs and of change, but my issue is that 1.5 degrees is a solution to a problem that we have not fully defined and depends on the “we can fix-it” mental model; the same with NetZero and ESG data. These are solutions to problems we don't understand. These are wicked problems that should not be boiled down to a single number that no one can do anything about. 1.5 degrees is not a vision, a north star or a plan - it is a target. It should be the first camp on a long journey. However, fear fills the gaps and drives a model that drives more fear into making the gaps bigger.

Perhaps we should step back to agree and determine what fears and gaps we are talking about.    


Thank you. 
* At Sibos 22 (the big banking, payments and finance conference), I had the joy of meeting a flock of old friends and meeting IRL some new ones I had only ever interacted with digitally. During one of the #Innotribe ESG sessions, it was good to interactively pen ideas based on the content as I sat with Yael Rozencwajg, which has become this post.  


Tuesday, 08. November 2022

Werdmüller on Medium

It’s time to be heard

Voting is not a right to take lightly. Continue reading on Medium »

Voting is not a right to take lightly.

Continue reading on Medium »

Sunday, 06. November 2022

Doc Searls Weblog

On Twitter 2.0

So far the experience of using Twitter under Musk is pretty much unchanged. Same goes for Facebook. Yes, there is a lot of hand-wringing, and the stock market hates Meta (the corporate parent to which Facebook gave birth); but so far the experience of using both is pretty much unchanged. This is aside from the fact […]

So far the experience of using Twitter under Musk is pretty much unchanged. Same goes for Facebook.

Yes, there is a lot of hand-wringing, and the stock market hates Meta (the corporate parent to which Facebook gave birth); but so far the experience of using both is pretty much unchanged.

This is aside from the fact that the two services are run by feudal overlords with crazy obsessions and not much feel for roads they both pave and ride.

As for Meta (and its Reality Labs division), virtual and augmented realities (VR and AR) via headgear are today where “Ginger” was before she became the Segway: promising a vast horizontal market that won’t materialize because its utilities are too narrow.

VR/AR will, like the Segway, find some niche uses. For Segway, it was warehouses, cops, and tourism. For VR/AR headgear it will be gaming, medicine, and hookups in meta-space. The porn possibilities are beyond immense.

As for business, both Twitter and Facebook will continue to be hit by a decline in personalized advertising and possibly a return to the old-fashioned non-tracking-based kind, which the industry has mostly forgotten how to do. But it will press on.

Not much discussed, but a real possibility is that advertising overall will at least partially collapse. This has been coming for a long time. (I’ve been predicting it at least since 2008.) First, there is near-zero (and widespread negative) demand for advertising on the receiving end. Second, Apple is doing a good job of working for its customers by providing ways to turn off or thwart the tracking that aims most ads online. And Apple, while not a monopoly, is pretty damn huge.

It may also help to remember that trees don’t grow to the sky. There is a life cycle for companies just as there is for living things.

Thursday, 03. November 2022

Identity Woman

Thoughtful Biometrics Workshop

It is happening again. February 13-17th. Registration will open soon. Two things happened today that solidified the decision to move forward with the event. I had a great conversation with a government of Canada official who started his career as an officer at a border crossing and is currently inside the government on modernization on […] The post Thoughtful Biometrics Workshop appeared first

It is happening again. February 13-17th. Registration will open soon. Two things happened today that solidified the decision to move forward with the event. I had a great conversation with a government of Canada official who started his career as an officer at a border crossing and is currently inside the government on modernization on […]

The post Thoughtful Biometrics Workshop appeared first on Identity Woman.


Phil Windleys Technometria

The Nature of Identity

Summary: This post is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023. Cogito, ergo sum. —René Descartes The Peace of Westphalia, which ended the Thirty Years' War in 1648, created the concept of Westphalian sovereignty: the principle of international law that "each state has sovereignty over its territory and domestic affairs

Summary: This post is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023.

Cogito, ergo sum.
—René Descartes

The Peace of Westphalia, which ended the Thirty Years' War in 1648, created the concept of Westphalian sovereignty: the principle of international law that "each state has sovereignty over its territory and domestic affairs, to the exclusion of all external powers, on the principle of non-interference in another country's domestic affairs, and that each state (no matter how large or small) is equal in international law."1

The ensuing century saw many of these states begin civil registration for their citizens, in an effort to turn their sovereignty over territory into governance over the people living in those lands. These registrations, from which our modern system of birth certificates springs, became the basis for personal identity and legal identity in a way that conflated these two concepts.

Birth certificates are a source of legal identity and a proof of citizenship, and thus the basis for individual identity in most countries. Civil registration has become the foundation for how states relate to their citizens. As modern nation-states have become more and more influential (and often controlling) in the lives of their citizens, civil registration and its attendant legal identity have come to play a larger and larger role in their lives. People present proof of civil registration for many purposes: to prove who they are and, springing from that, their citizenship.

Even so, Descartes did not say, "I have a birth certificate, therefore I am." When most people hear the word identity, they think about birth certificates, passports, driver's licenses, logins, passwords, and other sorts of credentials. But clearly, we are more than our legal identity. For most purposes and interactions, our identity is defined through our relationships. Even more deeply, we each experience these independently as an autonomous being with an individual perspective.

This dichotomy reflects identity's dual nature. While identity is something others assign to us, it is also something deep inside of us, reflecting what Descartes actually said: "I think, therefore I am."

A Bundle of Sticks?

Another way to think about the dual nature of identity is to ask, "Am I more than a set of attributes?" Property rights are often thought of as a "bundle of sticks": each right is separable from the rest and has value independent of the rest. Similarly, identity is often considered a bundle of attributes, each with independent value. This is known in philosophy as bundle theory, originated by David Hume.

Bundle theory puts attributes into a collection without worrying about what ties them together. As an example, you might identify a plum as purple, spherical, 5 centimeters in diameter, and juicy. Critics of bundle theory question how these attributes can be known to be related without knowing the underlying substance—the thing itself.

Substance theory, on the other hand, holds that attributes are borne by "an entity which exists in such a way that it needs no other entity to exist," according to our friend Descartes. Substance theory gives rise to the idea of persistence in the philosophy of personal identity. People, organizations, and things persist through time. In one sense, you are the same person who you were when you were 16. But in another, you are not. The thing that makes you the same person over your lifetime is substance. The thing that makes you different is the collection of ever-changing attributes you present to the outside world over time.

I'm no philosopher, but I believe both viewpoints are useful for understanding digital identity. For many practical purposes, viewing people, organizations, and things as bundles of attributes is good enough. This view is the assumption upon which the modern web is built. You log into different services and present a different bundle of attributes to each. There is no substance, at least in the digital sense, since the only thing tying them together is you, a decidedly nondigital entity.

This lack of a digital representation of you, that you alone control, is one of the themes I'll return to several times in my book. At present, you are not digitally embodied—your digital existence depends on other entities. You have no digital substance to connect the various attributes you present online. I believe that digital identity systems must embody us and give us substance if we are to build a digital future where people can operationalize their online existence and maintain their dignity as autonomous human beings.

Notes

1. "Nation-States and Sovereignty," History Guild, accessed October 5, 2022.
2. Substance theory has many more proponents than Descartes, but his definition is helpful in thinking through identity’s dual nature.

Photo Credit: Smoke sticks for honey harvesting from Lucy McHugh/CIFOR (CC BY-NC-ND 2.0, photo cropped vertically)

Tags: identity ldid book

Tuesday, 01. November 2022

reb00ted

California water prices have quadrupled

Why should other countries have all the fun with exploding prices for base resources, like heating in the UK, or all kinds of energy across Europe? Nasdaq has an index for open-market wholesale prices for water in the US West, mostly California. Currently, it is in the order of a $1000 per acre-foot, while the non-drought price seems to be about $250. Quadrupled. Links: current prices,

Why should other countries have all the fun with exploding prices for base resources, like heating in the UK, or all kinds of energy across Europe?

Nasdaq has an index for open-market wholesale prices for water in the US West, mostly California. Currently, it is in the order of a $1000 per acre-foot, while the non-drought price seems to be about $250.

Quadrupled.

Links: current prices, explanation.

Monday, 31. October 2022

Damien Bod

Switch tenants in an ASP.NET Core app using Azure AD with multi tenants

This article shows how to switch between tenants in an ASP.NET Core multi-tenant application using a multi-tenant Azure App registration to implement the identity provider. Azure roles are added to the Azure App registration and this can be used in the separate enterprise applications created from the multi-tenant Azure App registration to assign users and […]

This article shows how to switch between tenants in an ASP.NET Core multi-tenant application using a multi-tenant Azure App registration to implement the identity provider. Azure roles are added to the Azure App registration and this can be used in the separate enterprise applications created from the multi-tenant Azure App registration to assign users and groups.

Code: https://github.com/damienbod/AspNetCoreTenantSelect

Azure AD is used to implement the identity provider for the ASP.NET Core application. In the home tenant, an Azure App registration was created to support multiple tenants. Three roles for users and groups were created and added to the Azure App registration. The first time a user authenticates using the Azure App registration, an administrator can give consent for the tenant. This creates an Azure enterprise application inside the corresponding tenant. Users or groups can be assigned the roles from the Azure App registration. This is specific for the corresponding tenant only.

If a user exists in two separate tenants, the user needs an easy way to switch between the tenants without a logout and a login. The user can be assigned separate roles in each tenant. The email is used to identify the user, as separate OIDs are created for each tenant. The user can be added as an external user in multiple tenants with the same email.

The ASP.NET Core application uses the Azure App registration for authentication.

The ASP.NET Core application uses Microsoft.Identity.Web to implement the OpenID Connect client. This client uses MSAL. The user of the application needs a way to switch between the tenants. To do this, the specific tenant must be used in the authorize request of the OpenID Connect flow. If the common endpoint is used, which is the standard for a multi-tenant Azure App registration, the user cannot switch between the tenants without an account logout first or using a separate incognito browser.

A cache is used to store the preferred tenant of the authenticated user. The user of the application can select the required tenant, and that tenant is then used for authentication. Before the authorize request is sent to Azure AD, the ProtocolMessage.IssuerAddress is updated with the correct tenant GUID identifier. The select_account prompt was added to the authorize request in the OpenID Connect flow so that the user is always asked to choose an account; most of us have multiple identities and accounts nowadays.

The application requires an authenticated user. The default authentication uses the common endpoint and no select account prompt.

There are different ways to implement the tenant switch logic. I have not focused on this; I just add the selected organization to an in-memory cache. You could, for example, keep a database of your specific allowed organizations and authorize them after a successful authentication using claims returned from the identity provider. You could also provide the organization as a query parameter in the URL. The Azure AD Microsoft.Identity.Web.UI client and the ASP.NET Core application require that the application starts the authentication flow from a direct HTTP GET, not from a redirect to a GET or a POST request.

services.AddTransient<TenantProvider>();

services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

// Assigned after builder.Build() so the redirect handler below can resolve services at request time.
WebApplication? app = null;

services.Configure<MicrosoftIdentityOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    // Always ask the user to choose an account.
    options.Prompt = "select_account";

    var redirectToIdentityProvider = options.Events.OnRedirectToIdentityProvider;
    options.Events.OnRedirectToIdentityProvider = async context =>
    {
        if (app != null)
        {
            var tenantProvider = app.Services.GetRequiredService<TenantProvider>();
            var email = context.HttpContext!.User.Identity!.Name;
            if (email != null)
            {
                // Swap the common authority for the user's preferred tenant
                // before the authorize request is sent.
                var tenant = tenantProvider.GetTenant(email);
                var address = context.ProtocolMessage.IssuerAddress.Replace("common", tenant.Value);
                context.ProtocolMessage.IssuerAddress = address;
            }
        }

        await redirectToIdentityProvider(context);
    };
});

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

The TenantProvider service implements the tenant selection logic so that a user can switch between tenants or accounts without signing out or switching browsers. This can be replaced with a database or whatever logic your business requires. I hard-coded some test tenants for the organization switch; some type of persistence or database would be better for this. An in-memory cache is used to persist the user and the preferred organization.

public class TenantProvider
{
    private static readonly SelectListItem _org1 = new("Org1", "7ff95b15-dc21-4ba6-bc92-824856578fc1");
    private static SelectListItem _org2 = new("Org2", "a0958f45-195b-4036-9259-de2f7e594db6");
    private static SelectListItem _org3 = new("Org3", "5698af84-5720-4ff0-bdc3-9d9195314244");
    private static SelectListItem _common = new("common", "common");

    private static readonly object _lock = new();
    private IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    public TenantProvider(IDistributedCache cache)
    {
        _cache = cache;
    }

    public void SetTenant(string email, string org)
    {
        AddToCache(email, GetTenantForOrg(org));
    }

    public SelectListItem GetTenant(string email)
    {
        var org = GetFromCache(email);
        if (org != null) return org;

        return _common;
    }

    public List<SelectListItem> GetAvailableTenants()
    {
        return new List<SelectListItem>
        {
            _org1, _org2, _org3, _common
        };
    }

    private SelectListItem GetTenantForOrg(string org)
    {
        if (org == "Org1") return _org1;
        else if (org == "Org2") return _org2;
        else if (org == "Org3") return _org3;

        return _common;
    }

    private void AddToCache(string key, SelectListItem userActiveOrg)
    {
        var options = new DistributedCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonSerializer.Serialize(userActiveOrg), options);
        }
    }

    private SelectListItem? GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return JsonSerializer.Deserialize<SelectListItem>(item);
        }

        return null;
    }
}

An ASP.NET Core Razor Page is used to implement the tenant switch UI logic. This just displays the available tenants and allows the user to choose a new tenant.

public class SwitchTenantModel : PageModel
{
    private readonly TenantProvider _tenantProvider;

    public SwitchTenantModel(TenantProvider tenantProvider)
    {
        _tenantProvider = tenantProvider;
    }

    [BindProperty]
    public string Domain { get; set; } = string.Empty;

    [BindProperty]
    public string TenantId { get; set; } = string.Empty;

    [BindProperty]
    public List<string> RolesInTenant { get; set; } = new List<string>();

    [BindProperty]
    public string AppTenantName { get; set; } = string.Empty;

    [BindProperty]
    public List<SelectListItem> AvailableAppTenants { get; set; } = new List<SelectListItem>();

    public void OnGet()
    {
        var name = User.Identity!.Name;
        if (name != null)
        {
            AvailableAppTenants = _tenantProvider.GetAvailableTenants();
            AppTenantName = _tenantProvider.GetTenant(name).Text;

            List<Claim> roleClaims = HttpContext.User.FindAll(ClaimTypes.Role).ToList();
            foreach (var role in roleClaims)
            {
                RolesInTenant.Add(role.Value);
            }

            TenantId = HttpContext.User.FindFirstValue("http://schemas.microsoft.com/identity/claims/tenantid");
        }
    }

    /// <summary>
    /// Only works from a direct GET, not a post or a redirect
    /// </summary>
    public IActionResult OnGetSignIn([FromQuery] string domain)
    {
        var email = User.Identity!.Name;
        if (email != null)
            _tenantProvider.SetTenant(email, domain);

        return Challenge(new AuthenticationProperties
        {
            RedirectUri = "/"
        }, OpenIdConnectDefaults.AuthenticationScheme);
    }
}

The Index Razor page in the ASP.NET Core application displays the actual tenant, the organization and the roles for this identity in that tenant.

public void OnGet()
{
    var name = User.Identity!.Name;
    if (name != null)
    {
        AvailableAppTenants = _tenantProvider.GetAvailableTenants();
        AppTenantName = _tenantProvider.GetTenant(name).Text;

        List<Claim> roleClaims = HttpContext.User.FindAll(ClaimTypes.Role).ToList();
        foreach (var role in roleClaims)
        {
            RolesInTenant.Add(role.Value);
        }

        TenantId = HttpContext.User.FindFirstValue(
            "http://schemas.microsoft.com/identity/claims/tenantid");
    }
}

After a successful authentication using Azure AD and the multi-tenant Azure App registration, the user can see the assigned roles and the tenant.

The tenant switch is displayed in an HTML list, and the authentication request with the select account prompt is sent to Azure AD.

The new tenant and the new corresponding roles for the authorization are displayed after a successful authentication.

Switching tenants is becoming a required feature in most applications now that we have access to multiple Azure AD tenants and domains using the same email. This makes using external identities for an Azure AD user in a multiple domain environment a little less painful.

Notes

If using this in an environment where not all tenants are allowed, the tid claim must be validated. You should always restrict the tenants in a multi-tenant application if possible. You could enforce this by adding a tenant requirement to the authorization policy.
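As a minimal sketch (the TenantRequirement type and the hard-coded allow-list are assumptions for illustration, not part of Microsoft.Identity.Web), the requirement and its handler could look something like this:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public class TenantRequirement : IAuthorizationRequirement
{
}

public class TenantRequirementHandler : AuthorizationHandler<TenantRequirement>
{
    // Hypothetical allow-list; in a real application this would come from configuration or a database.
    private static readonly string[] _allowedTenants =
    {
        "7ff95b15-dc21-4ba6-bc92-824856578fc1",
        "a0958f45-195b-4036-9259-de2f7e594db6"
    };

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, TenantRequirement requirement)
    {
        // Succeed only if the tid claim of the authenticated user is in the allow-list.
        var tid = context.User.FindFirst(
            "http://schemas.microsoft.com/identity/claims/tenantid")?.Value;

        if (tid != null && _allowedTenants.Contains(tid))
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

The handler would also need to be registered, for example with services.AddSingleton<IAuthorizationHandler, TenantRequirementHandler>(), so that the policy registration below can evaluate the requirement.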

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        // Enable to force tenant restrictions
        .AddRequirements(new[] { new TenantRequirement() })
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

Links

https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/multi-tenant-user-management-introduction

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app

https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal

Friday, 28. October 2022

Moxy Tongue

A Society Worth Contributing To

 [Authoritative Work In Progress..]

Thursday, 27. October 2022

Heres Tom with the Weather

RubyConf in Houston

Earlier this week, I signed up for RubyConf 2022 which is Nov. 29 - Dec. 1 in Houston. This is my first conference since the pandemic started and I was glad to see the safety precautions. The schedule also looks great! Please say “Hi!” if you see me there.

Tuesday, 25. October 2022

Heres Tom with the Weather

IndieAuth login history

In my last post, I mentioned that I planned to add login history to Irwin. As I was testing my code, I logged into indieweb.org and noticed that I needed to update my code to support 5.3.2 Profile URL Response of the IndieAuth spec as this IndieAuth client does not need an access token. Here’s what the history looks like on my IndieAuth server:

If I click on a login timestamp, I have the option to revoke the access token associated with the login if it exists and has not already expired. My next step is to test some other micropub servers than the one I use to see what interoperability updates I may need to make.

Friday, 21. October 2022

Werdmüller on Medium

The end of Twitter

Our online public squares are sunsetting. What’s next?

Continue reading on Medium »

Thursday, 20. October 2022

MyDigitalFootprint

How to build a #team fit for #uncertainty

The pandemic changed us, our views, what we value and how we work.  We might not recognise all the changes and hang on in the hope of a return to something we loved, but we must make the best of it now.  We should be aware that the change has not only affected us but also our teams. 

The Bruce Tuckman 1965 forming–storming–norming–performing model of group development is timeless: no one is likely to dissent that its phases remain necessary and inevitable for a team to grow, face challenges, tackle problems, find solutions, plan work, and deliver results.

However, because we have changed, so has the utility of the tools we apply to move us along the journey from forming to performing. Tools learnt and built in stable and certain times have less applicability when we are faced with volatility and uncertainty.


It is the usefulness of tools we utilise that move us on the journey from forming to performing that has changed. 

More books and articles exist on “teams” and “leadership” than on almost any other management topic. However, how teams are formed today, and how to get teams to perform, is continually changing. Below is a new framework for the tool bag. Adding a framework is like a tradesperson supplementing their faithful and well-used wooden screwdriver with a sophisticated modern digital electronic driver. Both are needed, but the newer tool can be faster and has more applications.

Because this is not a book on team building, an overly simplistic view of the Tuckman model is to recognise that there are tensions and compromises in all new teams that have to be addressed (storming). When addressed, align the team to the reason this team exists (norming). Finally, focus on the tasks that are delegated to the team (performing). If only it were that simple or that quick!!! Critically, team leadership helps the team realise, faster than it would otherwise, that this team has an identity, purpose and unique capabilities.

The focus of the Peak-Paradox framework in this context is to aid in the unpacking of the storming part in the process. How to unpick and identify different tensions and compromises is essential, and there are many tools for this when the markets are stable and certain. The Peak Paradox framework has the most value in this part of a team's journey, especially when everything is changing.  Digital interactions, Gen Z and remote working mean we need new tools to supplement the existing ones.

The Peak Paradox framing makes us realise that teams and communities don’t naturally exist in all places equally. The purer the purpose that drives an individual, the less likely they are to find others who will form a team and walk someone else's path. Yes, there are certain exceptions, but they are not the norm.

Teams and communities don’t exist in all places equally.

At Peak Paradox, where you try to optimise for everything, teams will not naturally exist as those in the team cannot agree on what they are optimising for. Debate, argument and hostility remain forever, with a team never escaping the forming stage.  Indeed, forming a team with individuals who dwell at the two extremes (Peak Purpose and Peak Paradox) would appear to be futile. However, when humans' only mission is to survive, some of the best teams form.  Always an exception. 

Note: these comments should not be confused with applying the Peak Paradox framework to leadership or decision-making - the focus here is purely on team storming.


A team can start weak, messy or strong - the descriptions of the areas on the chart. They will all still follow the forming – storming – norming – performing model.  These three areas give rise to where teams form and where teams need to move if they are to perform.

Weak teams. In reality, the individuals are not weak; the team is weak. These teams are made of very strong-minded individuals who know what they want, can lead themselves and know what to do, but cannot work as a team at the outset. In this case, it is about unpacking what purpose they align to and how they would approach fulfilling the purpose this team has. This allows you to unpack what tensions and compromises they have to live with and how they will deal with them. The right question to ask each team member is, “Where do you naturally align on the four purposes?” Often these teams find they need to lose members or find a different leadership style to be able to move on.

Messy teams. Teams are messy because the individuals are messy; we all come with bias, history, experience and incentives. Messy teams are full of individuals who grasp compromise and tension and live with it every day but have been unable to find the right place for themselves. They are drawn to optimise for many things at the same time; they comprehend ambiguity, complexity, volatility and uncertainty. Messy teams can be easily guided by strong leadership, but often the cracks appear much later, when an individual cannot live with the compromises that are now enforced. The right question to ask each team member is, “What is the one thing you will not compromise over because it creates tension you cannot live with?”

Strong teams.   Teams both start here and should come here to perform, as there is a balance between clarity of purpose, tensions and compromises - sufficient diversity of thought and experience means they can work through problems that occur on the journey the team travels in delivering.  

The team that starts here will still go on the same F-S-N-P path and, depending on the alignment the team has (if well selected), can get to performing fast. A misaligned team that was selected on the wrong criteria can fall apart and never get anywhere, because the individuals oppose the purpose and optimisation of others in the team. The division is divisive. Strong starting teams are not always the route to the best outcome. The trick here is to use the Peak Paradox model to select diverse team members who will be able to cope with the demands and requirements other team members place on the group. The right question to ask is, “What will you compromise on to make another team member more successful?”

The teams that come to this area from a different starting place, messy or weak, may take time. On that journey, some team members may have to be lost and others may be transformed, but they will come to a strong alignment and cohesion by the time they are performing. This is a thing of beauty. The right question to ask when you get to norming is, “What sacrosanct thing can you compromise on to be part of the team?”


The value of the Peak Paradox framework to modern team building is that it enables you to ask questions and plot where people are. This allows you to visualise gaps so you can work on how to bring a team together to perform. This is very different to culture and style analysis.

Why is this new? Gen-Z is much more opinionated and vocal about their purpose than previous generations, and they are also far less likely to compromise to be in a team they don’t want to be in. Old tools still work, but new tools can help us get there faster.

The title is “How to build a #team fit for #uncertainty” 

When there is stability and certainty, teams can perform with a far narrower, more holistic and aligned view, principally because decision-making has more data, and history is a good predictor.  There is a greater demand for variance, tension and compromise in teams during instability and uncertainty.   Building teams that are resilient when all is changing demands a deeper understanding of what drives individuals and how they cope with new tensions and compromises.  To do that, we need new tools to help leadership and team builders visualise what they have and where they need to be.

My suggestion to start this is that you:

1. Plot where the team needs to be on the Peak Paradox map to deliver its objective.

2. Plot the individuals and determine if you have a team that can deliver or has too many tensions, which means compromise will not be sufficient to get them to perform.  

3. Do it again before each review or every time a new member joins or leaves.

4. Question yourself if the current team is too narrow and aligned or too divergent and divisive. 

Another reality - none of this is easy, and for anyone who entered the workplace post-1990, this is different to other significant but localised market disruptions in 2001 and 2008.


Wednesday, 19. October 2022

Identity Praxis, Inc.

MEF Market Review: Personal Data and Identity Meeting of the Waters

The Mobile Ecosystem Forum released a report today—MEF Market Review: Personal Data and Identity Meeting of the Waters—that I’ve been working on for a while. You can download it for FREE here.

The report explains the current state of the personal data and identity market. Take a look. Let’s collaborate. 

The world is responding to the growing importance of personal data and identity. This response is reshaping the world’s markets. Regulatory, technological, cultural, and economic factors are shifting the context of personal data and identity: the what, when, why, who, where, and how. In light of these shifts, we’re witnessing the nature of personal data and identity change—i.e., the definition or lack thereof. We are seeing shifts in personal data control, i.e., from organizations to individuals. As a result of the Internet of Things (IoT) use, AI, and other technical advancements, personal data is exponentially growing in scope and scale. Many stakeholders are waking up to the value of personal data—not just the idea that it is the “new black gold” but something entirely different; that it is a non-rivalrous, non-depleting, regenerative asset. And finally, we’re seeing an explosion of people-centric regulations rolling out—by 2024, according to Gartner, 75% of the world’s population will be endowed with rights under one or more of these regulations, and organizations will be held accountable to a myriad of new obligations. This all means that we are witnessing the birth of the “personal data and identity meeting of the waters” and a new economy, the personal information economy, where individuals will have a legit seat at the economic table for personal data and identity.  

This report, the “MEF Market Report: Personal Data and Identity Meeting of the Waters,” provides a detailed overview of what’s happening to and with personal data and identity, why you should care, and what you—all of us—should consider doing to harness the power of personal data and identity responsibly. We hope that this report is used as a guide to help us come together to: 

Rebuild trusted relationships by inviting individuals to the table
Educate and empower all actors
Shape and reshape new and existing personal data and identity policies, frameworks, laws, and regulations
Attack cybercrime and enhance data stewardship practices
Lobby to address market failures and support people-centric infrastructure as a public utility
Consider interoperable technology standards and protocols
Envision new and evolved business models

The MEF Personal Data and Identity working group welcomes your feedback and contribution. Please message me and let’s discuss the world of personal data and identity. 

The post MEF Market Review: Personal Data and Identity Meeting of the Waters appeared first on Identity Praxis, Inc..


Doc Searls Weblog

The Rhetoric of War

I wrote this more than a quarter century ago when Linux Journal was the only publication that would have me, and I posted unsold essays and wannabe columns at searls.com. These postings accumulated in this subdirectory for several years before Dave Winer got me to blog for real, starting here.

Interesting how much has changed since I wrote this, and how much hasn’t. Everything I said about metaphor applies no less than ever, even as all the warring parties mentioned have died or moved on to other activities, if not battles. (Note that there was no Google at this time, and the search engines mentioned exist only as fossils in posts such as this one.)

Perhaps most interesting is the paragraph about MARKETS ARE CONVERSATIONS. While that one-liner had no effect at the time, it became a genie that would not return to its bottle after Chris Locke, David Weinberger, Rick Levine and I put it in The Cluetrain Manifesto in 1999. In fact, I had been saying “markets are conversations” to no effect at least since the 1980s. Now “join the conversation” is bullshit almost everywhere it’s uttered, but you can’t stop hearing it. Strange how that goes.

MAKE MONEY, NOT WAR
TIME TO MOVE PAST THE WAR METAPHORS OF THE INDUSTRIAL AGE

By Doc Searls
19 March 1997

“War isn’t an instinct. It’s an invention.”

“The metaphor is probably the most fertile power possessed by man.”

“Conversation is the socializing instrument par excellence.”

-José Ortega y Gasset

Patton lives

In the movie “Patton,” the general says, “Compared to war, all other forms of human endeavor shrink to insignificance.” In a moment of self-admonition, he adds, “God help me, I love it so.”

And so do we. For proof, all we have to do is pick up a trade magazine. Or better yet, fire up a search engine.

Altavista says more than one million documents on the Web contain the words Microsoft, Netscape, and war. Hotbot lists hundreds of documents titled “Microsoft vs. Netscape,” and twice as many titled “Netscape vs. Microsoft.”

It’s hard to find an article about the two companies that does not cast them as opponents battling over “turf,” “territory,” “sectors” and other geographies.

It’s also hard to start a conversation without using the same metaphorical premise. Intranet Design Magazine recently hosted a thread titled “Who’s winning?? Netscape vs. Microsoft.” Dave Shafer starts the thread with “Wondering what your informed opinion is on who is winning the internet war and what affects this will have on inter/intranet development.” The first respondent says, “sorry, i’m from a french country,” and “I’m searching for economical informations about the war between Microsoft and Netscape for the control of the WEB industrie.” Just as telling is a post by a guy named Michael, who says “Personaly I have both on my PC.”

So do I. Hey, I’ve got 80 megs of RAM and a 2 gig hard drive, so why not? I also have five ISPs, four word processors, three drawing programs, and two presentation packages. I own competing products from Apple, IBM, Microsoft, Netscape, Adobe, Yamaha, Sony, Panasonic, Aiwa, Subaru, Fisher Price and the University of Chicago — to name just a few I can see from where I sit. I don’t sense that buying and using any of these is a territorial act, a victory for one company, or a defeat for another.

But that doesn’t mean we don’t have those perceptions when we write and talk about companies and the markets where they compete. Clearly, we do, because we understand business — as we understand just about everything — in metaphorical terms. As it happens, our understanding of companies and markets is largely structured by the metaphors BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS.

By those metaphors we share an understanding that companies fight battles over market territories that they attack, defend, dominate, yield or abandon. Their battlefields contain beachheads, bunkers, foxholes, sectors, hills, mountains, swamps, streams, rivers, landslides, quagmires, mud, passages, roadblocks, and high ground. In fact, the metaphor BUSINESS IS WAR is such a functional conceptual system that it unconsciously pumps out clichés like a machine. And since sports is a sublimated and formalized kind of war, the distances between sports and war metaphors in business are so small that the vocabularies mix without clashing.

Here, I’ll pick up the nearest Business Week… it’s the January 13 issue. Let’s look at the High Technology section that starts on page 104. The topic is Software and the headline reads, “Battle stations! This industry is up for grabs as never before…” Here’s the first paragraph, with war and sports references capitalized: “Software was once an orderly affair in which a few PLAYERS called most of the shots. The industry had almost gotten used to letting Microsoft Corp. set the agenda in personal computing. But as the Internet ballooned into a $1 billion software business in 1996, HUGE NEW TERRITORIES came up for grabs. Microsoft enters the new year in a STRONG POSITION TO REASSERT CONTROL. But it will have to FIGHT OFF Netscape, IBM, Oracle and dozens of startups that are DESPERATELY STAKING OUT TURF on the Net. ‘Everyone is RACING TO FIND MARKET SPACE and get established…'”

Is this a good thing? Does it matter? The vocabularies of war and sports may be the most commonly used sources of metaphors, for everything from academic essays to fashion stories. Everybody knows war involves death and destruction, yet we experience little if any of that in the ordinary conduct of business, or even of violent activities such as sports.

So why should we concern ourselves with war metaphors, when we all know we don’t take them literally?

Two reasons. First, we do take them literally. Maybe we don’t kill each other, but the sentiments are there, and they do have influences. Second, war rarely yields positive sums, except for one side or another. The economy the Internet induces is an explosion of positive sums that accrue to many if not all participants. Doesn’t it deserve a more accurate metaphor?

For answers, let’s turn to George Lakoff.

The matter of Metaphor

“Answer true or false,” Firesign Theater says. “Dogs flew spaceships. The Aztecs invented the vacation… If you answered ‘false’ to any of these questions, then everything you know is wrong.”

This is the feeling you begin to get when you read George Lakoff, the foremost authority on the matter of metaphor. Lakoff is Professor of Linguistics and Cognitive Science at UC-Berkeley, the author of Women, Fire and Dangerous Things and Moral Politics: What Conservatives Know that Liberals Don’t. He is also co-author of Metaphors We Live By and More than Cool Reason. All are published by the University of Chicago Press.

Maybe that’s why they didn’t give us the real story in school. It would have been like pulling the pins out of a bunch of little hand grenades.

If Lakoff is right, the most important class you ignored in school was English — not because you need to know all those rules you forgot or books you never read, but because there’s something else behind everything you know (or think you know) and talk about. That something is a metaphor. (And if you think otherwise, you’re wrong.)

In English class — usually when the subject was poetry — they told us that meaning often arises out of comparison, and that three comparative devices are metaphor, simile, and analogy. Each compares one thing to another thing that is similar in some way:

Metaphors say one thing is another thing, such as “time is money,” “a computer screen is a desktop,” or (my favorite Burt Lancaster line) “your mind is a cookie of arsenic.”
Similes say one thing is like another thing, such as “gone like snow on the water” or “dumb as a bucket of rocks.”
Analogies suggest partial similarities between unalike things, as with “licorice is the liver of candy.”

But metaphor is the device that matters, because, as Lakoff says, “We may not always know it, but we think in metaphor.” And, more to the point, “Metaphors can kill.” Maybe that’s why they didn’t give us the real story in school. It would have been like pulling the pins out of a bunch of little hand grenades.

But now we’re adults, and you’d think we should know how safely to arm and operate a language device. But it’s not easy. Cognitive science is relatively new and only beginning to make sense of the metaphorical structures that give shape and meaning to our world. Some of these metaphors are obvious but many others are hidden. In fact, some are hidden so well that even a guru like Lakoff can overlook them for years.

Lakoff’s latest book, “Moral Politics: What Conservatives Know and Liberals Don’t,” was inspired by his realization that the reason he didn’t know what many conservatives were talking about was that, as a Liberal, he didn’t comprehend conservative metaphors. Dan Quayle’s applause lines went right past him.

After much investigation, Lakoff found that central to the conservative worldview was a metaphor of the state as a strict father and that the “family values” conservatives espouse are those of a strict father’s household: self-reliance, rewards and punishments, responsibility, respect for authority — and finally, independence. Conservatives under Ronald Reagan began to understand the deep connection between family and politics, while Liberals remained clueless about their own family metaphor — the “nurturant parent” model. Under Reagan, Lakoff says, conservatives drove the language of strict father morality into the media and the body politic. It won hearts and minds, and it won elections.

So metaphors matter, big time. They structure our perceptions, the way we make sense of the world, and the language we use to talk about things that happen in the world. They are also far more literal than poetry class would lead us to believe. Take the metaphor ARGUMENT IS WAR —

“It is important to see that we don’t just talk about arguments in terms of war. We can actually win or lose arguments. We see the person we are arguing with as an opponent. We attack his positions and defend our own. We gain and lose ground. We plan and use strategies… Many of the things we do in arguing are partially structured by the concept of war.” (From Metaphors We Live By)

In our culture argument is understood and structured by the war metaphor. But in other cultures it is not. Lakoff invites us to imagine a culture where argument is viewed as dance, participants as performers, and the goal as creating an aesthetically pleasing performance.

Right now we understand that “Netscape is losing ground in the browser battle,” because we see the browser business as a territory over which Netscape and Microsoft are fighting a war. In fact, we are so deeply committed to this metaphor that the vocabularies of business and war reporting are nearly indistinguishable.

Yet the Internet “battlefield” didn’t exist a decade ago, and the software battlefield didn’t exist a decade before that. These territories were created out of nothingness. Countless achievements have been made on them. Victories have been won over absent or equally victorious opponents.

In fact, Netscape and Microsoft are creating whole new markets together, and both succeed mostly at nobody’s expense. Netscape’s success also owes much to the robust nature of the Windows NT Server platform.

The war stories we’re telling about the Internet are turning into epic lies.

At the same time Microsoft has moved forward in browsers, directory services, languages, object models and other product categories — mostly because it’s chasing Netscape in each of them.

Growing markets are positive-sum creations, while wars are zero-sum at best. But BUSINESS IS WAR is a massive metaphorical machine that works so well that business war stories almost write themselves. This wouldn’t be a problem if business was the same now as it was twenty or fifty years ago. But business is changing fast, especially where the Internet is involved. The old war metaphor just isn’t doing the job.

Throughout the Industrial Age, both BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS made good structure, because most industries and markets were grounded in physical reality. Railroads, shipping, construction, automobiles, apparel and retail were all located in physical reality. Even the phone system was easily understood in terms of phones, wires and switches. And every industrial market contained finite labor pools, capital, real estate, opportunities and natural resources. Business really was war, and markets really were battlefields.

But the Internet is hardly physical and most of its businesses have few physical limitations. The Web doesn’t look, feel or behave like anything in the analog world, even though we are eager to describe it as a “highway” or as a kind of “space.” Internet-related businesses appear and grow at phenomenal rates. The year 1995 saw more than $100 billion in new wealth created by the Internet, most of it invested in companies that were new to the world, or close to it. Now new markets emerge almost every day, while existing markets fragment, divide and expand faster than any media can track them.

For these reasons, describing Internet business in physical terms is like standing at the Dawn of Life and describing new species in terms of geology. But that’s what we’re doing, and every day the facts of business and technology life drift farther away from the metaphors we employ to support them. We arrive at pure myth, and the old metaphors stand out like bones from a dry corpse.

Of course myths are often full of truth. Fr. Seán Olaoire says “there are some truths so profound only a story can tell them.” But the war stories we’re telling about the Internet are turning into epic lies.

Describing Internet business in physical terms is like standing at the Dawn of Life and describing new species in terms of geology.

What can we do about it?

First, there’s nothing we can do to break the war metaphor machine. It’s just too damn big and old and good at what it does. But we can introduce some new metaphors that make equally good story-telling machines, and tell more accurately what’s going on in this new business world.

One possibility is MARKETS ARE CONVERSATIONS. These days we often hear conversations used as synonyms for markets. We hear about “the privacy conversation” or “the network conversation.” We “talk up” a subject and say it has a lot of “street cred.” This may not be much, but it does accurately structure an understanding of what business is and how markets work in the world we are creating with the Internet.

Another is the CONDUIT metaphor. Lakoff credits Michael Reddy with discovering hidden in our discussions of language the implication of conduit structure:

Your thinking comes through loud and clear.
It’s hard to put my ideas into words.
You can’t stuff ideas into a sentence.
His words carry little meaning.

The Net facilitates communication, and our language about communication implies conduits through which what we say is conveyed. The language of push media suggests the Net is less a centerless network — a Web — than a set of channels through which stuff is sent. Note the preposition. I suggest that we might look more closely at how much the conduit metaphor is implicit in what we say about push, channels and related subjects. There’s something to it, I think.

My problem with both CONDUIT and CHANNEL is that they don’t clearly imply positive sums, and don’t suggest the living nature of the Net. Businesses have always been like living beings, but in the Net environment they enjoy unprecedented fecundity. What’s a good metaphor for that? A jungle?

Whatever it is, it’s clearly not just a battlefield, regardless of the hostilities involved. It’s time to lay down our arms and start building new conceptual machines. George Lakoff will speak at PC Forum next week. I hope he helps impart some mass to one or more new metaphorical flywheels. Because we need to start telling sane and accurate stories about our businesses and our markets.

If we don’t, we’ll go on shooting at each other for no good reason.

Links

Here are a few links into the worlds of metaphor and cognitive science. Some of this stuff is dense and heavy; but hey, it’s not an easy subject. Just an important one.

The University of Oregon Metaphor Center, which has piles of other links. A good place to start.
The Conceptual Metaphor Home Page, UC-Berkeley’s massive list of metaphor names, source domains and target domains. Oddly, neither business nor markets can be found on any of the three lists. Let’s get them in there.
Morality, Metaphor and Politics, Or, Why Conservatives Have Left Liberals In the Dust, by George Lakoff. This is Moral Politics condensed to an essay. An excellent introduction to conceptual metaphor, made vivid by a very hot topic.
Metaphor and War: The Metaphor System Used to Justify War in the Gulf. Be sure to look at both Part 1 and Part 2.
Conceptual Blending on the Information Highway: How Metaphorical Inferences Work, by Tom Rohrer. An exploration of the INFORMATION HIGHWAY metaphor for the Internet.

I also explored the issue of push media in Shoveling Push and When Push Becomes Shove. And I visited the Microsoft vs. Netscape “war” in Microsoft + Netscape: The Real Story. All three are in Reality 2.0.


Places

Let’s say you want to improve the Wikipedia page for Clayton Indiana with an aerial photograph. Feel free to use the one above. That’s why I shot it, posted it, and licensed it permissively. It’s also why I put a helpful caption under it, and some call-outs in mouse-overs.

It’s also why I did the same with Danville, Indiana:

Also Brownsville, Indiana, featuring the Brickyard VORTAC station (a navigational beacon used by aircraft):

Eagle Creek Park, the largest in Indianapolis, and its Reservoir:

The district of Indianapolis charmlessly called Park 100:

The White River, winding through Indianapolis:

Where the White River joins the Wabash, which divides Southern Indiana from Southern Illinois (which is on the far side here, along with Mt. Carmel):

Among other places.

These were shot on the second leg of a United flight from Seattle to Indianapolis by way of Houston. I do this kind of thing on every flight I take. Partly it’s because I’m obsessed with geography, geology, weather, culture, industry, infrastructure, and other natural things. And partly it’s to provide a useful service.

I don’t do it for the art, though sometimes art happens. For example, with this shot of salt ponds at the south end of San Francisco Bay:

Airplane windows are not optically ideal for photography. On the contrary, they tend to be scratched, smudged, distorted, dirty, and worse. Most of the photos above were shot through a window that got frosty and gray at altitude and didn’t clear until we were close to landing. The air was also hazy. For cutting through that I can credit the dehaze slider in Adobe Photoshop 2021. I can also thank Photoshop for pulling out color and doing other things that make bad photos useful, if not good in the artsy sense. They fit my purpose, which is other people’s purposes.

In addition to Adobe, I also want to tip my hat toward Sony, for making the outstanding a7iv mirrorless camera and the 24-105mm f/4 FE G OSS lens I used on this flight. Also Flickr, which makes it easy to upload, organize, caption, tag, and annotate boundless quantities of full- (and other-) size photos—and to give them Creative Commons licenses. I’ve been using Flickr since it started in 2005, and remain a happy customer with two accounts: my main one, and another focused on infrastructure.

While they are no longer in a position to care, I also want to thank the makers of iView MediaPro, Microsoft Expressions and PhaseOne MediaPro for providing the best workflow software in the world, at least for me. Alas, all are now abandonware, and I don’t expect any of them to work on a 64-bit operating system, which is why, for photographic purposes, I’m still sitting on MacOS Mojave 10.14.6.

I’m hoping that I can find some kind of substitute when I get a new laptop, which will inevitably come with an OS that won’t run the oldware I depend on. But I’ll save that challenge for a future post.

Sunday, 16. October 2022