Last Update 9:40 AM July 03, 2022 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Friday, 01. July 2022

Ben Werdmüller

California late start law aims to make school less of a yawn

“Beginning this fall high schools in the nation’s most populous state can’t start before 8:30 a.m. and middle schools can’t start before 8 a.m. under a 2019 first-in-the-nation law forbidding earlier start times. Similar proposals are before lawmakers in New Jersey and Massachusetts.” This is a big deal. I can’t believe they were making teenagers go to school before 8am.

[Link]


Florida's 'Don't Say Gay' bill isn't the only anti-LGBTQ+ bill taking effect today

“Collectively, the bills build toward an atmosphere of silence around LGBTQ+ people and restrict how LGBTQ+ youth can learn about themselves and participate at school, advocates say.”

[Link]


Social Media Can Be Reimagined for the Good of Society

“Yet what well-meaning regulatory proposals lack is a vision of social media that could be good for society. At best, these regulatory approaches seek to make existing social media less awful. But an emerging movement that we might call “the Good Web” envisions the possibility of social media that has a salutary role in a public sphere. What’s less clear is which of several dueling visions of the Good Web might lead us to a healthy social media environment.”

[Link]


‘It’s Scary’: Students Fear Going to College in Red States After Roe

“After the overturning of Roe, millions of college students found themselves attending institutions where they would no longer have access to certain types of reproductive healthcare. Now, students who had committed to attending colleges or universities in majority conservative states are rethinking their decisions. Meanwhile, rising high school seniors say they now have something new to consider when compiling their lists of prospective schools: the access and right to an abortion.”

[Link]


Reading, watching, playing, using: June, 2022

This is my monthly roundup of the books, articles, and streaming media I found interesting. Here's my list for April, 2022.

Books

Reap3r, by Eliot Peper. A page-turner set in a very familiar world to me - I had fun recognizing the scenery, the interpersonal dynamics, the cultural references. There was adventure, plausible near-future science fiction scenarios mined for tension; I had trouble putting it down, and that’s exactly what I wanted from it. Worth a read.

The Glass Hotel, by Emily St John Mandel. Her writing style takes a lot of getting used to: not so much plot as collage. I spent the first third to a half wondering where we were going. Still, there’s an interesting story here, and well-drawn characters. The themes take some teasing out but are rewarding.

Notable Articles

Business

Starbucks Threatens Loss of Trans Benefits in Anti-Union Push, Staff Say. “Starbucks Corp. managers in several states have told baristas that its vaunted transgender-inclusive health-care benefits could go away if they unionize, employees alleged in interviews and a new complaint filed with the US labor board.”

Microsoft Announces It Will Include Pay Ranges In All U.S. Job Postings. Experts Predict It Will Be The First Of Many. “Changes may not ripple through big companies immediately. Many employers don’t relish sharing pay data that’s long been kept secret. Laws in some other jurisdictions that require disclosure of pay ranges—there are now six, including New York City—don’t go into effect for months, and employers have already pushed to postpone the practice there.” But when it happens - and it will - it will be a great step forward, in particular for communities that have systemically been underpaid.

Microsoft adopts principles for employee organizing and engagement with labor organizations. “We recognize that employees have a legal right to choose whether to form or join a union. We respect this right and do not believe that our employees or the company’s other stakeholders benefit by resisting lawful employee efforts to participate in protected activities, including forming or joining a union.” Major statement from Microsoft, breaking rank with most of the rest of the industry.

Climate

The US Supreme Court just gutted federal climate policy. ““Capping carbon dioxide emissions at a level that will force a nationwide transition away from the use of coal to generate electricity may be a sensible ‘solution to the crisis of the day,’” the decision reads. “But it is not plausible that Congress gave EPA the authority to adopt on its own such a regulatory scheme.””

The US is pushing EVs while sending its polluting gas-guzzlers abroad. “But what’s missing from that agenda is any plan for how to deal with the diesel and gas-guzzling vehicles being exported in increasingly large numbers to low-income countries around the world. That essentially offshores carbon and air pollution, but in the case of the climate and public health, out of sight isn’t out of mind. That missing piece could wind up derailing the very purpose of Biden’s clean transportation plan and global climate goals.”

Covid

COVID vaccines saved 20M lives in 1st year, scientists say. “The researchers used data from 185 countries to estimate that vaccines prevented 4.2 million COVID-19 deaths in India, 1.9 million in the United States, 1 million in Brazil, 631,000 in France and 507,000 in the United Kingdom.”

Crypto

Cryptocurrency Titan Coinbase Providing “Geo Tracking Data” to ICE. “Coinbase, the largest cryptocurrency exchange in the United States, is selling Immigrations and Customs Enforcement a suite of features used to track and identify cryptocurrency users, according to contract documents shared with The Intercept.”

Bitcoin fell below $20,000 — and why it has further to go. “Of course, everyone is asking, why did bitcoin plunge so quickly Saturday night? What pushed it below $20,000 so suddenly? Somebody is selling. Who needs to sell?”

Why the crypto crash hits different in Latin America. “As the Venezuelan economist Aarón Olmos of the Institute of Higher Administrative Studies (IESA) told Rest of World, people in Latin America began turning to crypto as a way to circumvent their unstable or stagnant economies. He said that in surveys he ran with crypto users in Venezuela, the most common response was, “I would rather have a digital asset whose price goes up and down than a currency whose only real trend is down, thanks to the political economy.””

Inside a Corporate Culture War Stoked by a Crypto C.E.O. “He also questioned their use of preferred pronouns and led a discussion about “who can refer to another person as the N word.” And he told workers that questions about women’s intelligence and risk appetite compared with men’s were “not as settled as one might have initially thought.”” Reprehensible.

There's an Interesting Theory About Why Anthony Hopkins Is Suddenly Shilling NFTs. “Since Hopkins’ public turn towards blockchain, Twitter users have been quick to point out that CAA is an investor in the OpenSea NFT market, and others still suggested that the agency is pushing its talent to shill NFTs because of this investment.”

Culture

Nate. “I made this comic to explain things to my family, but you can have it too.” This is delightful.

A half star review of Top Gun: Maverick (2022). “Even if one can ignore the rabidly bloodthirsty nature of this movie, it is still absolute garbage. The morals of this story are, and I am not exaggerating in the slightest: soldiers should ignore orders to stand down, and you should take actions without thinking about them. Our heroes follow these lessons throughout the story and are constantly rewarded for it. It is a child’s understanding of bravery and honor, coated in thick layers of some of the most painfully sentimental slime that Hollywood has ever produced.”

Edinburgh Festival Fringe: Phoebe Waller-Bridge heralds 'new dawn' in major shake-up to win over locals and 'red card' rogue venue operators. “The Fringe Society has pledged to “eradicate” exploitative, unsafe and unfair work practices by introducing a new three-stage system, which will see event organisers banned from using the official programme, website and box office if they fall foul of official guidelines for a third time.” Good to see.

Media

Twitter is the go-to social media site for U.S. journalists, but not for the public. “More than nine-in-ten journalists in the United States (94%) use social media for their jobs, according to a recent Pew Research Center survey of reporters, editors and others working in the news industry. But the sites that journalists use most frequently differ from those that the public turns to for news.”

Every week, two more newspapers close — and ‘news deserts’ grow larger. “Already, some 2,500 dailies and weeklies have shuttered since 2005; there are fewer than 6,500 left. Every week, two more disappear. And although many digital-only news sites have cropped up around the nation, most communities that lost a local newspaper will not get a print or digital replacement.”

Fox Corp. Loses Bid to Toss Dominion Defamation Lawsuit Over Vote-Rigging Claims. “Delaware Superior Court Judge Eric M. Davis on Tuesday denied Fox Corp.’s motion to dismiss the suit, saying Dominion Voting Systems had shown that the Murdochs may have been on notice that the conspiracy theory that rigged voting machines tilted the vote was false but let Fox News broadcast it anyway. Dominion cited in its suit a report that Rupert Murdoch spoke with Trump a few days after the election “and informed him that he had lost,” the judge noted.”

Politics

The fall of Roe v. Wade is the culmination of the Democratic establishment’s failures. “The overturning of Roe v. Wade, and the underwhelming reaction from senior Democratic leaders to that huge defeat, make the case even clearer that the party’s too-long-in-power leaders — including President Biden — need to move aside. On their watch, a radicalized Republican Party has gained so much power that it’s on the verge of ending American democracy as we know it.”

The Philosophy that Underpins the Right: It's Not What You Think. A notable piece from a venture capital investor: “After the Supreme Court decision overturning Roe vs. Wade, I was chatting with someone who grew up in another country and hadn’t spent a lot of time in and around American politics. They were trying to understand the inherent contradictions between a theoretically conservative right that expands the government to legislate over personal decisions like the healthcare around a pregnancy.”

Pride sponsors also donate to lawmakers behind anti-LGBTQ+ bills. “At least seven companies and their employee-led PACs tracked by Data for Progress continued campaign donations for the 2022 election cycle to politicians backing anti-LGBTQ+ legislation after signing a pledge against such bills from the Human Rights Campaign and Freedom for All Americans.”

Overturning Roe v. Wade could drive voter turnout, poll finds. “A Kaiser Family Foundation survey found that 64 percent of U.S adults say they do not want abortion rights to be overturned, with 37 percent of voters saying a Roe reversal would make them more motivated to vote.”

Living With The Far-Right Insurgency In Idaho. “A lot has been written about both the radicalization of the Republican Party and the decline of democracy in the U.S. — about the country being at a precipice. It’s maybe easy for those warnings to become background noise, or to dismiss them as doom-mongering pieces of clickbait. But in Idaho, the nightmare scenario is crossing into reality, as an authoritarian GOP sets about to create a whiter, Christian nation.”

Christian nationalism on the rise in some GOP campaigns. “According to a recent survey by the institute, white evangelical Christians were among the strongest supporters of the assertion that God intended America as a “promised land” for European Christians. Those who backed that idea were far more likely to agree that “true American patriots may have to resort to violence ... to save our country.””

Science

Dyslexia Actually Grants Special Powers, Researchers Say. “A team of Cambridge scientists published research in the journal Frontiers of Psychology earlier today that raises the possibility that dyslexia, which affects an estimated one in five people worldwide, could actually help the human species adapt and ensure future success.”

‘Fluffy’ crab that wears a sponge as a hat discovered in Western Australia. “Hosie said it wasn’t clear why Lamarckdromia beagle was so fluffy.” But I’m glad that it is.

Why Is This Tiny Frog So Awful At Jumping? “The pumpkin toadlet, which is a frog but not a toad, is so terrible at landing its jumps that its sheer incompetence has become a subject of scientific inquiry. A team of researchers from the United States and Brazil that includes Confetti and Singh say they have an answer: The miniaturized toadlets are so tiny that the fluid-filled chambers in their inner ears which control their balance function rather ineffectively, dooming the valiant little jumpers to a lifetime of crash landings.”

Asteroid samples contain 'clues to origin of life': Japan scientists. “Scientists have been questioning how organic matter -- including amino acids -- was created or where it came from, and the fact that amino acids were discovered in the sample offers a reason to think that amino acids were brought to Earth from outer space.”

Society

Texas educator group proposes referring to slavery as “involuntary relocation” in second grade curriculum. “This group proposing second grade curriculum revisions was given a copy of Senate Bill 3, Texas’ law that dictates how slavery and race is taught in Texas. In it, the law states that slavery can’t be taught as a true founding of the United States and that slavery was nothing more than a deviation from American values.”

1955 warrant in Emmett Till case found, family seeks arrest. “A team searching a Mississippi courthouse basement for evidence about the lynching of Black teenager Emmett Till has found the unserved warrant charging a white woman in his 1955 kidnapping, and relatives of the victim want authorities to finally arrest her nearly 70 years later.” All this terrible history is so close.

Patients in Texas abortion clinic waiting rooms were turned away as Roe fell. “Those turned away were patients who were now outside an already small window: In September, Texas banned abortion past six weeks of pregnancy. That law was the first in a series of abortion restrictions passed in states across the country in the last year that served as a preview of life after Roe.”

Liberal Supreme Court justices detail post-Roe America in furious abortion dissent. ““Those responsible for the original Constitution, including the Fourteenth Amendment, did not perceive women as equals, and did not recognize women’s rights,” Breyer continued, adding that the court may as well rely on standards from the Dark Ages, and that this “consigns women to second-class citizenship.””

Ohio Makes It Easier for Teachers to Carry Guns at School. “A new law requires educators and other school staff members who want to carry a weapon to undergo no more than 24 hours of training — compared with more than 700 hours previously.” What could possibly go wrong?

Young women are leading the movement to stop the next school shooting. ““People often forget that women are the backbone of most of our progressive movements in this country,” Eastmond said. “So, I have noticed a lot of women involved [in gun reform], but that’s not something out of the ordinary that we haven’t seen before. I think women just naturally end up involved in progressive change.””

A Year in Photos of Gender Expansive Youth Across U.S. “The photographer Annie Flanagan spent a year documenting gender-expansive young people across the U.S. as they experience adolescence at a fraught political and cultural time. Flanagan’s subjects are supporting one another, thriving, and finding joy. They’re getting ready for summer vacation. They’re hanging out with their friends. They’re maneuvering the social dynamics of prom. They’re walking across the stage at high school graduation and getting their diplomas, looking to the future, and planning for better days. These moments send their own message.”

It’s Been 50 Years. I Am Not ‘Napalm Girl’ Anymore. “I cannot speak for the families in Uvalde, Texas, but I think that showing the world what the aftermath of a gun rampage truly looks like can deliver the awful reality. We must face this violence head-on, and the first step is to look at it.”

Ethiopia’s Invisible Ethnic Cleansing. “For more than a year and a half, a largely invisible campaign of ethnic cleansing has played out in Ethiopia’s northern region of Tigray. Older people, women, and children have been loaded onto trucks and forced out of their villages and hometowns. Men have been herded into overcrowded detention sites, where  many have died of disease, starvation, or torture. In total, several hundred thousand Tigrayans have been forcibly uprooted because of their ethnicity.”

Technology

Instagram and Facebook remove posts offering abortion pills. “The Facebook account was immediately put on a “warning” status for the post, which Facebook said violated its standards on “guns, animals and other regulated goods.” Yet, when the AP reporter made the same exact post but swapped out the words “abortion pills” for “a gun,” the post remained untouched. A post with the same exact offer to mail “weed” was also left up and not considered a violation.”

Section 230 Is a Last Line of Defense for Abortion Speech Online. “Section 230 is the last line of defense keeping reproductive health care support, information, and fundraising online. Under Section 230, internet platforms that host and moderate user-generated content cannot generally be sued for that content. Section 230 is not absolute. It does not provide immunity if the platform develops or creates the content, and it does not provide immunity from the enforcement of federal criminal laws. But, crucially, it does protect against criminal liability from state laws.”

They Live and the secret history of the Mozilla logo. “So that was the time that I somehow convinced a multi-billion dollar corporation to give away the source code to their flagship product and re-brand it using propaganda art by the world’s most notorious graffiti artist.”

W3C to become a public-interest non-profit organization. “We need a structure where we meet at a faster pace the demands of new web capabilities and address the urgent problems of the web. The W3C Team is small, bounded in size, and the Hosted model hinders rapid development and acquisition of skills in new fields.”

Amazon Shows Off Alexa Speaking in the Voice of a Dead Relative. “In a video demo shown at the event, a young boy says, “Alexa, can Grandma finish reading me ‘The Wizard of Oz’?” — whereupon a synthesized voice of the grandmother emanates from an Amazon Echo Dot smart speaker.” That’s a hard no from me.

Facebook and Anti-Abortion Clinics Are Collecting Highly Sensitive Info on Would-Be Patients. “More than a third of the websites sent data to Facebook when someone made an appointment for an “abortion consultation” or “pre-termination screening.” And at least 39 sites sent Facebook details such as the person’s name, email address, or phone number.”

Facebook Is Receiving Sensitive Medical Information from Hospital Websites. “A tracking tool installed on many hospitals’ websites has been collecting patients’ sensitive health information—including details about their medical conditions, prescriptions, and doctor’s appointments—and sending it to Facebook.”

Tesla Accused of Shutting Off Autopilot Moments Before Impact. “In the report, the NHTSA spotlights 16 separate crashes, each involving a Tesla vehicle plowing into stopped first responders and highway maintenance vehicles. In the crashes, it claims, records show that the self-driving feature had “aborted vehicle control less than one second prior to the first impact” — a finding that calls supposedly-exonerating crash reports, which Musk himself has a penchant for circulating, into question.”

Firefox Rolls Out Total Cookie Protection By Default To All Users. Really good work.

Salesforce to employees: We're not going to stop working with the NRA. “Salesforce employees have asked the company to end its relationship with the National Rifle Association. But during an all-hands Wednesday, co-CEOs Bret Taylor and Marc Benioff said that the company wouldn’t bar specific customers from using its services, according to a recording obtained by Protocol.”

Smartphones Blur the Line Between Civilian and Combatant. This seems to be laying some dangerous ground: “The principle of distinction between the two roles is a critical cornerstone of international humanitarian law—the law of armed conflict, codified by decades of customs and laws such as the Geneva Conventions. Those considered civilians and civilian targets are not to be attacked by military forces; as they are not combatants, they should be spared. At the same time, they also should not act as combatants—if they do, they may lose this status.”


Reap3r, by Eliot Peper

A page-turner set in a very familiar world to me - I had fun recognizing the scenery, the interpersonal dynamics, the cultural references. There was adventure, plausible near-future science fiction scenarios mined for tension; I had trouble putting it down, and that's exactly what I wanted from it. Worth a read.

[Link]


John Philpin : Lifestream

The PowerPhotos upgrade looks like a good one.



I was just thinking just the other day how this needs to be made easier …

I was just thinking just the other day how this needs to be made easier and voila - it’s made easier.

Sign in with” anything – and save it in 1Password

Via @Birchtree


“Posers in positions of owners”

“Posers in positions of owners”

I wrote this down without the source, sorry - the conversation was around ‘business threats’.

Thursday, 30. June 2022

Ben Werdmüller

Twitter is the go-to social media site for U.S. journalists, but not for the public

“More than nine-in-ten journalists in the United States (94%) use social media for their jobs, according to a recent Pew Research Center survey of reporters, editors and others working in the news industry. But the sites that journalists use most frequently differ from those that the public turns to for news.”

[Link]


Texas educator group proposes referring to slavery as “involuntary relocation” in second grade curriculum

“This group proposing second grade curriculum revisions was given a copy of Senate Bill 3, Texas’ law that dictates how slavery and race is taught in Texas. In it, the law states that slavery can’t be taught as a true founding of the United States and that slavery was nothing more than a deviation from American values.”

[Link]


John Philpin : Lifestream

I had a theory years ago that Apple were going to disinter-mediate the carriers …

I had a theory years ago that Apple were going to disinter-mediate the carriers … and zilch.

How they do it … I don’t know … but it has to be on their radar.

“While Apple’s stated goals will be only iMessage and Find My, followed by IoT, in the longer run Cupertino plans to dis-intermediate the mobile carriers — becoming themselves a satellite-based global phone and data company. That will require shifting over additional Globalstar bandwidth plus launching another 300-600 satellites, so it is several years away but IS coming.”

💬 Robert Cringely

Apple’s Space Ambitions are Real


How Lobbyists from the Auto Industry Invented Jaywalking

How Lobbyists from the Auto Industry Invented Jaywalking

I have only ever had 3 tickets issued to me by the po-lice.

Jaywalking was one of them … and therein lies a story.


How Do We Know How Old Things Like Dinosaur Bones Are?

How Do We Know How Old Things Like Dinosaur Bones Are?

“There’s an old joke told in natural history museums about a new tour guide showing around a group of visitors. Stopping before a dinosaur skeleton, he informs the group that the fossil remains are 80 million years and three weeks old. “How do you know that?” asks one of the visitors. “Because it was 80 million when I started,”  says the guide.”


Simon Willison

s3-ocr: Extract text from PDF files stored in an S3 bucket

I've released s3-ocr, a new tool that runs Amazon's Textract OCR text extraction against PDF files in an S3 bucket, then writes the resulting text out to a SQLite database with full-text search configured so you can run searches against the extracted data.

You can search through a demo of 697 pages of OCRd text at s3-ocr-demo.datasette.io/pages/pages.

Textract works extremely well: it handles dodgy scanned PDFs full of typewritten code and reads handwritten text better than I can! It charges $1.50 per thousand pages processed.

Why I built this

My initial need for this is a collaboration I have running with the San Francisco Microscopy Society. They've been digitizing their archives - which stretch back to 1870! - and were looking for help turning the digital scans into something more useful.

The archives are full of hand-written and type-written notes, scanned and stored as PDFs.

I decided to wrap my work up as a tool because I'm sure there are a LOT of organizations out there with a giant bucket of PDF files that would benefit from being able to easily run OCR and turn the results into a searchable database.

Running Textract directly against large numbers of files is somewhat inconvenient (here's my earlier TIL about it). s3-ocr is my attempt to make it easier.

Tutorial: How I built that demo

The demo instance uses three PDFs from the Library of Congress Harry Houdini Collection on the Internet Archive:

The unmasking of Robert-Houdin from 1908
The practical magician and ventriloquist's guide: a practical manual of fireside magic and conjuring illusions: containing also complete instructions for acquiring & practising the art of ventriloquism from 1876
Latest magic, being original conjuring tricks from 1918

I started by downloading PDFs of those three files.

Then I installed the two tools I needed:

pip install s3-ocr s3-credentials

I used my s3-credentials tool to create a new S3 bucket and credentials with the ability to write files to it, with the new --statement option (which I released today) to add textract permissions to the generated credentials:

s3-credentials create s3-ocr-demo --statement '{ "Effect": "Allow", "Action": "textract:*", "Resource": "*" }' --create-bucket > ocr.json

(Note that you don't need to use s3-credentials at all if you have AWS credentials configured on your machine with root access to your account - just leave off the -a ocr.json options in the following examples.)

s3-ocr-demo is now a bucket I can use for the demo. ocr.json contains JSON with an access key and secret key for an IAM user account that can interact with that bucket, and also has permission to access the AWS Textract APIs.

I uploaded my three PDFs to the bucket:

s3-credentials put-object s3-ocr-demo latestmagicbeing00hoff.pdf latestmagicbeing00hoff.pdf -a ocr.json
s3-credentials put-object s3-ocr-demo practicalmagicia00harr.pdf practicalmagicia00harr.pdf -a ocr.json
s3-credentials put-object s3-ocr-demo unmaskingrobert00houdgoog.pdf unmaskingrobert00houdgoog.pdf -a ocr.json

(I often use Transmit as a GUI for this kind of operation.)

Then I kicked off OCR jobs against every PDF file in the bucket:

% s3-ocr start s3-ocr-demo --all -a ocr.json
Found 0 files with .s3-ocr.json out of 3 PDFs
Starting OCR for latestmagicbeing00hoff.pdf, Job ID: f66bc2d00fb75d1c42d1f829e5b6788891f9799fda404c4550580959f65a5402
Starting OCR for practicalmagicia00harr.pdf, Job ID: ef085728135d524a39bc037ad6f7253284b1fdbeb728dddcfbb260778d902b55
Starting OCR for unmaskingrobert00houdgoog.pdf, Job ID: 93bd46f02eb099eca369c41e384836d2bd3199b95d415c0257ef3fa3602cbef9

The --all option scans for any file with a .pdf extension. You can pass explicit file names instead if you just want to process one or two files at a time.

This returns straight away, but the OCR process itself can take several minutes depending on the size of the files.

The job IDs can be used to inspect the progress of each task like so:

% s3-ocr inspect-job f66bc2d00fb75d1c42d1f829e5b6788891f9799fda404c4550580959f65a5402
{
    "DocumentMetadata": {
        "Pages": 244
    },
    "JobStatus": "SUCCEEDED",
    "DetectDocumentTextModelVersion": "1.0"
}

Once the job completed, I could preview the text extracted from the PDF like so:

% s3-ocr text s3-ocr-demo latestmagicbeing00hoff.pdf
111 . 116 LATEST MAGIC BEING ORIGINAL CONJURING TRICKS INVENTED AND ARRANGED BY PROFESSOR HOFFMANN (ANGELO LEWIS, M.A.) Author of "Modern Magic," etc. WITH NUMEROUS ILLUSTRATIONS FIRST EDITION NEW YORK SPON & CHAMBERLAIN, 120 LIBERTY ST. ...

To create a SQLite database with a table containing rows for every page of scanned text, I ran this command:

% s3-ocr index s3-ocr-demo pages.db -a ocr.json
Fetching job details [####################################] 100%
Populating pages table [####--------------------------------] 13% 00:00:34

I then published the resulting pages.db SQLite database using Datasette - you can explore it here.
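If you want to poke at the resulting database directly rather than through Datasette, a plain sqlite3 query works too. The sketch below is not part of s3-ocr itself: it assumes the default pages.db filename, a pages table with a text column, and an FTS shadow table named pages_fts (the sqlite-utils convention) - check .schema if your copy differs.

import sqlite3

# Open the database produced by "s3-ocr index" (assumed filename: pages.db).
conn = sqlite3.connect("pages.db")

# Assumption: a "pages" table holds the extracted text and full-text search is
# configured with a shadow table named "pages_fts". Adjust the table and
# column names if ".schema" shows something different.
query = """
    SELECT pages.rowid, pages.text
    FROM pages
    JOIN pages_fts ON pages_fts.rowid = pages.rowid
    WHERE pages_fts MATCH ?
    LIMIT 10
"""

for rowid, text in conn.execute(query, ["magician"]):
    print(rowid, text[:80].replace("\n", " "))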

How s3-ocr works

s3-ocr works by calling Amazon's S3 and Textract APIs.

Textract only works against PDF files in asynchronous mode: you call an API endpoint to tell it "start running OCR against this PDF file in this S3 bucket", then wait for it to finish - which can take several minutes.

It defaults to storing the OCR results in its own storage, expiring after seven days. You can instead tell it to store them in your own S3 bucket - I use that option in s3-ocr.
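For anyone curious what those calls look like, here is a rough boto3 sketch of the asynchronous flow described above. The bucket and file names are placeholders and this is not s3-ocr's actual code, just the two Textract API calls it builds on.

import boto3

textract = boto3.client("textract")

# Start an asynchronous text-detection job against a PDF already in S3.
# Bucket and file names here are placeholders.
response = textract.start_document_text_detection(
    DocumentLocation={
        "S3Object": {"Bucket": "s3-ocr-demo", "Name": "latestmagicbeing00hoff.pdf"}
    },
    # Ask Textract to write its results into our own bucket rather than its
    # default storage, which expires after seven days.
    OutputConfig={"S3Bucket": "s3-ocr-demo", "S3Prefix": "textract-output"},
)
job_id = response["JobId"]

# The job runs for a while; poll (or come back later) to read the results.
result = textract.get_document_text_detection(JobId=job_id)
print(result["JobStatus"])  # IN_PROGRESS, SUCCEEDED or FAILED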

A design challenge I faced was that I wanted to make the command restartable and resumable: if the user cancelled the task, I wanted to be able to pick up from where it had got to. I also want to be able to run it again after adding more PDFs to the bucket without repeating work for the previously processed files.

I also needed to persist those job IDs: Textract writes the OCR results to keys in the bucket called textract-output/JOB_ID/1-? - but there's no indication as to which PDF file the results correspond to.

My solution is to write tiny extra JSON files to the bucket when the OCR job is first started.

If you have a file called latestmagicbeing00hoff.pdf the start command will create a new file called latestmagicbeing00hoff.pdf.s3-ocr.json with the following content:

{ "job_id": "f66bc2d00fb75d1c42d1f829e5b6788891f9799fda404c4550580959f65a5402", "etag": "\"d79af487579dcbbef26c9b3be763eb5e-2\"" }

This associates the job ID with the PDF file. It also records the original ETag of the PDF file - this is so in the future I can implement a system that can re-run OCR if the PDF has been updated.

The existence of these files lets me do two things:

If you run s3-ocr start s3-ocr-demo --all it can avoid re-submitting PDF files that have already been sent for OCR, by checking for the existence of the .s3-ocr.json file.

When you later ask for the results of the OCR it can use these files to associate the PDF with the results.
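The skip-if-already-submitted check is easy to picture: before starting a job for a PDF, look for its companion .s3-ocr.json key. Here is a small boto3 sketch of that idea (the bucket name is a placeholder) - it illustrates the pattern rather than the actual s3-ocr implementation.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "s3-ocr-demo"  # placeholder bucket name


def already_submitted(pdf_key: str) -> bool:
    """Return True if a companion .s3-ocr.json marker exists for this PDF."""
    try:
        s3.head_object(Bucket=bucket, Key=pdf_key + ".s3-ocr.json")
        return True
    except ClientError as error:
        if error.response["Error"]["Code"] == "404":
            return False
        raise


# Scan the bucket for PDFs and skip any that were already sent for OCR.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".pdf") and not already_submitted(obj["Key"]):
            print("would submit:", obj["Key"])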

Scattering .s3-ocr.json files all over the place feels a little messy, so I have an open issue considering moving them all to a s3-ocr/ prefix in the bucket instead.

Try it and let me know what you think

This is a brand new project, but I think it's ready for other people to start trying it out.

I ran it against around 7,000 pages from 531 PDF files in the San Francisco Microscopy Society archive and it seemed to work well!

If you try this out and it works (or it doesn't work) please let me know via Twitter or GitHub.

Releases this week

s3-ocr: 0.4 - (4 releases total) - 2022-06-30
Tools for running OCR against files stored in S3

s3-credentials: 0.12 - (12 releases total) - 2022-06-30
A tool for creating credentials for accessing S3 buckets

datasette-scale-to-zero: 0.1.2 - (3 releases total) - 2022-06-23
Quit Datasette if it has not received traffic for a specified time period

TIL this week

One-liner for running queries against CSV files with SQLite
Ignoring errors in a section of a Bash script
Running OCR against a PDF file with AWS Textract

John Philpin : Lifestream

Trump Voters Need a New Direction

Trump Voters Need a New Direction

“ I never meet Americans who love America more than Trump people do. They really love it—its history, what it means in the world, what it’s done.

💬 Peggy Noonan


Ben Werdmüller

The US Supreme Court just gutted federal climate policy

““Capping carbon dioxide emissions at a level that will force a nationwide transition away from the use of coal to generate electricity may be a sensible ‘solution to the crisis of the day,’” the decision reads. “But it is not plausible that Congress gave EPA the authority to adopt on its own such a regulatory scheme.””

[Link]


Nate

“I made this comic to explain things to my family, but you can have it too.” This is delightful.

[Link]

Wednesday, 29. June 2022

John Philpin : Lifestream

Early Mid Journey experiment

Early Mid Journey experiment.

Text: “the impact of no income for work in the style of Constable”

Interesting how the majority of the vision remains on ‘terra firma’- but introduces a minor replication of same in the ‘cloud’.


Mike Jones: self-issued

OAuth DPoP Presentation at Identiverse 2022

Here’s the DPoP presentation that Pieter Kasselman and I gave at the 2022 Identiverse conference:

Bad actors are stealing your OAuth tokens, giving them control over your information – OAuth DPoP (Demonstration of Proof of Possession) is what we’re doing about it (PowerPoint) (PDF)

A few photographs that workation photographer Brian Campbell took during the presentation follow.

Mike Presenting:

Who is that masked man???

Pieter Presenting:

Tuesday, 28. June 2022

John Philpin : Lifestream

Twenty years. it’s a start. Reminds me of a cartoon I saw…

Twenty years. it’s a start. Reminds me of a cartoon I saw…


Can you be a ‘phony’ and a ‘leaker’?

Can you be a ‘phony’ and a ‘leaker’?


@_Nat Zone

Global Identity GAINs Global Interoperability

On Friday morning, after the previous night's traditionally hard Identiverse party (which I didn't attend), people gathered at 8:30 a.m. and the final day's keynote session began. It opened with an introduction by Andi Hindle, followed by Don Thibeau's announcement of the OpenID Foundation Kim Cameron Award (which I couldn't watch, since I was backstage), and finally our panel, which ran for about 30 minutes.

8:45 a.m. - 9:15 a.m.: Keynote

The panelists were:

Drummond Reed, Director of Trust Services • Avast
Daniel Goldscheider, Co-founder & CEO • Yes.com
Sanjay Jain, Chief Innovation Officer; Partner • CIIE Co.; Bharat Innovation Fund
Nat Sakimura, Chairman • OpenID Foundation

The discussion was essentially about interoperability between trusted networks. The idea of leveraging existing networks seems to be enormously popular in many jurisdictions.

Network of networks, given first in my presentation at EIC 2021 Keynote

I also touched on the point that we need to consider not only the "rich people's computing" use case, where everyone has a smartphone, but also cases where people have no access to a smartphone and both connectivity and electricity are sporadic.

I will update this article with more details once the Identiverse 2022 archive is published.

(Photo credit: Brian Campbell)

The post Global Identity GAINs Global Interoperability first appeared on @_Nat Zone.

Simon Willison

Quoting Jason Miller

The general idea of an “Islands” architecture is deceptively simple: render HTML pages on the server, and inject placeholders or slots around highly dynamic regions. These placeholders/slots contain the server-rendered HTML output from their corresponding widget. They denote regions that can then be "hydrated" on the client into small self-contained widgets, reusing their server-rendered initial HTML.

Jason Miller


The Magic Interview Question

The Magic Interview Question

Jeff Gothelf explains why "Tell me about the last time you [did something]" is the most valuable question you can ask when interviewing a user or potential user.

Via @andybudd


John Philpin : Lifestream

“Everywhere you will find that the wealth of the wealthy springs from the poverty of the poor”

“Everywhere you will find that the wealth of the wealthy springs from the poverty of the poor.”

💬 Peter Kropotkin

Monday, 27. June 2022

Phil Windley's Technometria

Fixing Web Login

Summary: Like the "close" buttons for elevator doors, "keep me logged in" options on web-site authentication screens feel more like a placebo than something that actually works. Getting rid of passwords will mean we need to authenticate less often, or maybe just don't mind as much when we do.

You know the conventional wisdom that the "close" button in elevators isn't really hooked up to anything. That it's just there to make you feel good? "Keep me logged in" is digital identity's version of that button. Why is using authenticated services on the web so unpleasant?

Note that I'm specifically talking about the web, as opposed to mobile apps. As I wrote before, compare your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill the username and only then does it show you the password field [1]. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when most people need something from their bank, they pull out their phone and use the mobile app. I think this is a shame. I like the web. There's more freedom on the web because there are fewer all-powerful gatekeepers. And, for many developers, it's more approachable. The web, by design, is more transparent in how it works, inspiring innovation and accelerating its adoption.

The core problem with the web isn't just passwords. After all, most mobile apps authenticate using passwords as well. The problem is how sessions are set up and refreshed (or not, in the case of the web). On the web, sessions are managed using cookies, or correlation identifiers. HTTP cookies are generated by the server and stored on the browser. Whenever the browser makes a request to the server, it sends back the cookie, allowing the server to correlate all requests from that browser. Web sites, over the years, have become more security conscious and, as a result, most set expirations for cookies. When the cookie has expired, you have to log in again.
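To make that concrete, here is a minimal sketch (not from the original post) of the session cookie a server might issue; the name and value are made up, but the Max-Age attribute is what puts a clock on the session and forces a fresh login when it runs out.

from http.cookies import SimpleCookie

# A toy sketch of the session cookie dance: the server issues a correlation
# identifier with an expiry; once Max-Age runs out the browser stops sending
# it and you are back at the login screen. Name and value are made up.
cookie = SimpleCookie()
cookie["session_id"] = "8f2c1a9d"            # correlation identifier
cookie["session_id"]["max-age"] = 60 * 60    # session expires after one hour
cookie["session_id"]["httponly"] = True      # not readable from page JavaScript
cookie["session_id"]["secure"] = True        # only sent over HTTPS

# The header the server sends back with its response, e.g.:
# Set-Cookie: session_id=8f2c1a9d; HttpOnly; Max-Age=3600; Secure
print(cookie.output())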

Now, your mobile app uses HTTP as well, and so it also uses cookies to link HTTP requests and create a session. The difference is in how you're authenticated. Mobile apps (speaking generally) are driven by APIs. The app makes an HTTP request to the API and receives JSON data in return which it then renders into the screens and buttons you interact with. Most API access is protected by an identity protocol called OAuth.

Getting an access token from the authorization server
Using a token to request data from an API

You've used OAuth if you've ever used any kind of social login like Login with Apple, or Google sign-in. Your mobile app doesn't just ask for your user ID and password and then log you in. Rather, it uses them to authenticate with an authentication server for the API using OAuth. The standard OAuth flow returns an access token that the app stores and then returns to the server with each request. Like cookies, these access tokens expire. But, unlike cookies, OAuth defines a refresh token mechanism that the app can use to get a new access token. Neat, huh?
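Here is a rough sketch of that refresh flow as a client might implement it. The token endpoint, client ID, and token values are placeholders rather than any particular provider's API, but the grant_type=refresh_token exchange is the standard one from RFC 6749.

import requests

# A sketch of the OAuth 2.0 refresh-token grant (RFC 6749, section 6). The
# token endpoint, client ID, and token values are placeholders.
stored_refresh_token = "previously-issued-refresh-token"

response = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": stored_refresh_token,
        "client_id": "my-mobile-app",
    },
)
tokens = response.json()

access_token = tokens["access_token"]
# Some servers rotate the refresh token; keep the new one if present.
stored_refresh_token = tokens.get("refresh_token", stored_refresh_token)

# Subsequent API calls carry the access token as a bearer credential.
api_response = requests.get(
    "https://api.example.com/accounts",
    headers={"Authorization": f"Bearer {access_token}"},
)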

The problem with using OAuth on the web is that it's difficult to trust browsers:

Some are in public places and people forget to log out.
A token in the browser can be attacked with techniques like cross-site scripting.
Browser storage mechanisms are also subject to attack.

Consequently, storing the access token, refresh token, and developer credentials that are used to carry out an OAuth flow is hard—maybe impossible—to do securely.

Solving this problem probably won't happen because we solved browser security problems and decided to use OAuth in the browser. A more likely approach is to get rid of passwords and make repeated authentication much less onerous. Fortunately, solutions are at hand. Most major browsers on most major platforms can now be used as FIDO platform authenticators. This is a fancy way of saying you can use the same mechanisms you use to authenticate to the device (touch ID, face ID, or even a PIN) to authenticate to your favorite web site as well. Verifiable credentials are another up and coming technology that promises to significantly reduce the burdens of passwords and multi-factor authentication.

I'm hopeful that we may really be close to the end for passwords. I think the biggest obstacle to adoption is likely that these technologies are so slick that people won't believe they're really secure. If we can get adoption, then maybe we'll see a resurgence of web-based services as well.

Notes

[1] This is known as "identifier-first authentication". By asking for the identifier, the authentication service can determine how to authenticate you. So, if you're using token authentication instead of passwords, it can present the right option. Some places do this well, merely hiding the password field using Javascript and CSS, so that password managers can still fill the password even though it's not visible. Others don't, and you have to use your password manager twice for a single login.

Photo Credit: Dual elevator door buttons from Nils R. Barth (CC0 1.0)

Tags: identity web mobile oauth cookies


Kerri Lemole

JFF & VC-EDU Plugfest #1: Leaping Towards Interoperable Verifiable Learning & Employment Records

Plugfest #1 Badge Image

Digital versions of learning and employment records (LER) describe a person’s learning and employment experiences and are issued or endorsed by entities making claims about these experiences. The advantage over paper documents is that LERs can contain massive amounts of useful data that describe the experiences, skills and competencies applied, and may even include assets like photos, videos, or content that demonstrate the achievement. The challenge is that this data needs to be understandable and it should be in the hands of those that the data is about so that they have the power to decide who or what has access to it much like they do with their watermarked and notarized paper documents.

LERs that are issued, delivered, and verified according to well-established and supported standards with syntactic, structural, and semantic similarities, can be understood and usable across many systems. This can provide individuals with direct, convenient, understandable, and affordable access to their records (Read more about interoperable verifiable LERs).

To encourage the development of a large and active marketplace of interoperable LER-friendly technology, tools, and infrastructure, Jobs for the Future (JFF), in collaboration with the W3C Verifiable Credentials Education Task Force (VC-EDU) is hosting a series of interoperability plugfests. These plugfests are inspired by the DHS Plugfests and provide funding to vendors that can demonstrate the use of standards such as W3C Verifiable Credentials (VC), and Decentralized Identifiers (DIDs). The first plugfest set the stage for the others by introducing VC wallet vendors to an education data standard called Open Badges and introducing Open Badges platforms to VCs.

Over the past year, the community at VC-EDU and 1EdTech Open Badges members have been working towards an upgrade of Open Badges to 3.0 which drops its web server hosted verification in favor of the VC cryptographic verification method. Open Badges are digital credentials that can represent any type of achievement from micro-credentials to diplomas. Until this upgrade, they have been used more as human verifiable credentials shared on websites and social media than machine verifiable ones. This upgrade increases the potential for machines to interact with these credentials giving individuals more opportunities to decide to use them in educational and employment situations that use computers to read and analyze the data.

Plugfest #1 requirements were kept simple in order to welcome as many vendors as possible. It required that vendors be able to display an Open Badge 3.0 including a badge image, issuer name, achievement name, achievement description, and achievement criteria. Optionally they could also display an issuer logo and other Open Badges 3.0 terms. For a stretch goal, vendors could demonstrate that they verified the badge prior to accepting and displaying it in their wallet app. Lastly, the participants were required to make a 3–5 minute video demonstrating what they’d done.
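To give a feel for what wallets were being asked to display, here is an approximate sketch of an Open Badges 3.0 credential expressed as a Verifiable Credential. It is only an illustration: the field names follow the general OB 3.0 / VC pattern but the context URL, identifiers, and values are placeholders, not the normative schema.

import json

# Approximate shape of an Open Badges 3.0 credential as a Verifiable Credential.
# Field names are illustrative and the context URL, identifiers, and values are
# placeholders - consult the 1EdTech OB 3.0 specification for the real schema.
# A cryptographic proof block is what replaces hosted verification.
badge_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://example.org/ob/v3p0/context.json",  # placeholder context
    ],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": {
        "id": "https://issuer.example.edu",
        "type": "Profile",
        "name": "Example Issuer",  # issuer name
    },
    "issuanceDate": "2022-06-06T00:00:00Z",
    "credentialSubject": {
        "type": "AchievementSubject",
        "id": "did:example:recipient123",
        "achievement": {
            "type": "Achievement",
            "name": "Plugfest #1 Demo",  # achievement name
            "description": "Displayed an Open Badge 3.0 in a VC wallet.",
            "criteria": {"narrative": "Meet the Plugfest #1 requirements."},
            "image": {"id": "https://example.org/badge.png"},  # badge image
        },
    },
}

print(json.dumps(badge_credential, indent=2))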

There were 20 participants from around the world at various stages in their implementation (list of participants). They were provided with a web page listing resources and examples of Open Badges. Because work on Open Badges 3.0 was still in progress, a sample context file was hosted at VC-EDU that would remain unchanged during the plugfest. Open discussion on the VC-EDU email list was encouraged so that they could be archived and shared with the community. These were the first Open Badges 3.0 to be displayed and there were several questions about how to best display them in a VC wallet. As hoped, the cohort worked together to answer these questions in an open conversation that the community could access and learn from.

The timeline to implement was a quick three weeks. Demo day was held on June 6, 2022, the day before the JFF Horizons conference in New Orleans. The videos were watched in batches by the participants and observers who were in person and on Zoom. Between batches, there were questions and discussions.

A complete list of the videos is available on the list of participants. Here are a few examples:

Plugfest #1 succeeded in familiarizing VC wallet vendors with an education data standard and education/workforce platforms with VCs. The participants were the first to issue & display Open Badges 3.0 or for that matter any education standard as a VC. It revealed new questions about displaying credentials and what onboarding resources will be useful.

Each of the participating vendors that met the requirements will be awarded the Plugfest #1 badge (image pictured above). With this badge, they qualify to participate in Plugfest #2 which will focus on issuing and displaying LER VCs. Plugfest #2 will take place in November 2022 with plans to meet in person the day before the Internet Identity Workshop on November 14 in Mountain View, CA. If vendors are interested in Plugfest #2 and didn’t participate in Plugfest #1, there is still an opportunity to do so by fulfilling the same requirements listed above including the video and earning a Plugfest #1 badge.

To learn more, join VC-EDU which meets online most Mondays at 8 am PT/11 am ET/5 pm CET. Meeting connection info and archives can be found here. Subscribe to the VC-EDU mailing list by sending an email to public-vc-edu-request@w3.org with the subject “subscribe” (no email message needed).

Sunday, 26. June 2022

John Philpin : Lifestream

5th Session of The Republic of Readers starts in 25 minutes

5th Session of The Republic of Readers starts in 25 minutes … live conversation with a few of us across 3 continents who like to read books … let me know if you’d like to join us .. I’ll send you the zoom link.

Saturday, 25. June 2022

John Philpin : Lifestream

“There are some aspects of its design that are clearly iPhone-inspired …”

“There are some aspects of its design that are clearly iPhone-inspired — the basic shape, flat sides, button shapes even.”

… gawd … no quote attribution for a reason.


Moving through my news feeds at a pace this morning …

Moving through my news feeds at a pace this morning … it’s like a giant echo chamber of ‘told you so’.


There IS hope. Yang has left the Dems. EEJIT

There IS hope.

Yang has left the Dems.

EEJIT


There is a new movie from New Zealand coming out …

There is a new movie from New Zealand coming out … the entire dialogue is gibberish

My mind went back to 1969 when Eric Sykes launched Rhubarb on to the world.

Nothing you create is ultimately your own, yet all of it is you. Your imagination, it seems to me, is mostly an accidental dance between collected memory and influence, and is not intrinsic to you, rather it is a construction that awaits spiritual ignition.

💬 Nick Cave


Dr Leon Winer’s idea processing article Dave Winer’s dad t

Dr Leon Winer’s idea processing article Dave Winer’s dad talking about his ‘idea processor’ … and explaining why it is not a ‘word processor’ - the paper leans heavily on the work of @dave .. something that Nick Milo should enjoy.

Dr Leon Winer’s idea processing article

Dave Winer’s dad talking about his ‘idea processor’ … and explaining why it is not a ‘word processor’ - the paper leans heavily on the work of @dave .. something that Nick Milo should enjoy.


Thursday, 23. June 2022

Simon Willison

First impressions of DALL-E, generating images from text

I made it off the DALL-E waiting list a few days ago and I've been having an enormous amount of fun experimenting with it. Here are some notes on what I've learned so far (and a bunch of example images too). (For those not familiar with it, DALL-E is OpenAI's advanced text-to-image generator: you feed it a prompt, it generates images. It's extraordinarily good at it.) First, a warning: DALL-E

I made it off the DALL-E waiting list a few days ago and I've been having an enormous amount of fun experimenting with it. Here are some notes on what I've learned so far (and a bunch of example images too).

(For those not familiar with it, DALL-E is OpenAI's advanced text-to-image generator: you feed it a prompt, it generates images. It's extraordinarily good at it.)

First, a warning: DALL-E only allows you to generate up to 50 images a day. I found this out only when I tried to generate image number 51. So there's a budget to watch out for.

I've usually run out by lunch time!

How to use DALL-E

DALL-E is even simpler to use than GPT-3: you get a text box to type in, and that's it. There are no advanced settings to tweak.

It does have one other mode: you can upload your own photo, crop it to a square and then erase portions of it and ask DALL-E to fill them in with a prompt. This feature is clearly still in the early stages - I've not had great results with it yet.

DALL-E always returns six resulting images, which I believe it has selected as the "best" from hundreds of potential results.

Tips on prompts

DALL-E's initial label suggests to "Start with a detailed description". This is very good advice!

The more detail you provide, the more interesting DALL-E gets.

If you type "Pelican", you'll get an image that is indistinguishable from what you might get from something like Google Image search. But the more details you ask for, the more interesting and fun the result.

Fun with pelicans

Here's "A ceramic pelican in a Mexican folk art style with a big cactus growing out of it":

Some of the most fun results you can have come from providing hints as to a medium or art style you would like. Here's "A heavy metal album cover where the band members are all pelicans... made of lightning":

This illustrates a few interesting points. Firstly, DALL-E is hilariously bad at any images involving text. It can make things that look like letters and words but it has no concept of actual writing.

My initial prompt was for "A death metal album cover..." - but DALL-E refused to generate that. It has a filter to prevent people from generating images that go outside its content policy, and the word "death" triggered it.

(I'm confident that the filter can be easily avoided, but I don't want to have my access revoked so I haven't spent any time pushing its limits.)

It's also not a great result - those pelicans are not made of lightning! I tried a tweaked prompt:

"A heavy metal album cover where the band members are all pelicans that are made of lightning":

Still not made of lightning. One more try:

"pelican made of lightning":

Let's try the universal DALL-E cheat code, adding "digital art" to the prompt.

"a pelican made of lightning, digital art"

OK, those look a lot better!

One last try - the earlier prompt but with "digital art" added.

"A heavy metal album cover where the band members are all pelicans that are made of lightning, digital art":

OK, these are cool. The text is gone - maybe the "digital art" influence over-rode the "album cover" a tiny bit there.

This process is a good example of "prompt engineering" - feeding in altered prompts to try to iterate towards a better result. This is a very deep topic, and I'm confident I've only just scratched the surface of it.

Breaking away from album art, here's "A squadron of pelicans having a tea party in a forest with a raccoon, digital art". Often when you specify "digital art" it picks some other additional medium:

Recreating things you see

A fun game I started to play with DALL-E was to see if I could get it to recreate things I saw in real life.

My dog, Cleo, was woofing at me for breakfast. I took this photo of her:

Then I tried this prompt: "A medium sized black dog who is a pit bull mix sitting on the ground wagging her tail and woofing at me on a hardwood floor"

OK, wow.

Later, I caught her napping on the bed:

Here's DALL-E for "A medium sized black pit bull mix curled up asleep on a dark green duvet cover":

One more go at that. Our chicken Cardi snuck into the house and snuggled up on the sofa. Before I evicted her back into the garden I took this photo:

"a black and white speckled chicken with a red comb snuggled on a blue sofa next to a cushion with a blue seal pattern and a blue and white knitted blanket":

Clearly I didn't provide a detailed enough prompt here! I would need to iterate on this one a lot.

Stained glass

DALL-E is great at stained glass windows.

"Pelican in a waistcoat as a stained glass window":

"A stained glass window depicting 5 different nudibranchs"

People

DALL-E is (understandably) quite careful about depictions of people. It won't let you upload images with recognisable faces in them, and when you ask for a prompt with a famous person it will sometimes pull off tricks like showing them from behind.

Here's "The pope on a bicycle leading a bicycle race through Paris":

Though maybe it was the "leading a bicycle race" part that inspired it to draw the image from this direction? I'm not sure.

It will sometimes generate made-up people with visible faces, but they ask users not to share those images.

Assorted images

Here are a bunch of images that I liked, with their prompts.

Inspired by one of our chickens:

"A blue-grey fluffy chicken puffed up and looking angry perched under a lemon tree"

I asked it for the same thing, painted by Salvador Dali:

"A blue-grey fluffy chicken puffed up and looking angry perched under a lemon tree, painted by Salvador Dali":

"Bats having a quinceañera, digital art":

"The scene in an Agatha Christie mystery where the e detective reveals who did it, but everyone is a raccoon. Digital art.":

(It didn't make everyone a raccoon. It also refused my initial prompt where I asked for an Agatha Christie murder mystery, presumably because of the word "murder".)

"An acoustic guitar decorated with capybaras in Mexican folk art style, sigma 85mm":

Adding "sigma 85mm" (and various other mm lengths) is a trick I picked up which gives you realistic images that tend to be cropped well.

"A raccoon wearing glasses and reading a poem at a poetry evening, sigma 35mm":

"Pencil sketch of a Squirrel reading a book":

Pencil sketches come out fantastically well.

"The royal pavilion in brighton covered in snow"

I experienced this once, many years ago when I lived in Brighton - but forgot to take a photo of it. It looked exactly like this.

And a game: fantasy breakfast tacos

It's difficult to overstate how much fun playing with this stuff is. Here's a game I came up with: fantasy breakfast tacos. See how tasty a taco you can invent!

Mine was "breakfast tacos with lobster, steak, salmon, sausages and three different sauces":

Natalie is a vegetarian, which I think puts her at a disadvantage in this game. "breakfast taco containing cauliflower, cheesecake, tomatoes, eggs, flowers":

Closing thoughts

As you can see, I have been enjoying playing with this a LOT. I could easily share twice as much - the above are just the highlights from my experiments so far.

The obvious question raised by this is how it will affect people who generate art and design for a living. I don't have anything useful to say about that, other than recommending that they make themselves familiar with the capabilities of these kinds of tools - which have taken an astonishing leap forward in the past few years.

My current mental model of DALL-E is that it's a fascinating tool for enhancing my imagination. Being able to imagine something and see it visualized a few seconds later is an extraordinary new ability.

I haven't yet figured out how to apply this to real world problems that I face - my attempts at getting DALL-E to generate website wireframes or explanatory illustrations have been unusable so far - but I'll keep on experimenting with it. Especially since feeding it prompts is just so much fun.


How Imagen Actually Works

How Imagen Actually Works Imagen is Google's new text-to-image model, similar to (but possibly even more effective than) DALL-E. This article is the clearest explanation I've seen of how Imagen works: it uses Google's existing T5 text encoder to convert the input sentence into an encoding that captures the semantic meaning of the sentence (including things like items being described as being on

How Imagen Actually Works

Imagen is Google's new text-to-image model, similar to (but possibly even more effective than) DALL-E. This article is the clearest explanation I've seen of how Imagen works: it uses Google's existing T5 text encoder to convert the input sentence into an encoding that captures the semantic meaning of the sentence (including things like items being described as being on top of other items), then uses a trained diffusion model to generate a 64x64 image. That image is passed through two super-res models to increase the resolution to the final 1024x1024 output.
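To make that data flow concrete, here is a minimal, hypothetical sketch of the three-stage pipeline in Python. The function names, the stub implementations, and the 256x256 intermediate size are placeholders for illustration only, not Google's API or code.

import numpy as np

def t5_encode(prompt: str) -> np.ndarray:
    # Stand-in for the T5 text encoder: prompt -> sequence of semantic embeddings.
    return np.zeros((len(prompt.split()), 1024))

def base_diffusion(text_embeddings: np.ndarray) -> np.ndarray:
    # Stand-in for the text-conditioned diffusion model that produces a 64x64 image.
    return np.zeros((64, 64, 3))

def super_resolution(image: np.ndarray, size: int) -> np.ndarray:
    # Stand-in for one of the two diffusion super-resolution stages.
    return np.zeros((size, size, 3))

def generate(prompt: str) -> np.ndarray:
    embeddings = t5_encode(prompt)                 # capture the prompt's meaning
    image_64 = base_diffusion(embeddings)          # 64x64 base image
    image_256 = super_resolution(image_64, 256)    # first super-res stage (assumed size)
    return super_resolution(image_256, 1024)       # second super-res stage -> 1024x1024

print(generate("a pelican made of lightning, digital art").shape)  # (1024, 1024, 3)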


Phil Windley's Technometria

Transferable Accounts Putting Passengers at Risk

Summary: The non-transferability of verifiable credential is one of their super powers. This post examines how that super power can be used to reduce fraud and increase safety in a hired car platform. Bolt is a hired-car service like Uber or Lyft. Bolt is popular because its commissions are less than other ride-sharing platforms. In Bolt drivers in Nigeria are illicitly selling their

Summary: The non-transferability of verifiable credentials is one of their super powers. This post examines how that super power can be used to reduce fraud and increase safety on a hired-car platform.

Bolt is a hired-car service like Uber or Lyft. Bolt is popular because its commissions are lower than those of other ride-sharing platforms. In "Bolt drivers in Nigeria are illicitly selling their accounts, putting passengers at risk", Rest of World reports on an investigation showing that Bolt drivers in Nigeria (and maybe other countries) routinely sell verified accounts to third parties. The results are just what you'd expect:

Adede Sonaike is another Lagos-based Bolt user since 2018, and said she gets frequently harassed and shouted at by its drivers over even the simplest of issues, such as asking to turn down the volume of the car stereo. Sonaike said these incidents have become more common and that she anticipates driver harassment on every Bolt trip. But on March 18, she told Rest of World she felt that her life was threatened. Sonaike had ordered a ride, and confirmed the vehicle and plate number before entering the car. After the trip started, she noticed that the driver’s face didn’t match the image on the app. “I asked him why the app showed me a different face, and he said Bolt blocked his account and that [he] was using his brother’s account, and asked why I was questioning him,” she recalled. She noticed the doors were locked and the interior door handle was broken, and became worried. Sonaike shared her ride location with her family and asked the driver to stop, so she could end the trip. He only dropped her off after she threatened to break his windows. From Bolt drivers in Nigeria are illicitly selling their accounts
Referenced 2022-06-09T09:44:24-0400

The problem is that accounts are easily transferable, and reputations tied to transferable accounts can’t be trusted since they don’t reflect the actions of the person currently using the account. Making accounts non-transferable using traditional means is difficult because they’re usually protected by something you know (e.g., a password), and that can be easily changed and exchanged. Even making the profile picture difficult to change (like Bolt apparently does) isn’t a great solution, since people may not check the picture, or may fall for stories like the one the driver gave the passenger in the preceding quote.

Verifiable credentials are a better solution because they’re designed to not be transferable [1]. Suppose Bob wants to sell his Bolt account to Malfoy. Alice, a rider, wants to know the driver is really the holder of the account. Bolt issued a verifiable credential (VC) to Bob when he signed up. The VC issuing and presenting protocols cryptographically combine a non-correlatable identifier and a link secret and use zero-knowledge proofs (ZKPs) to present the credential. ZKP-based credential presentations have a number of methods that can be used to prevent transferring the credential. I won’t go into the details, but the paper I link to provides eight techniques that can be used to prevent the transfer of a VC. We can be confident the VC was issued to the person presenting it.

Bolt could require that Bob use the VC they provided when he signed up to log into his account each time he starts driving. They could even link a bond or financial escrow to the VC to ensure it's not transferred. To prevent Bob from activating the account for Malfoy at the beginning of each driving period, Alice, and other passengers could ask drivers for proof that they're a legitimate Bolt driver by requesting a ZKP from the Bolt credential. Their Bolt app could do this automatically and even validate that the credential is from Bolt.
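As a rough illustration of that passenger-side check, here is a sketch in Python. The wallet object and its methods are entirely hypothetical placeholders (a real deployment would use something like an Aries present-proof exchange); only the shape of the interaction is the point.

# Hypothetical sketch only: the proof-request shape and the wallet API below
# are placeholders, not a real Aries or Bolt interface.

BOLT_ISSUER_DID = "did:example:bolt"  # placeholder identifier for Bolt as issuer

def passenger_checks_driver(driver_wallet):
    # Ask the driver's wallet for a ZKP-based presentation derived from a
    # credential Bolt issued, without revealing unrelated attributes.
    proof_request = {
        "name": "bolt-driver-check",
        "requested_attributes": {
            "status": {"restrictions": [{"issuer_did": BOLT_ISSUER_DID}]},
        },
    }
    presentation = driver_wallet.present_proof(proof_request)
    # Verification covers the issuer's signature and the link-secret binding,
    # which is what makes the credential non-transferable in practice.
    return presentation.verify()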

Knowing that the credential was issued to the person presenting it is one of the four cryptographic cornerstones of credential fidelity. The Bolt app can ensure the provenance of the credential Bob presents. Alice doesn’t have to trust Bob or know very much about Bob personally, just that he really is the driver that Bolt has certified.

The non-transferability of verifiable credentials is one of their super powers. A lot of the talk about identity in Web 3 has focused on NFTs. NFTs are, for the most part, designed to be transferable [2]. In that sense, they’re no better than a password-protected account. Identity relies on knowing that the identifiers and attributes being presented are worthy of confidence and can be trusted. Otherwise, identity isn’t reducing risk the way it should. That can’t happen with transferable identifiers—whether they’re password-based accounts or even NFTs. There’s no technological barrier to Bolt implementing this solution now...and they should for the safety of their customers.

Notes

1. I’m speaking of features specific to the Aries credential exchange protocol in this post.

2. Recently Vitalik et al. proposed what they call a soul-bound token as a non-transferable credential type for Web3. I’m putting together my thoughts on that for a future post.

Photo Credit: A Buenos Aires taxi ride from Phillip Capper (CC BY 2.0)

Tags: identity ssi verifiable+credentials reputation


John Philpin : Lifestream

“Go through all the sessions on the official program and y

“Go through all the sessions on the official program and you’ll see lots of talk about Web3, the metaverse and uplifting stories of Ukraine, but you won’t find anybody holding a microphone talking about China, about data misuse, about ad fraud.” 💬 Mark Di Stefano

“Go through all the sessions on the official program and you’ll see lots of talk about Web3, the metaverse and uplifting stories of Ukraine, but you won’t find anybody holding a microphone talking about China, about data misuse, about ad fraud.”

💬 Mark Di Stefano


Wednesday, 22. June 2022

John Philpin : Lifestream

Wow - Stowe Boyd is recommending my newsletter …

Wow - Stowe Boyd is recommending my newsletter …

Wow - Stowe Boyd is recommending my newsletter …


Kerri Lemole

Interoperability for Verifiable Learning and Employment Records

“real-world slide together” by fdecomite is licensed under CC BY 2.0. in·ter·op·er·a·ble /ˌin(t)ərˈäp(ə)rəb(ə)l/ adjective (of computer systems or software) able to exchange and make use of information. (Oxford Dictionary) if two products, programs, etc. are interoperable, they can be used together. (Cambridge Dictionary) It’s no surprise that digital versions of learning and employment rec
“real-world slide together” by fdecomite is licensed under CC BY 2.0.

in·ter·op·er·a·ble
/ˌin(t)ərˈäp(ə)rəb(ə)l/
adjective

(of computer systems or software) able to exchange and make use of information. (Oxford Dictionary)

if two products, programs, etc. are interoperable, they can be used together. (Cambridge Dictionary)

It’s no surprise that digital versions of learning and employment records (LERs) like certifications, licenses, and diplomas can introduce new worlds of opportunity and perspective. If they are issued, delivered, and verified according to well-established and supported standards, computers are able to exchange and use this information securely and interoperably. This practice of technical interoperability could also precipitate an increase in systemic interoperability by providing more individuals with direct, convenient, understandable, and affordable access to their confirmable LERs that are syntactically, structurally, and semantically similar. This can make digital credentials useful across many different systems.

Interoperability of digital LERs has three primary aspects:

Verification describes when the claims were made, who the credentials are from, who they are about, and provides methods to prove these identities and that the claim data have remained unchanged since issuance.

Delivery describes how the LERs move from one entity to another; overlaps with the verification layer.

Content describes what each claim is and is also referred to as the credential subject.

Verification
At the Worldwide Web Consortium (W3C) there’s a standard called Verifiable Credentials (VC) that describes how claims can be verified. It’s being used for claims that require unmitigated proof like government credentials, identity documents, supply chain management, and education credentials. A diploma issued as a VC by a university would contain content representing the diploma and would be digitally signed by the university. The identities of the university and the student could be represented by a Decentralized Identifier (DID, also a recommendation developed at the W3C for cryptographically verifiable identities). The diploma could be stored in a digital wallet app where the student would have access to their cryptographically verifiable digital diploma at a moment’s notice. Verifiers, such as employers, who understand the VC and DID standards could verify the diploma efficiently without notifying the university. Digitally, this resembles how watermarked and notarized documents are handled offline.
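For illustration, here is a simplified sketch of what such a diploma VC could look like, written as a Python dict. The core field names follow the W3C Verifiable Credentials data model, but the DIDs and degree details are placeholders, and a real credential would also carry a proof section with the university’s signature.

diploma_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university",      # the university's DID (placeholder)
    "issuanceDate": "2022-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:student",         # the student's DID (placeholder)
        "degree": {"type": "BachelorDegree", "name": "Bachelor of Science"},
    },
    # "proof": {...}  # digital signature added by the university at issuance
}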

Delivery
The connections between the wallet, the university credential issuing system, the student, and the verifier encompass the delivery of VCs. This overlaps with verification because DIDs and digital signature methods must be taken into consideration when the LERs are issued and transported. There are a handful of ways to accomplish this and several efforts aiming to make this more interoperable, including the W3C CCG VC HTTP API and DIF Presentation Exchange.

Content
Verifiers can recognize that a VC is a diploma, certification, or a transcript because there are many semantic standards with vocabularies that describe learning and employment records like these. Open Badges and the Comprehensive Learner Record (CLR) at 1EdTech (formerly IMS Global) provide descriptions of credentials and the organizations that issue them. Both of these standards have been working towards upgrades (Open Badges 3.0 and CLR 2.0) to use the W3C Verifiable Credential model for verification. They provide a structural and semantic content layer that describes the claim as a type of achievement, the criteria met, and a potential profile of the issuer.

Another standard is the Credential Transparency Description Language (CTDL) at Credential Engine, which provides a more in-depth vocabulary to describe organizations, skills, jobs, and even pathways. When LER VCs contain CTDL content, on its own or in addition to Open Badges or CLR, the rich data source can precisely describe who or what is involved in an LER, providing additional context and taxonomy that can be aligned with opportunities.

Standards groups continue to seek ways to meet the needs of issuing services, wallet vendors, and verifying services that are coming to market. The Credentials Community Group (CCG) is a great place to get acquainted with the community working on this. The W3C Verifiable Credentials for Education Task Force (VC-EDU) is a subgroup of the CCG that is exploring how to represent education, employment, and achievement verifiable credentials. This includes pursuing data model recommendations, usage guidelines, and best practices. Everyone at every stage of technology understanding is welcome to join in because we are all learning and every perspective increases understanding. VC-EDU meets online most Mondays at 8 am PT/11 am ET/5 pm CET. Meeting connection info and archives can be found here. Subscribe to the VC-EDU mailing list by sending an email to public-vc-edu-request@w3.org with the subject “subscribe” (no email message needed).


John Philpin : Lifestream

I love how you can take a simple game that is really rather

I love how you can take a simple game that is really rather uninteresting and with just a couple of tweaks make it not just interesting - but also challenging.

If you are into Mac drawing apps - Acorn is currently availa

If you are into Mac drawing apps - Acorn is currently available for $20

The corporations backing the Texas Republican Party - Judd L

The corporations backing the Texas Republican Party - Judd Legum Corporations should support whichever political party they wish. They should also get called out when what they say is not how they act.

The corporations backing the Texas Republican Party - Judd Legum

Corporations should support whichever political party they wish.

They should also get called out when what they say is not how they act.

Tuesday, 21. June 2022

Doc Searls Weblog

What shall I call my newsletter?

I’ve been blogging since 1999, first at weblog.searls.com, and since 2007 here. I also plan to continue blogging here* for the rest of my life. But it’s clear now that newsletters are where it’s at, so I’m going to start one of those. The first question is, What do I call it? The easy thing, and […]

I’ve been blogging since 1999, first at weblog.searls.com, and since 2007 here. I also plan to continue blogging here* for the rest of my life. But it’s clear now that newsletters are where it’s at, so I’m going to start one of those.

The first question is, What do I call it?

The easy thing, and perhaps the most sensible, is Doc Searls Newsletter, or Doc Searls’ Newsletter, in keeping with the name of this blog. In branding circles, they call this line extension.

Another possibility is Spotted Hawk. This is inspired by Walt Whitman, who wrote,

The spotted hawk swoops by and accuses me,
he complains of my gab and my loitering.
I too am not a bit tamed.
I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

In the same spirit I might call the newsletter Barbaric Yawp. But ya kinda gotta know the reference, which even English majors mostly don’t. Meanwhile, Spotted Hawk reads well, even if the meaning is a bit obscure. Hell, the Redskins or the Indians could have renamed themselves the Spotted Hawks.

Yet barbaric yawping isn’t my style, even if I am untamed and sometimes untranslatable.

Any other suggestions?

As a relevant but unrelated matter, I also have to decide how to produce it. The easy choice is to use Substack, which all but owns the newsletter platform space right now. But Substack newsletters default to tracking readers, and I don’t want that. I also hate paragraph-long substitutes for linked URLs, and tracking cruft appended to the ends of legible URLs. (When sharing links from newsletters, always strip that stuff off. Pro tip: the cruft usually starts with a question mark.) I’m tempted by Revue, entirely because Julia Angwin and her team at The Markup went through a similar exercise in 2019 and chose Revue for their newsletter. I’m already playing with that one. Other recommendations are welcome. Same goes for managing the mailing list if I don’t use a platform. Mailman perhaps?

*One reason I keep this blog up is that Harvard hosts it, and Harvard has been around since 1636. I also appreciate deeply its steady support of what I do here and at ProjectVRM, which also manifests as a blog, at the Berkman Klein Center.


John Philpin : Lifestream

So …. craiyon …

So …. craiyon …

So …. craiyon …


Identity Woman

Seeing Self-Sovereign Identity in Historical Context

Abstract A new set of technical standards called Self-Sovereign Identity (SSI) is emerging, and it reconfigures how digital identity systems work. My thesis is that the new configuration aligns better with the emergent ways our social systems in the west have evolved identity systems to  work at a mass scale and leverage earlier paper-based technologies. […] The post Seeing Self-Sovereign I

Abstract A new set of technical standards called Self-Sovereign Identity (SSI) is emerging, and it reconfigures how digital identity systems work. My thesis is that the new configuration aligns better with the emergent ways our social systems in the west have evolved identity systems to  work at a mass scale and leverage earlier paper-based technologies. […]

The post Seeing Self-Sovereign Identity in Historical Context appeared first on Identity Woman.

Monday, 20. June 2022

Simon Willison

Joining CSV files in your browser using Datasette Lite

I added a new feature to Datasette Lite - my version of Datasette that runs entirely in your browser using WebAssembly (previously): you can now use it to load one or more CSV files by URL, and then run SQL queries against them - including joins across data from multiple files. Your CSV file needs to be hosted somewhere with access-control-allow-origin: * CORS headers. Any CSV file hosted on Git

I added a new feature to Datasette Lite - my version of Datasette that runs entirely in your browser using WebAssembly (previously): you can now use it to load one or more CSV files by URL, and then run SQL queries against them - including joins across data from multiple files.

Your CSV file needs to be hosted somewhere with access-control-allow-origin: * CORS headers. Any CSV file hosted on GitHub provides these, if you use the link you get by clicking on the "Raw" version.

Loading CSV data from a URL

Here's the URL to a CSV file of college fight songs collected by FiveThirtyEight in their data repo as part of the reporting for this story a few years ago:

https://raw.githubusercontent.com/fivethirtyeight/data/master/fight-songs/fight-songs.csv

You can pass this to Datasette Lite in two ways:

You can load the web app, click the "Load data by URL to a CSV file" button and paste in the URL.

Or you can pass it as a ?csv= parameter to the application, like this:

https://lite.datasette.io/?csv=https://raw.githubusercontent.com/fivethirtyeight/data/master/fight-songs/fight-songs.csv

Once Datasette has loaded, a data database will be available with a single table called fight-songs.

As you navigate around in Datasette the URL bar will update to reflect current state - which means you can deep-link to table views with applied filters and facets:

https://lite.datasette.io/?csv=https://raw.githubusercontent.com/fivethirtyeight/data/master/fight-songs/fight-songs.csv#/data/fight-songs?_facet=conference&_facet=student_writer&_facet=official_song

Or even link to the result of a custom SQL query:

https://lite.datasette.io/?csv=https://raw.githubusercontent.com/fivethirtyeight/data/master/fight-songs/fight-songs.csv#/data?sql=select+school%2C+conference%2C+song_name%2C+writers%2C+year%2C+student_writer+spotify_id+from+%5Bfight-songs%5D+order+by+rowid+limit+101

Loading multiple files and joining data

You can pass the ?csv= parameter more than once to load data from multiple CSV files into the same virtual data database. Each CSV file will result in a separate table.

For this demo I'll use two CSV files.

The first is us-counties-recent.csv from the NY Times covid-19-data repository, which lists the most recent numbers for Covid cases for every US county.

The second is us_census_county_populations_2019.csv, a CSV file listing the population of each county according to the 2019 US Census which I extracted from this page on the US Census website.

Both of those tables include a column called fips, representing the FIPS county code for each county. These 4-5 digit codes are ideal for joining the two tables.

Here's a SQL query which joins the two tables, filters for the data for the most recent date represented (using where date = (select max(date) from [us-counties-recent])) and calculates cases_per_million using the cases and the population:

select
  [us-counties-recent].*,
  us_census_county_populations_2019.population,
  1.0 * [us-counties-recent].cases / us_census_county_populations_2019.population * 1000000 as cases_per_million
from
  [us-counties-recent]
  join us_census_county_populations_2019 on us_census_county_populations_2019.fips = [us-counties-recent].fips
where
  date = (select max(date) from [us-counties-recent])
order by
  cases_per_million desc

And since everything in Datasette Lite can be bookmarked, here's the super long URL (clickable version here) that executes that query against those two CSV files:

https://lite.datasette.io/?csv=https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties-recent.csv&csv=https://raw.githubusercontent.com/simonw/covid-19-datasette/main/us_census_county_populations_2019.csv#/data?sql=select%0A++%5Bus-counties-recent%5D.*%2C%0A++us_census_county_populations_2019.population%2C%0A++1.0+*+%5Bus-counties-recent%5D.cases+%2F+us_census_county_populations_2019.population+*+1000000+as+cases_per_million%0Afrom%0A++%5Bus-counties-recent%5D%0A++join+us_census_county_populations_2019+on+us_census_county_populations_2019.fips+%3D+%5Bus-counties-recent%5D.fips%0Awhere%0A++date+%3D+%28select+max%28date%29+from+%5Bus-counties-recent%5D%29%0Aorder+by%0A++cases_per_million+desc
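As a rough sketch of how such a deep link could be assembled, here is some Python that mirrors the URL pattern above: repeated ?csv= parameters in the query string, plus a URL-encoded SQL query in the fragment. This is illustrative string assembly, not an official Datasette API, and the SQL here is a simpler stand-in query.

from urllib.parse import urlencode

csv_urls = [
    "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties-recent.csv",
    "https://raw.githubusercontent.com/simonw/covid-19-datasette/main/us_census_county_populations_2019.csv",
]
sql = "select * from [us-counties-recent] limit 10"  # any query against the loaded tables

link = (
    "https://lite.datasette.io/?"
    + "&".join("csv=" + url for url in csv_urls)   # one csv= parameter per file
    + "#/data?"
    + urlencode({"sql": sql})                       # SQL goes in the fragment
)
print(link)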


The State of WebAssembly 2022

The State of WebAssembly 2022 Colin Eberhardt talks through the results of the State of WebAssembly 2022 survey. Rust continues to dominate as the most popular language for working with WebAssembly, but Python has seen a notable increase in interest.

The State of WebAssembly 2022

Colin Eberhardt talks through the results of the State of WebAssembly 2022 survey. Rust continues to dominate as the most popular language for working with WebAssembly, but Python has seen a notable increase in interest.

Sunday, 19. June 2022

Simon Willison

WarcDB

WarcDB Florents Tselai built this tool for loading web crawl data stored in WARC (Web ARChive) format into a SQLite database for smaller-scale analysis with SQL, on top of my sqlite-utils Python library. Via Hacker News

WarcDB

Florents Tselai built this tool for loading web crawl data stored in WARC (Web ARChive) format into a SQLite database for smaller-scale analysis with SQL, on top of my sqlite-utils Python library.

Via Hacker News


Werdmüller on Medium

Tech on Juneteenth

Some tech firms perpetuate modern-day slavery by using prison labor. Continue reading on Medium »

Some tech firms perpetuate modern-day slavery by using prison labor.

Continue reading on Medium »


Simon Willison

Weeknotes: datasette-socrata, and the last 10%...

... takes 90% of the work. I continue to work towards a preview of the new Datasette Cloud, and keep finding new "just one more things" to delay inviting in users. Aside from continuing to work on that, my big project in the last week was a blog entry: Twenty years of my blog, in which I celebrated twenty years since starting this site by pulling together a selection of highlights from over the

... takes 90% of the work. I continue to work towards a preview of the new Datasette Cloud, and keep finding new "just one more things" to delay inviting in users.

Aside from continuing to work on that, my big project in the last week was a blog entry: Twenty years of my blog, in which I celebrated twenty years since starting this site by pulling together a selection of highlights from over the years.

I've actually updated that entry a few times over the past few days as I remembered new highlights I forgot to include - the Twitter thread that accompanies the entry has those updates, starting here.

datasette-socrata

I've been thinking a lot about the Datasette Cloud onboarding experience: how can I help new users understand what Datasette can be used for as quickly as possible?

I want to get them to a point where they are interacting with a freshly created table of data. I can provide some examples, but I've always thought that one of the biggest opportunities for Datasette lies in working with the kind of data released by governments through their Open Data portals. This is especially true for its usage in the field of data journalism.

Many open data portals - including the one for San Francisco - are powered by a piece of software called Socrata. And it offers a pretty comprehensive API.

datasette-socrata is a new Datasette plugin which can import data from Socrata instances. Give it the URL to a Socrata dataset (like this one, my perennial favourite, listing all 195,000+ trees managed by the city of San Francisco) and it will import that data and its associated metadata into a brand new table.

It's pretty neat! It even shows you a progress bar, since some of these datasets can get pretty large:

As part of building this I ran into the interesting question of what a plugin like this should do if the system it is running on runs out of disk space.

I'm still working through that, but I'm experimenting with a new type of Datasette plugin for it: datasette-low-disk-space-hook, which introduces a new plugin hook (low_disk_space(datasette)) which other plugins can use to report a situation where disk space is running out.

I wrote a TIL about that here: Registering new Datasette plugin hooks by defining them in other plugins.

I may use this same trick for a future upgrade to datasette-graphql, to allow additional plugins to register custom GraphQL mutations.
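To sketch what a plugin implementing that hook might look like: the hookimpl decorator is standard Datasette plugin machinery, but the 1 GB threshold and the return value shown here are my assumptions, not the hook's documented contract.

import shutil
from datasette import hookimpl

@hookimpl
def low_disk_space(datasette):
    # Assumption: report low disk space when less than 1 GB remains free.
    free_bytes = shutil.disk_usage("/").free
    if free_bytes < 1_000_000_000:
        return {"free_bytes": free_bytes}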

sqlite-utils 3.27

In working on datasette-socrata I was inspired to push out a new release of sqlite-utils. Here are the annotated release notes:

Documentation now uses the Furo Sphinx theme. (#435)

I wrote about this a few weeks ago - the new documentation theme is now live for the stable documentation.

Code examples in documentation now have a "copy to clipboard" button. (#436)

I made this change to Datasette first - the sphinx-copybutton plugin adds a neat "copy" button next to every code example.

I also like how this encourages ensuring that every example will work if people directly copy and paste it.

sqlite_utils.utils.rows_from_file() is now a documented API, see Reading rows from a file. (#443)

Francesco Frassinelli filed an issue about this utility function, which wasn't actually part of the documented stable API, but I saw no reason not to promote it.

The function incorporates the logic that the sqlite-utils CLI tool uses to automatically detect if a provided file is CSV, TSV or JSON and to detect the CSV delimiter and other settings.

rows_from_file() has two new parameters to help handle CSV files with rows that contain more values than are listed in that CSV file's headings: ignore_extras=True and extras_key="name-of-key". (#440)

It turns out csv.DictReader in the Python standard library has a mechanism for handling CSV rows that contain too many commas.

In working on this I found a bug in mypy which I reported here, but it turned out to be a dupe of an already fixed issue.
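Putting the pieces above together, here is a rough usage sketch of rows_from_file(). The file name is illustrative, and the exact signature and return value should be checked against the sqlite-utils documentation.

from sqlite_utils.utils import rows_from_file

# Open in binary mode so the helper can sniff whether the file is CSV, TSV or JSON.
with open("data.csv", "rb") as fp:
    rows, detected_format = rows_from_file(fp, ignore_extras=True)
    for row in rows:
        print(detected_format, row)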

sqlite_utils.utils.maximize_csv_field_size_limit() helper function for increasing the field size limit for reading CSV files to its maximum, see Setting the maximum CSV field size limit. (#442)

This is a workaround for the following Python error:

_csv.Error: field larger than field limit (131072)

It's an error that occurs when a field in a CSV file is longer than a default length.

Saying "yeah, I want to be able to handle the maximum length possible" is surprisingly hard - Python doesn't let you set a maximum, and can throw errors depending on the platform if you set a number too high. Here's the idiom that works, which is encapsulated by the new utility function:

field_size_limit = sys.maxsize

while True:
    try:
        csv_std.field_size_limit(field_size_limit)
        break
    except OverflowError:
        field_size_limit = int(field_size_limit / 10)
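The new helper encapsulates that loop. A minimal usage sketch follows; the file name is illustrative.

import csv
from sqlite_utils.utils import maximize_csv_field_size_limit

maximize_csv_field_size_limit()  # raise the CSV field size limit as far as the platform allows
with open("big-fields.csv", newline="") as fp:
    for row in csv.DictReader(fp):
        ...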
table.search(where=, where_args=) parameters for adding additional WHERE clauses to a search query. The where= parameter is available on table.search_sql(...) as well. See Searching with table.search(). (#441)

This was a feature suggestion from Tim Head.
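A short sketch of the new parameters in use; the table and column names are hypothetical, and the insert/enable_fts lines are just setup so the search has something to match.

import sqlite_utils

db = sqlite_utils.Database("content.db")
db["articles"].insert({"title": "Datasette Lite", "body": "Datasette in the browser", "author": "Simon"})
db["articles"].enable_fts(["title", "body"])  # one-time full-text search setup

# Combine full-text search with an extra WHERE clause using the new parameters.
rows = list(db["articles"].search("datasette", where="author = ?", where_args=["Simon"]))
print(rows)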

Fixed bug where table.detect_fts() and other search-related functions could fail if two FTS-enabled tables had names that were prefixes of each other. (#434)

This was quite a gnarly bug. sqlite-utils attempts to detect if a table has an associated full-text search table by looking through the schema for another table that has a definition like this one:

CREATE VIRTUAL TABLE "searchable_fts" USING FTS4 (
    text1,
    text2,
    [name with . and spaces],
    content="searchable"
)

I was checking for content="searchable" using a LIKE query:

SELECT name FROM sqlite_master WHERE rootpage = 0 AND sql LIKE '%VIRTUAL TABLE%USING FTS%content=%searchable%'

But this would incorrectly match strings such as content="searchable2" as well!

Releases this week

datasette-socrata: 0.3 - (4 releases total) - 2022-06-17
Import data from Socrata into Datasette

datasette-low-disk-space-hook: 0.1 - (2 releases total) - 2022-06-17
Datasette plugin providing the low_disk_space hook for other plugins to check for low disk space

sqlite-utils: 3.27 - (101 releases total) - 2022-06-15
Python CLI utility and library for manipulating SQLite databases

datasette-ics: 0.5.1 - (4 releases total) - 2022-06-10
Datasette plugin for outputting iCalendar files

datasette-upload-csvs: 0.7.1 - (9 releases total) - 2022-06-09
Datasette plugin for uploading CSV files and converting them to database tables

TIL this week

Making HTTP calls using IPv6
Formatting thousands in Jinja
Using iconv to convert the text encoding of a file
Registering new Datasette plugin hooks by defining them in other plugins

Saturday, 18. June 2022

Simon Willison

Quoting Addy Osmani

Becoming a good engineer is about collecting experience. Each project, even small ones, is a chance to add new techniques and tools to your toolbox. Where this delivers even more value is when you can solve problems by pairing techniques learned on one project with tools learned working on another. It all adds up. — Addy Osmani

Becoming a good engineer is about collecting experience. Each project, even small ones, is a chance to add new techniques and tools to your toolbox. Where this delivers even more value is when you can solve problems by pairing techniques learned on one project with tools learned working on another. It all adds up.

Addy Osmani

Friday, 17. June 2022

Ludo Sketches

The end of a chapter

After almost 12 years, I’ve decided to close the ForgeRock chapter and leave the company. Now that the company has successfully gone public, and has been set on a trajectory to lead the Identity industry, it was time for me… Continue reading →

After almost 12 years, I’ve decided to close the ForgeRock chapter and leave the company.

Now that the company has successfully gone public, and has been set on a trajectory to lead the Identity industry, it was time for me to pause and think about what matters to me in life. So I’ve chosen to leverage the exciting experience I’ve gained with ForgeRock and to start giving back to the startups in my local community.

But what an incredible journey it has been! I joined the company when it had a dozen employees. I was given the opportunity to found the French subsidiary, to start an engineering center, to build an amazing team of developers, and to deliver some rock-solid, highly scalable products. For this opportunity, I will always be thankful to the amazing 5 Founders of ForgeRock.

The ForgeRock Founders: Hermann, Victor, Lasse, Steve, Jonathan.

I have nothing but good memories of all those years, and of the amazing events organized for all the employees or for our customers. There have been many IdentityLive events (formerly known as Identity Summits), and fewer but so energizing Company Meetings, in Portugal, Greece, the USA, and Italy.

I’ve worked with a team of rock-star product managers, from whom I’ve learnt so much:

I’ve hired and built a team of talented software engineers, some of whom I’ve known for 20 years:

I don’t have enough space to write about all the different things we’ve done together, at work, outside work… But yeah, we rocked!

Overall, those 12 years have been an incredible and exciting journey, but what made the journey so exceptional is all the people who have come along. Without you, nothing would have been the same. Thank you! Farewell, but I’m sure we will meet again.

Thursday, 16. June 2022

reb00ted

A new term: "Platform DAO"

I usually invent technology, but today I present you with a new term: Platform DAO. Web searches don’t bring anything meaningful up, so I claim authorship on this term. Admittedly the amount of my inventiveness here is not very large. Trebor Scholz coined the term “Platform Cooperative” in an article in 2014 (according to Wikipedia). He started with the long established term of a “cooperativ

I usually invent technology, but today I present you with a new term:

Platform DAO.

Web searches don’t bring anything meaningful up, so I claim authorship on this term.

Admittedly the amount of my inventiveness here is not very large. Trebor Scholz coined the term “Platform Cooperative” in an article in 2014 (according to Wikipedia). He started with the long established term of a “cooperative”, and applied it to an organization that creates and maintains a software platform. So we get a “Platform Co-op”.

I’m doing the exact same thing: a “Platform DAO” is a Decentralized Autonomous Organization, a DAO, that creates and maintains a software platform. Given that DAOs largely are the same as Co-ops, except that they use technology in order to automate, and reduce the cost of some governance tasks – and also use technology for better transparency – it seems appropriate to create that parallel.

Why is this term needed? This is where I think things get really interesting.

The Platform co-op article on Wikipedia lists many reasons why platform co-ops could deliver much more societal benefits than traditional vendor-owned tech platforms can. But it also points out some core difficulties, which is why we haven’t seen too many successful platform co-ops. At the top of which is the difficulty of securing early-stage capital.

Unlike in co-ops, venture investors these days definitely invest in DAOs.

Which means we might see the value of “Platform Co-ops” realized in their form as “Platform DAOs” as venture investment would allow them to compete at a much larger scale.

Imagine if today, somebody started Microsoft Windows. As a DAO. Where users, and developers, and the entire VAR channel, are voting members of the DAO. This DAO will be just as valuable as Microsoft – in fact I would argue it would be more valuable than Microsoft –, with no reason to believe it would deliver fewer features or quality, but lots of reasons to believe that the ecosystem would rally around it in a way that it would never rally around somebody else’s company.

Want to help? (No, I’m not building a Windows DAO. But a tech platform DAO that could be on that scale.) Get in touch!

Wednesday, 15. June 2022

Moxy Tongue

Self-Administered Governance In America

"We the people" have created a new living master; a bureaucratic machine, not "for, by, of" our control as people. This bureaucratic system, protected by a moat of credentialed labor certification processes and costs, is managed via plausible deniability practices now dominating the integrity of the civil systems which a civil society is managed by. Living people, now legal "people", function as as
"We the people" have created a new living master; a bureaucratic machine, not "for, by, of" our control as people. This bureaucratic system, protected by a moat of credentialed labor certification processes and costs, is managed via plausible deniability practices now dominating the integrity of the civil systems which a civil society is managed by. Living people, now legal "people", function as assets under management and social liabilities leveraged for the development of budget expenditures not capable of self-administration by the people they exist to serve. This "bureaucratic supremacy" in governed process has rendered words meaningless in practice, and allowed a new Administrative King to rule the Sovereign territory where American self-governance was envisioned and Constituted, once upon a time.
President after President, one precedent after another is used to validate actions that lack integrity under inspection. "Civil Rights Laws", suspended by bureaucratic supremacy alone, allow a President to nominate and hire a Supreme Court Justice on the stated basis of gender, skin color and qualifications. In lieu of a leader demonstrating what self-governance is capable of, "we the people" are rendered spectators of lifetime bureaucrats demonstrating their bureaucratic supremacy upon their "people". 
Throw all the words away; democracy, republic, authoritarian dictatorship, gone. None matter, none convey meaningful distinctions.
You can either self-administer your role in a "civil society", or you can not. If you can not, it need not matter what you call your Government, or what form of "voting" is exercised. In the end, you are simply data. You are data under management, a demographic to be harvested. You will either be able to self-administer your participation, or you will ask for endless permission of your bureaucratic masters who fiddle with the meaning of those permissions endlessly. In this context, a bureaucratic process like gerrymandering is simply an exercise in bureaucratic fraud, always plausibly deniable. 
Read all the history books you like; dead history is dead.
Self-administered governance of a civil society is the basis of the very existence of a "civil society" derived "of, by, for" people. People, Individuals All, living among one another, expressing the freedom of self-administration, is the only means by which a computationally accurate Constitution can exist. The imperfection of politics, driven by cult groupings of people practicing group loyalty for leverage in governed practices is itself a tool of leverage held exclusively by the bureaucracy. Self-administration of one's vote, held securely in an authenticated custodial relationship as an expression of one's authority in Government, is the means by which a Government derived "of, by, for" people comes into existence, and is sustained. Bureaucratic processes separating such people from the self-administration of their participation Constitutes a linguistic and legal ruse perpetrated upon people, Individuals all.
Plato, John Locke, Adam Smith... put down the books & seminal ideas.
Self-Administration of human authority, possessed equally by all living Individuals who choose civil participation as a method of Governance derived "of, by, for" people, begins and ends with the structural accuracy of words, and their functional practices.

Tuesday, 14. June 2022

reb00ted

Impressions from the Consensus conference, Austin 2022

This past weekend I went to the Consensus conference in Austin. I hadn’t been to another, so I can’t easily compare this year with previous years. But here are my impressions, in random order: The show was huge. Supposedly 20,000 in-person attendees. Just walking from one presentation to another at the other end of the conference took a considerable amount of time. And there were other locat

This past weekend I went to the Consensus conference in Austin. I hadn’t been to another, so I can’t easily compare this year with previous years. But here are my impressions, in random order:

The show was huge. Supposedly 20,000 in-person attendees. Just walking from one presentation to another at the other end of the conference took a considerable amount of time. And there were other locations distributed all over downtown Austin.

Lots and lots of trade show booths with lots of traffic.

In spite of “crypto winter”, companies still spent on their trade show booths. (But then, maybe they committed to the expense before the recent price declines.)

Pretty much all sessions were “talking heads on stage”. They were doing a good job at putting many women on. But only “broadcast from the experts to the (dumb) audience”? This feels out of touch in 2022, and particularly because web3/crypto is all supposed to be giving everyday users agency, and a voice. Why not practice what you promote, Consensus? Not even an official hash tag or back channel.

Frances Haugen is impressive.

No theme emerged. I figured that there would be one, or a couple, of “hot topics” that everybody talked about and would be excited about. Instead, I didn’t really see anything that I hadn’t heard about for some years.

Some of the demos at some of the booths, and some of the related conversations, were surprisingly bad. Without naming names, for example: what would you expect if somebody’s booth promises you some kind of “web3 authentication”? What I didn’t expect was that the demo consisted of clicking on a button labeled “Log in with Google”, and that when I voiced surprise, the presenter handwaved about something with split keys, without being able to explain, or show, it at all.

I really hate it if I ask “what does the product do?” and the answer is “60,000 people use it”. This kind of response is of course not specific to crypto, but either the sales guy doesn’t actually know what the product does – which happens surprisingly often – or simply doesn’t care at all that somebody asked a question. Why are you going to trade shows again?

The refrain “it’s early days for crypto” is getting a bit old. Yes, other industries have been around for longer, but one should be able to see a few compelling, deployed solutions for real-world problems that are touching the real world outside of crypto. Many of those that I heard people pitch were often some major distance away from being realistic. For example, if somebody pitches tokenizing real estate, I would expect them to talk about the value proposition for, say, realtors, how they are reaching them and converting them, or how there is a new title insurance company based on blockchain that is growing very rapidly because it can provide better title insurance at much lower cost. Things like that. But no such conversation could be heard – well, at least not by me – and that indicates to me that the people pitching this haven’t really really encountered the market yet.

An anonymous crypto whale/investor – I think – who I chatted with over breakfast pretty much confirmed this: he basically complained that so many pitches he’s getting are on subjects that the entrepreneurs basically know nothing about. So real domain knowledge is missing for too many projects. (Which would explain many things, including why so many promising projects have run out of steam when it is time to actually deliver on the lofty vision.)

The crypto “market” still seems to mostly consist of a bunch of relatively young people who have found a cool new technology, and are all in, but haven’t either felt the need to, nor have been successful at applying it to the real world. I guess billions of dollars of money flowing in crypto coins allowed them to ignore this so far. I wonder whether this attitude can last in this “crypto winter”.

But this is also a great opportunity. While 90% of what has been pitched in web3/crypto is probably crap and/or fraudulent (your number may be lower, or higher), it is not 100% and some things are truly intriguing. My personal favorites are DAOs, which have turned into this incredible laboratory for governance innovations. Given that we still vote – e.g. in the US – in largely the same way as hundreds of years ago, innovation in democratic governance has been glacial. All of a sudden we have groups that apply liquid democracy, and quadratic voting, and weigh votes by contributions, and lots of other ideas. It’s like somebody turned on the water in the desert, and instead of governance being all the same sand as always, there are now flowers of a 1000 different kinds that you have never seen before, blooming all over. (Of course many of those will not survive, as we don’t know how to do governance differently, but the innovation is inspiring.)

In my personal view, the potential of crypto technologies is largely all about governance. The monetary uses should be considered a side effect of new forms of governance, not the other way around. Of course, almost nobody – technologist or not – has many thoughts on novel, better forms of governance, because we have been so trained into believing that “western style democracy” cannot be improved on. Clearly, that is not true, and there are tons of spaces that need better governance than we have – my favorite pet peeve is the rules about the trees on my street – so all innovations in governance are welcome. If we could govern those trees better, perhaps we could also have a street fund to pay for their maintenance – which would be a great example for a local wallet with a “multisig”. Certainly it convinces me much more than some of the examples that I heard about at Consensus.

I think the early days are ending. The crypto winter will have a bunch of projects die, but the foundation has been laid for some new projects that could take over the world overnight, by leading with governance of an undergoverned, high-value space. Now what was yours truly working on again? :-)

Monday, 13. June 2022

Simon Willison

Making Code Faster

Making Code Faster Tim Bray's detailed guide to using the Go profiler.

Making Code Faster

Tim Bray's detailed guide to using the Go profiler.


Doc Searls Weblog

Why the Celtics will win the NBA finals

Marcus Smart. Photo by Eric Drost, via Wikimedia Commons. Back in 2016, I correctly predicted that the Cleveland Cavaliers would win the NBA finals, beating the heavily favored Golden State Warriors, which had won a record 73 games in the regular season. In 2021, I incorrectly predicted that the Kansas City Chiefs would beat the Tampa […]

Marcus Smart. Photo by Eric Drost, via Wikimedia Commons.

Back in 2016, I correctly predicted that the Cleveland Cavaliers would win the NBA finals, beating the heavily favored Golden State Warriors, which had won a record 73 games in the regular season. In 2021, I incorrectly predicted that the Kansas City Chiefs would beat the Tampa Bay Buccaneers. I based both predictions on a theory: the best story would win. And maybe Tom Brady proved that anyway: a relative geezer who was by all measures the GOAT, proved that label.

So now I’m predicting that the Boston Celtics will win the championship because they have the better story.

Unless Steph Curry proves that he’s the GSOAT: Greatest Shooter Of All Time. Which he might. He sure looked like it in Game Four. That’s a great story too.

But I like the Celtics’ story better. Here we have a team of relative kids who were average at best by the middle of the season, but then, under their rookie coach, became a defensive juggernaut, racking up the best record through the remainder of the season, then blowing through three playoff rounds to get to the Finals. In Round One, they swept Kevin Durant, Kyrie Irving and the Brooklyn Nets, who were pre-season favorites to win the Eastern Conference. In Round Two, they beat Giannis Antetokounmpo and the Milwaukee Bucks, who were defending champs, in six games. In Round Three, they won the conference championship by beating the Miami Heat, another great defensive team, and the one with the best record in the conference, in seven games. Now the Celtics are tied, 2-2, with the Western Conference champs, the Golden State Warriors, with Steph Curry playing his best, looking all but unbeatable, on a team playing defense that’s pretty much the equal of Boston’s.

Three games left, two at Golden State.

But I like the Celtics in this. They seem to have no problem winning on the road, and I think they want it more. And maybe even better.

May the best story win.

[Later…] Well, c’est le jeu. The Celtics lost the next two games, and the Warriors took the series.

After it was over, lots of great stories were told about the Warriors: the team peaked at the right time, they were brilliantly coached (especially on how to solve the Celtics), Steph moved up in all-time player rankings (maybe even into the top ten), Wiggins finally looked like the #1 draft choice he was years ago, the Dynasty is back. Long list, and it goes on. But the Celtics still had some fine stories of their own, especially around how they transformed from a mediocre team at mid-season to a proven title contender that came just two games away from winning it all. Not bad.


Ludo Sketches

ForgeRock Identity Live, Austin TX

A few weeks ago, ForgeRock organised the first Identity Live event of the season, in Austin TX.

With more than 300 registered guests, an impeccable organisation by our Marketing team, the event was a great success.

The first day was sales oriented, with company presentations, roadmaps and product demonstrations, but also testimony from existing customers. The second day focused on the technical side of the ForgeRock solutions, in an unconference format, where Product Managers, Technical Consultants and Engineers shared their experience and knowledge with the audience.

It was great to meet so many colleagues, partners and customers again, and to have lively conversations about the products, the projects and the overall direction of identity technology.

You can find more photos of the event in the dedicated album.


Damien Bod

Force MFA in Blazor using Azure AD and Continuous Access

This article shows how to force MFA from your application using Azure AD and a continuous access auth context. When producing software which can be deployed to multiple tenants, instead of hoping IT admins configure this correctly in their tenants, you can now force this from the application. Many tenants do not force MFA.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph).
Add a CA policy which uses the authentication context.
Implement an authentication challenge using the claims challenge in the Blazor WASM.

Creating a conditional access authentication context

A continuous access (CA) authentication context was created using Microsoft Graph and a policy was created to use this. See the first blog in this series for details on setting this up.

Force MFA in the Blazor application

Now that the continuous access (CA) authentication context is set up and a policy requiring MFA is created, the application can check that the required acrs claim with the correct value is present in the id_token. We do this in two places: in the login of the account controller and in the OpenID Connect event sending the authorize request. The account controller Login method can be used to set the claims parameter with the required acrs value. By requesting this, the Azure AD policy auth context is forced.

[HttpGet("Login")]
public ActionResult Login(string? returnUrl, string? claimsChallenge)
{
    // var claims =
    //   "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";
    // var claims =
    //   "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";

    var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/";

    var properties = new AuthenticationProperties { RedirectUri = redirectUri };

    if(claimsChallenge != null)
    {
        string jsonString = claimsChallenge.Replace("\\", "")
            .Trim(new char[1] { '"' });
        properties.Items["claims"] = jsonString;
    }
    else
    {
        // lets force MFA using CAE for all sign in requests.
        properties.Items["claims"] =
            "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";
    }

    return Challenge(properties);
}

In the application an ASP.NET Core authorization policy can be implemented to force the MFA. All requests require a claim type acrs with the value c1, which we created in the Azure tenant using Microsoft Graph.

services.AddMicrosoftIdentityWebAppAuthentication(configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", scopes)
    .AddInMemoryTokenCaches();

services.AddAuthorization(options =>
{
    options.AddPolicy("ca-mfa", policy =>
    {
        policy.RequireClaim("acrs", AuthContextId.C1);
    });
});

By using the account controller login method, only the login request forces the auth context. If the context needs to be forced everywhere, the OnRedirectToIdentityProvider event can be used to add the extra request parameter to every OIDC authorize request which has not already set the claims parameter. You could also use this on its own, without the login implementation in the account controller.

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("acrs", AuthContextId.C1)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

services.Configure<MicrosoftIdentityOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnRedirectToIdentityProvider = context =>
    {
        if(!context.ProtocolMessage.Parameters.ContainsKey("claims"))
        {
            context.ProtocolMessage.SetParameter(
                "claims",
                "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}");
        }
        return Task.FromResult(0);
    };
});

Now all requests require the auth context which is used to require the CA MFA policy.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345


Here's Tom with the Weather

Good time to wear N95s

A few weeks ago, I ordered a box of N95 masks as I had been following the rising positivity rate. Both Vaccines may not prevent many symptoms of long covid, study suggests and 1 Of 5 With Covid May Develop Long Covid, CDC Finds were also persuasive.

Sunday, 12. June 2022

Simon Willison

Twenty years of my blog

I started this blog on June 12th 2002 - twenty years ago today! To celebrate two decades of blogging, I decided to pull together some highlights and dive down a self-indulgent nostalgia hole.

Some highlights

Some of my more influential posts, in chronological order.

A new XML-RPC library for PHP - 2nd September 2002

I was really excited about XML-RPC, one of the earliest technologies for building Web APIs. IXR, the Incutio library for XML-RPC, was one of my earliest ever open source library releases. Here's a capture of the old site.

I've not touched anything relating to this project in over 15 years now, but it has lived on in both WordPress and Drupal (now only in Drupal 6 LTS).

It's also been responsible for at least one CVE vulnerability in those platforms!

getElementsBySelector() - 25th March 2003

Andrew Hayward had posted a delightful snippet of JavaScript called document.getElementsByClassName() - like document.getElementsByTagName() but for classes instead.

Inspired by this, I built document.getElementsBySelector() - a function that could take a CSS selector and return all of the matching elements.

This ended up being very influential indeed! Paul Irish offers a timeline of JavaScript CSS selector engines which tracks some of what happens next. Most notably, getElementsBySelector() was part of John Resig's inspiration in creating the first version of jQuery. To this day, the jQuery source includes this testing fixture which is derived from my original demo page.

I guess you could call document.getElementsBySelector() the original polyfill for document.querySelectorAll().

I'm in Kansas - 27th August 2003

In May 2003 Adrian Holovaty posted about a job opportunity for a web developer at the Lawrence Journal-World newspaper in Lawrence, Kansas.

This coincided with my UK university offering a "year in industry" placement, which meant I could work for a year anywhere in the world with a student visa program. I'd been reading Adrian's blog for a while and really liked the way he thought about building for the web - we were big fans of Web Standards and CSS and cleanly-designed URLs, all of which were very hot new things at the time!

So I talked to Adrian about whether this could work as a year-long opportunity, and we figured out how to make it work.

At the Lawrence Journal-World, Adrian and I decided to start using Python instead of PHP, in order to build a CMS for that local newspaper...

Introducing Django - 17th July 2005

... and this was the eventual outcome! Adrian and I didn't even know we were building a web framework at first - we called it "the CMS". But we kept having to solve new foundational problems: how should database routing work? What about templating? What's the best way to represent the incoming HTTP request?

I had left the Lawrence Journal-World in 2004, but by 2005 the team there had grown what's now known as Django far beyond where it was when I had left, and they got the go-ahead from the company to release it as open source (partly thanks to the example set by Ruby on Rails, which first released in August 2004).

In 2010 I wrote up a more detailed history of Django in a Quora answer, now mirrored to my blog.

Finally powered by Django - 15th December 2006

In which I replaced my duct-tape-and-mud PHP blogging engine with a new Django app. I sadly don't have the version history for this anymore (this was pre-git, I think I probably had it in Subversion somewhere) but today's implementation is still based on the same code, upgraded to Django 1.8 in 2015.

That 2006 version did include a very pleasing Flickr integration to import my photos (example on the Internet Archive):

How to turn your blog in to an OpenID - 19th December 2006

In late 2006 I got very, very excited about OpenID. I was convinced that Microsoft Passport was going to take over SSO on the internet, and that the only way to stop that was to promote an open, decentralized solution. I wrote posts about it, made screencasts (that one got 840 diggs! Annoyingly I was serving it from the Internet Archive who appear to have deleted it) and gave a whole bunch of conference talks about it too.

I spent the next few years advocating for OpenID - in particular the URL-based OpenID mechanism where any website can be turned into an identifier. It didn't end up taking off, and with hindsight I think that's likely for the best: expecting people to take control of their own security by choosing their preferred authentication provider sounded great to me in 2006, but I can understand why companies chose to instead integrate with a smaller, tightly controlled set of SSO partners over time.

A few notes on the Guardian Open Platform - 10th March 2009

In 2009 I was working at the Guardian newspaper in London in my first proper data journalism role - my work at the Lawrence Journal-World had hinted towards that a little, but I spent the vast majority of my time there building out a CMS.

In March we launched two major initiatives: the Datablog (also known as the Data Store) and the Guardian's Open Platform (an API that is still offered to this day).

The goal of the Datablog was to share the data behind the stories. Simon Rogers, the Guardian's data editor, had been collecting meticulous datasets about the world to help power infographics in the paper for years. The new plan was to share that raw data with the world.

We started out using Google Sheets for this. I desperately wanted to come up with something less proprietary than that - I spent quite some time experimenting with CouchDB - but Google Sheets was more than enough to get the project started.

Many years later my continued mulling of this problem formed part of the inspiration for my creation of Datasette, a story I told in my 2018 PyBay talk How to Instantly Publish Data to the Internet with Datasette.

Why I like Redis - 22nd October 2009

I got interested in NoSQL for a few years starting around 2009. I still think Redis was the most interesting new piece of technology to come out of that whole movement - an in-memory data structure server exposed over the network turns out to be a fantastic complement for other data stores, and even though I now default to PostgreSQL or SQLite for almost everything else I can still find problems for which Redis is a great solution.

In April 2010 I gave a three hour Redis tutorial at NoSQL Europe which I wrote up in Comprehensive notes from my three hour Redis tutorial.

Node.js is genuinely exciting - 23rd November 2009

In December 2009 I found out about Node.js. As a Python web developer I had been following the evolution of Twisted with great interest, but I'd also run into the classic challenge that once you start using event-driven programming almost every library you might want to use likely doesn't work for you any more.

Node.js had server-side event-driven programming baked into its very core. You couldn't accidentally make a blocking call and break your event loop because it didn't ever give you the option to do so!

I liked it so much I switched out my talk for Full Frontal 2009 at the last minute for one about Node.js instead.

I think this was an influential decision. I won't say who they are (for fear of mis-representing or mis-quoting them), but I've talked to entrepreneurs who built significant products on top of server-side JavaScript who told me that they heard about Node.js from me first.

Crowdsourced document analysis and MP expenses - 20th December 2009

In 2009 I was working at the Guardian newspaper in London in my first proper data journalism role - my work at the Lawrence Journal-World had hinted towards that a little, but I spent the vast majority of my time there building out a CMS.

The UK government had finally got around to releasing our Member of Parliament expense reports, and there was a giant scandal brewing about the expenses that had been claimed. We recruited our audience to help dig through 10,000s of pages of PDFs to help us find more stories.

The first round of the MP's expenses crowdsourcing project launched in June, but I was too busy working on it to properly write about it! Charles Arthur wrote about it for the Guardian in The breakneck race to build an application to crowdsource MPs' expenses.

In December we launched round two, and I took the time to write about it properly.

Here's a Google Scholar search for guardian mps expenses - I think it was pretty influential. It's definitely one of the projects I'm most proud of in my career so far.

WildlifeNearYou: It began on a fort... - 12th January 2010

In October 2008 I participated in the first /dev/fort - a bunch of nerds rent a fortress (or similar historic building) for a week and hack on a project together.

Following that week of work it took 14 months to add the "final touches" before putting the site we had built live (partly because I insisted on implementing OpenID for it) but in January 2010 we finally went live with WildlifeNearYou.com (sadly no longer available). It was a fabulous website, which crowdsourced places that people had seen animals in order to answer the crucial question "where is my nearest Llama?".

Here's what it looked like:

Although it shipped after the Guardian MP's expenses project most of the work on WildlifeNearYou had come before that - building WildlifeNearYou (in Django) was the reason I was confident that the MP's expenses project was feasible.

Getting married and going travelling - 21st June 2010

On June 5th 2010 I married Natalie Downe, and we both quit our jobs to set off travelling around the world and see how far we could get.

We got as far as Casablanca, Morocco before we accidentally launched a startup together: Lanyrd, launched in August 2010. "Sign in with Twitter to see conferences that your friends are speaking at, attending or tracking, then add your own events."

We ended up spending the next three years on this: we went through Y Combinator, raised a sizable seed round, moved to London, hired a team and shipped a LOT of features. We even managed to ship some features that made the company money!

This also coincided with me putting the blog on the back-burner for a few years.

Here's an early snapshot:

In 2013 we sold Lanyrd to Eventbrite, and moved our entire team (and their families) from London to San Francisco. It had been a very wild ride.

Sadly the site itself is no longer available: as Eventbrite grew it became impossible to justify the work needed to keep Lanyrd maintained, safe and secure. Especially as it started to attract overwhelming volumes of spam.

Natalie told the full story of Lanyrd on her blog in September 2013: Lanyrd: from idea to exit - the story of our startup.

Scraping hurricane Irma - 10th September 2017

In 2017 hurricane Irma devastated large areas of the Caribbean and the southern USA.

I got involved with the Irma Response project, helping crowdsource and publish critical information for people affected by the storm.

I came up with a trick to help with scraping: I ran scrapers against important information sources and recorded the results to a git repository, in order to cheaply track changes to those sources over time.

I later coined the term "Git scraping" for this technique, see my series of posts about Git scraping over time.
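
The technique is simple enough to sketch. This is a minimal, hypothetical example (the URL and file name are placeholders, and it assumes it runs inside an existing Git repository), not the actual Irma Response scrapers: fetch a source, write it to a tracked file, and commit, so the repository's history records every change.

import subprocess
import urllib.request

# Hypothetical source URL - replace with the page or API you want to track.
URL = "https://example.com/status.json"

def scrape_once():
    data = urllib.request.urlopen(URL).read()
    with open("status.json", "wb") as f:
        f.write(data)
    # The commit only records something when the file actually changed;
    # check=False lets a "nothing to commit" run exit quietly.
    subprocess.run(["git", "add", "status.json"], check=True)
    subprocess.run(["git", "commit", "-m", "Latest data"], check=False)

if __name__ == "__main__":
    scrape_once()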

Getting the blog back together - 1st October 2017

Running a startup, and then working at Eventbrite afterwards, had resulted in an almost 7 year gap in blogging for me. In October 2017 I decided to finally get my blog going again. I also back-filled content for the intervening years by scraping my content from Quora and from Ask Metafilter.

If you've been meaning to start a new blog or revive an old one this is a trick that I can thoroughly recommend: just because you initially wrote something elsewhere doesn't mean you shouldn't repost it on a site you own.

Recovering missing content from the Internet Archive - 8th October 2017

The other step in recovering my old blog's content was picking up some content that was missing from my old database backup. Here's how I pulled in that content by scraping the Internet Archive.

Implementing faceted search with Django and PostgreSQL - 5th October 2017

I absolutely love building faceted search engines. I realized a while ago that most of my career has been spent applying the exact same trick - faceted search - to different problem spaces. WildlifeNearYou offered faceted search over animal sightings. MP's expenses had faceted search across crowdsourced expense analysis. Lanyrd was faceted search for conferences.

I implemented faceted search for this blog on top of PostgreSQL, and wrote about how I did it.
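
The underlying trick can be sketched with the Django ORM. This is an illustrative example with a made-up model and facet fields, not the code from the post: for each field you want to facet on, group the current result set by that field's values and count the rows in each group.

from django.db.models import Count

# Hypothetical facet fields on a hypothetical Entry model: for each field,
# group the current queryset by that field and count the rows in each group.
def facet_counts(queryset, fields=("tag", "year")):
    return {
        field: list(
            queryset.values(field).annotate(n=Count("id")).order_by("-n")
        )
        for field in fields
    }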

Datasette: instantly create and publish an API for your SQLite databases - 13th November 2017

I shipped the first release of simonw/datasette in November 2017. Nearly five years later it's now my number-one focus, and I don't see myself losing interest in it for many decades to come.

Datasette was inspired by the Guardian Datablog, combined with my realization that Zeit Now (today called Vercel) meant you could bundle data up in a SQLite database and deploy it as part of an exploratory application almost for free.

My blog has 284 items tagged datasette at this point.

Datasette Facets - 20th May 2018

Given how much I love faceted search, it's surprising it took me until May 2018 to realize that I could bake them into Datasette itself - turning it into a tool for building faceted search engines against any data. It turns out to be my ideal solution to my favourite problem!

Documentation unit tests - 28th July 2018

I figured out a pattern for using unit tests to ensure that features of my projects were covered by the documentation. Four years later I can confirm that this technique works really well - though I wish I'd called it Test-driven documentation instead!
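
The general shape of the pattern, as a hedged sketch rather than the actual implementation: introspect the features that exist in the code (here a made-up list of names) and fail the test suite if the documentation never mentions one of them.

from pathlib import Path
import pytest

# Hypothetical feature list - in a real project this would be introspected
# from the code itself (e.g. registered plugin hooks or CLI sub-commands).
FEATURES = ["render_cell", "extra_css_urls", "startup"]

# Concatenate all of the documentation source files into one searchable string.
docs_text = " ".join(p.read_text() for p in Path("docs").glob("*.rst"))

@pytest.mark.parametrize("feature", FEATURES)
def test_feature_is_documented(feature):
    assert feature in docs_text, f"{feature} is not mentioned in the documentation"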

Letterboxing on Lundy - 18th September 2018

A brief foray into travel writing: Natalie and I spent a few days staying in a small castle on the delightful island of Lundy off the coast of North Devon, and I used it as an opportunity to enthuse about letterboxing and the Landmark Trust.

sqlite-utils: a Python library and CLI tool for building SQLite databases - 25th February 2019

Datasette helps you explore and publish data stored in SQLite, but how do you get data into SQLite in the first place?

sqlite-utils is my answer to that question - a combined CLI tool and Python library with all sorts of utilities for working with and creating SQLite databases.

It recently had its 100th release!
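
For a flavour of the Python side, here is a tiny example (the database name, table and rows are made up): inserting a list of dictionaries creates the table and its columns automatically.

import sqlite_utils

db = sqlite_utils.Database("demo.db")
# Inserting dictionaries creates the "episodes" table and its columns on the fly.
db["episodes"].insert_all(
    [
        {"id": 1, "title": "Pilot", "minutes": 42},
        {"id": 2, "title": "Second episode", "minutes": 45},
    ],
    pk="id",
)
print(list(db["episodes"].rows))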

I commissioned an oil painting of Barbra Streisand’s cloned dogs - 7th March 2019

Not much I can add that's not covered by the title. It's a really good painting!

My JSK Fellowship: Building an open source ecosystem of tools for data journalism - 10th September 2019

In late 2019 I left Eventbrite to join the JSK fellowship program at Stanford. It was an opportunity to devote myself full-time to working on my growing collection of open source tools for data journalism, centered around Datasette.

I jumped on that opportunity with both hands, and I've been mostly working full-time on Datasette and associated projects (without being paid for it since the fellowship ended) ever since.

Weeknotes: ONA19, twitter-to-sqlite, datasette-rure - 13th September 2019

At the start of my fellowship I decided to publish weeknotes, to keep myself accountable for what I was working on now that I didn't have the structure of a full-time job.

I've managed to post them roughly once a week ever since - 128 posts and counting.

I absolutely love weeknotes as a format. Even if no-one else ever reads them, I find them really useful as a way to keep track of my progress and ensure that I have motivation to get projects to a point where I can write about them at the end of the week!

Using a self-rewriting README powered by GitHub Actions to track TILs - 20th April 2020

In April 2020 I started publishing TILs - Today I Learneds - at til.simonwillison.net.

The idea behind TILs is to dramatically reduce the friction involved in writing a blog post. If I learned something that was useful to me, I'll write it up as a TIL. These often take less than ten minutes to throw together and I find myself referring back to them all the time.

My main blog is a Django application, but my TILs run entirely using Datasette. You can see how that all works in the simonw/til GitHub repository.
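
The self-rewriting README idea is easy to sketch: a script, run on a schedule by GitHub Actions, regenerates an index and splices it into README.md between two marker comments. This is a simplified illustration with hypothetical markers, not the actual simonw/til build script.

import pathlib
import re

# Hypothetical markers - the script owns everything between them.
START, END = "<!-- index starts -->", "<!-- index ends -->"

def rewrite_readme(index_lines):
    readme = pathlib.Path("README.md")
    content = readme.read_text()
    block = START + "\n" + "\n".join(index_lines) + "\n" + END
    # Replace whatever currently sits between the markers with the fresh index.
    content = re.sub(
        re.escape(START) + ".*?" + re.escape(END), block, content, flags=re.DOTALL
    )
    readme.write_text(content)

rewrite_readme(["* [Example TIL](example/til.md)"])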

Using SQL to find my best photo of a pelican according to Apple Photos - 21st May 2020

Dogsheep is my ongoing side project in which I explore ways to analyze my own personal data using SQLite and Datasette.

dogsheep-photos is my tool for extracting metadata about my photos from the undocumented Apple Photos SQLite database (building on osxphotos by Rhet Turnbull). I had been wanting to solve the photo problem for years and was delighted when osxphotos provided the capability I had been missing. And I really like pelicans, so I celebrated by using my photos of them for the demo.

Git scraping: track changes over time by scraping to a Git repository - 9th October 2020

If you really want people to engage with a technique, it's helpful to give it a name. I defined Git scraping in this post, and I've been promoting it heavily ever since.

There are now 275 public repositories on GitHub with the git-scraping topic, and if you sort them by recently updated you can see the scrapers on there that most recently captured some new data.

Personal Data Warehouses: Reclaiming Your Data - 14th November 2020

I gave this talk for GitHub's OCTO (previously Office of the CTO, since rebranded to GitHub Next) speaker series.

It's the Dogsheep talk, with a better title (thanks, Idan!) It includes a full video demo of my personal Dogsheep instance, including my dog's Foursquare checkins, my Twitter data, Apple Watch GPS trails and more.

I also explain why I called it Dogsheep: it's a devastatingly terrible pun on Wolfram.

I'm frustrated when information like this is only available in video format, so when I give particularly information-dense talks I like to turn them into full write-ups as well, providing extra notes and resources alongside screen captures from the talk.

For this one I added a custom template mechanism to my blog, to allow me to break out of my usual entry page design.

Trying to end the pandemic a little earlier with VaccinateCA - 28th February 2021

In February 2021 I joined the VaccinateCA effort to try and help end the pandemic a little bit earlier by crowdsourcing information about the best places to get vaccinated. It was a classic match-up for my skills and interests: a huge crowdsourcing effort that needed to be spun up as a fresh Django application as quickly as possible.

Django SQL Dashboard was one project that spun directly out of that effort.

The Baked Data architectural pattern - 28th July 2021

My second attempt at coining a new term, after Git scraping: Baked Data is the name I'm using for the architectural pattern embodied by Datasette where you bundle a read-only copy of your data alongside the code for your application, as part of the same deployment. I think it's a really good idea, and more people should be doing it.
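
A minimal sketch of the idea, assuming a hypothetical baked-data.db file that a build step produced and shipped in the same deployment as the code (this is not Datasette's implementation): the application opens its data read-only at startup, so serving it is just reading a local file.

import sqlite3

# The database file ships alongside the application code in the same deployment;
# opening it in read-only mode makes the "baked" nature explicit.
conn = sqlite3.connect("file:baked-data.db?mode=ro", uri=True)

def top_rows(table, limit=5):
    # Illustrative query against the bundled, read-only data.
    return conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()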

How I build a feature - 12th January 2022

Over the years I’ve evolved a process for feature development that works really well for me, and scales down to small personal projects as well as scaling up to much larger pieces of work. I described that in detail in this post.

Picking out these highlights wasn't easy. I ended up setting myself a time limit (to ensure I could put this post live within a minute of midnight UTC time on my blog's 20th birthday) so there's plenty more that I would have liked to dig up.

My tags index page includes a 2010s-style word cloud that you can visit if you want to explore the rest of my content. Or use the faceted search!

A few more project release highlights:

GraphQL in Datasette with the new datasette-graphql plugin - 7th August 2020
git-history: a tool for analyzing scraped data collected using Git and SQLite - 7th December 2021
shot-scraper: automated screenshots for documentation, built on Playwright - 10th March 2022
Django SQL Dashboard - 10th May 2021
Datasette Desktop—a macOS desktop application for Datasette - 8th September 2021
Datasette Lite: a server-side Python web application running in a browser - 4th May 2022

Evolution over time

I started my blog in my first year as a student studying computer science at the University of Bath.

You can tell that Twitter wasn't a thing yet, because I wrote 107 posts in that first month. Lots of links to other people's blog posts (we did a lot of that back then) with extra commentary. Lots of blogging about blogging.

That first version of the site was hosted at http://www.bath.ac.uk/~cs1spw/blog/ - on my university's student hosting. Sadly the Internet Archive doesn't have a capture of it there, since I moved it to http://simon.incutio.com/ (my part-time employer at the time) in September 2002. Here's my note from then about rewriting it to use MySQL instead of flat file storage.

This is the earliest capture I could find on the Internet Archive, from June 2003:

Full entry on Using bookmarklets to experiment with CSS.

By November 2006 I had redesigned from orange to green, and started writing Blogmarks - the name I used for small, bookmark-style link posts. I've collected 6,304 of them over the years!

By 2010 I'd reached more-or-less my current purple on white design, albeit with the ability to sign in with OpenID to post a comment. I dropped comments entirely when I relaunched in 2017 - constantly fighting against spam comments makes blogging much less fun.

The source code for the current iteration of my blog is available on GitHub.

Taking screenshots of the Internet Archive with shot-scraper

Here's how I generated the screenshots in this post, using shot-scraper against the Internet Archive, with a line of JavaScript to hide the banner they display at the top of every archived page:

shot-scraper 'https://web.archive.org/web/20030610004652/http://simon.incutio.com/' \
  --javascript 'document.querySelector("#wm-ipp-base").style.display="none"' \
  --width 800 --height 600 --retina

mgdlbp on Hacker News pointed out that you can instead add if_ to the date part of the archive URLs to hide the banner, like this:

shot-scraper 'https://web.archive.org/web/20030610004652if_/http://simon.incutio.com/' \
  --width 800 --height 600 --retina

@_Nat Zone

Identity in Conflict: In the Wake of the Invasion of Ukraine

On the coming June 21, I will be running a workshop titled "Identity in Conflict" at Identiverse 2022, held near Denver, Colorado, USA.

Identity in Conflict
Tuesday, June 21, 11:30 am – 12:20 pm MDT (2:30 am – 3:20 am Japan time)
In times of instability and uncertainty, the reliability and trustworthiness of our identity systems become especially important. This workshop examines two areas in particular—identity management for displaced people, and the protection of government identity systems—and seeks to establish some ground rules to ensure that critical identity systems are robust and fit for purpose.

This session came about when I proposed it to the organizers in response to the invasion of Ukraine that began on February 24. I am nothing but grateful to the organizers for squeezing it into an already full program.

The challenges of identity in conflict fall broadly into the following two areas.

1. Identity management for displaced people
- How to deliver aid and other services (such as banking) to them
- How to protect displaced people and those around them from targeted disinformation
2. Identity management for government-related systems
- How to withstand enemy attacks and protect government and aid-organization systems
- Business continuity and recovery strategies

Each of these is a topic one could talk about endlessly, but unfortunately we only have 50 minutes, and since someone responsible for defending government systems has made a last-minute trip to the US to join the session, I expect we will mainly discuss item 2.

Since the invasion began, phishing and other attacks against the Ukrainian government have increased by 3,000%. To help counter this, assistance has arrived from many quarters, including Yubico shipping 20,000 YubiKeys. At the same time, the cryptographic algorithms the Ukrainian government currently uses were developed under GOST (the Russian counterpart of NIST), and putting that together with information from a certain source that almost all government systems have already been hacked gives plenty of food for thought.

If you are coming to Identiverse, please do join us.


Simon Willison

A tiny web app to create images from OpenStreetMap maps

Earlier today I found myself wanting to programmatically generate some images of maps.

I wanted to create a map centered around a location, at a specific zoom level, and with a marker in a specific place.

Some cursory searches failed to turn up exactly what I wanted, so I decided to build a tiny project to solve the problem, taking advantage of my shot-scraper tool for automating screenshots of web pages.

The result is map.simonwillison.net - hosted on GitHub Pages from my simonw/url-map repository.

Here's how to generate a map image of Washington DC:

shot-scraper 'https://map.simonwillison.net/?q=washington+dc' \
  --retina --width 600 --height 400 --wait 3000

That command generates a 1200x800 PNG image that's a retina screenshot of the map displayed at https://map.simonwillison.net/?q=washington+dc - after waiting three seconds to ensure all of the tiles have fully loaded.

The website itself is documented here. It displays a map with no visible controls, though you can use gestures to zoom in and pan around - and the URL bar will update to reflect your navigation, so you can bookmark or share the URL once you've got it to the right spot.

You can also use query string parameters to specify the map that should be initially displayed:

https://map.simonwillison.net/?center=51.49,0&zoom=8 displays a map at zoom level 8 centered on the specified latitude, longitude coordinate pair.
https://map.simonwillison.net/?q=islington+london geocodes the ?q= text using OpenStreetMap Nominatim and zooms to the level that best fits the bounding box of the first returned result.
https://map.simonwillison.net/?q=islington+london&zoom=12 does that but zooms to level 12 instead of using the best fit for the bounding box.
https://map.simonwillison.net/?center=51.49,0&zoom=8&marker=51.49,0&marker=51.3,0.2 adds two blue markers to the specified map. You can pass &marker=lat,lon as many times as you like to add multiple markers.

Annotated source code

The entire mapping application is contained in a single 68 line index.html file that mixes HTML and JavaScript. It's built using the fantastic Leaflet open source mapping library.

Since the code is so short, I'll include the entire thing here with some additional annotating comments.

It started out as a copy of the first example in the Leaflet quick start guide.

<!DOCTYPE html>
<!-- Regular HTML boilerplate -->
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>map.simonwillison.net</title>
<!-- Leaflet's CSS and JS are loaded from the unpkg.com CDN, with the
     Subresource Integrity (SRI) integrity="sha512..." attribute to ensure
     that the exact expected code is served by the CDN. -->
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.8.0/dist/leaflet.css"
  integrity="sha512-hoalWLoI8r4UszCkZ5kL8vayOGVae1oxXe/2A4AO6J9+580uKHDO3JdHb7NzwwzK5xr/Fs0W40kiNHxM9vyTtQ=="
  crossorigin=""/>
<script src="https://unpkg.com/leaflet@1.8.0/dist/leaflet.js"
  integrity="sha512-BB3hKbKWOc9Ez/TAwyWxNXeoV9c1v6FIeYiBieIWkpLjauysF18NzgR1MBNBXf8/KABdlkX68nAhlwcDFLGPCQ=="
  crossorigin=""></script>
<!-- I want the map to occupy the entire browser window with no margins -->
<style>
html, body {
  height: 100%;
  margin: 0;
}
</style>
</head>
<body>
<!-- The Leaflet map renders in this 100% high/wide div -->
<div id="map" style="width: 100%; height: 100%;"></div>
<script>
function toPoint(s) {
  // Convert "51.5,2.1" into [51.5, 2.1]
  return s.split(",").map(parseFloat);
}
// An async function so we can 'await fetch(...)' later on
async function load() {
  // URLSearchParams is a fantastic browser API - it makes it easy to both read
  // query string parameters from the URL and later to generate new ones
  let params = new URLSearchParams(location.search);
  // If the starting URL is /?center=51,32&zoom=3 this will pull those values out
  let center = params.get('center') || '0,0';
  let initialZoom = params.get('zoom');
  let zoom = parseInt(initialZoom || '2', 10);
  let q = params.get('q');
  // .getAll() turns &marker=51.49,0&marker=51.3,0.2 into ['51.49,0', '51.3,0.2']
  let markers = params.getAll('marker');
  // zoomControl: false turns off the visible +/- zoom buttons in Leaflet
  let map = L.map('map', {
    zoomControl: false
  }).setView(toPoint(center), zoom);
  L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
    attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a>',
    // This option means retina-capable devices will get double-resolution tiles:
    detectRetina: true
  }).addTo(map);
  // We only pay attention to ?q= if ?center= was not provided:
  if (q && !params.get('center')) {
    // We use fetch to pass ?q= to the Nominatim API and get back JSON
    let response = await fetch(
      `https://nominatim.openstreetmap.org/search.php?q=${encodeURIComponent(q)}&format=jsonv2`
    )
    let data = await response.json();
    // data[0] is the first result - it has a boundingbox array of four floats
    // which we can convert into a Leaflet-compatible bounding box like this:
    let bounds = [
      [data[0].boundingbox[0],data[0].boundingbox[2]],
      [data[0].boundingbox[1],data[0].boundingbox[3]]
    ];
    // This sets both the map center and zooms to the correct level for the bbox:
    map.fitBounds(bounds);
    // User-provided zoom over-rides this
    if (initialZoom) {
      map.setZoom(parseInt(initialZoom));
    }
  }
  // This is the code that updates the URL as the user pans or zooms around.
  // You can subscribe to both the moveend and zoomend Leaflet events in one go:
  map.on('moveend zoomend', () => {
    // Update URL bar with current location
    let newZoom = map.getZoom();
    let center = map.getCenter();
    // This time we use URLSearchParams to construct a center...=&zoom=... URL
    let u = new URLSearchParams();
    // Copy across ?marker=x&marker=y from existing URL, if they were set:
    markers.forEach(s => u.append('marker', s));
    u.append('center', `${center.lat},${center.lng}`);
    u.append('zoom', newZoom);
    // replaceState() is a weird API - the third argument is the one we care about:
    history.replaceState(null, null, '?' + u.toString());
  });
  // This bit adds Leaflet markers to the map for ?marker= query string arguments:
  markers.forEach(s => {
    L.marker(toPoint(s)).addTo(map);
  });
}
load();
</script>
</body>
</html>
<!-- See https://github.com/simonw/url-map for documentation -->

Saturday, 11. June 2022

Here's Tom with the Weather

Memory Fail

A long time ago, I read The Language Instinct. Inside the back page of my book are notes with page numbers. This is a practice I learned from a book by James Michener. At some point, I started sharing in conversations something I learned. Unfortunately, I had not made a note for this that I could check and the information was more complex than I remembered. Since I had shared this more than once, I thought I should really find the reference and it was not easy but I found it on page 293. The first part I had right.

In sum, acquisition of a normal language is guaranteed for children up to the age of six, is steadily compromised from then until shortly after puberty, and is rare thereafter.

Here is the part I screwed up.

We do know that the language-learning circuitry of the brain is more plastic in childhood; children learn or recover language when the left hemisphere of the brain is damaged or even surgically removed (though not quite at normal levels), but comparable damage in an adult usually leads to permanent aphasia.

While this itself is fascinating to me, I had been embellishing the story to say language is acquired in the brain’s right hemisphere in children and the left for adults. Now that I’m rereading it after so many years, it is clear that the book says this can happen but is not necessarily so.

Thursday, 09. June 2022

MyDigitalFootprint

Predator-prey models to model users

Predator-prey models are helpful and are often used in environmental science because they allow researchers to both observe the dynamics of animal populations and make predictions as to how they will develop/ change over time.

I have been quiet as we have been unpacking an idea that with a specific data set, we can model user behaviour based on a dynamic competitive market. This Predator-prey method, when applied to understand why users are behaving in a certain way, opens up a lot of questions we don’t have answers to.  

As a #CDO, we have to remain curious, and this is curious. 

Using the example of the rabbit and the fox: we know that there is a lag between growth in a rabbit population and the increase in a fox population. The lag varies on each cycle, as does the peak and minimum of each animal. We know that there is a lag between minimal rabbits and minimal foxes, as foxes can find other food sources and rabbits die of other causes.

Some key observations.  

The cycles, whilst they look similar, are very different because of externalities - and even over many time cycles where we end up with the same starting conditions, we get different outcomes. Starting at any point and using the data from a different cycle creates different results; it is not a perfect science even with the application of, say, Euler's method or Bayesian network models. Indeed we appear to have divergence and not convergence - between what we expect and what we see, even though actual reality shows that over a long time the numbers remain within certain boundaries.

Each case begins with a set of initial conditions at a certain point in the cycle that will produce different outcomes for the function of the population of rabbits and foxes over a long period (100 years) - or user behaviours. 
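
To make the sensitivity point concrete, here is a hedged sketch of the classic Lotka-Volterra predator-prey equations stepped forward with Euler's method; the parameters and starting populations are illustrative only. Two nearly identical initial conditions drift into noticeably different trajectories, which is exactly the modelling problem described above.

# Lotka-Volterra predator-prey model stepped with Euler's method.
# dR/dt = a*R - b*R*F   (rabbits)
# dF/dt = c*R*F - d*F   (foxes)
# Parameter values and initial populations are illustrative only.
def simulate(rabbits, foxes, a=1.1, b=0.4, c=0.1, d=0.4, dt=0.01, steps=5000):
    history = []
    for _ in range(steps):
        dr = (a * rabbits - b * rabbits * foxes) * dt
        df = (c * rabbits * foxes - d * foxes) * dt
        rabbits, foxes = rabbits + dr, foxes + df
        history.append((rabbits, foxes))
    return history

# Two nearly identical starting points end up in visibly different places.
run_a = simulate(rabbits=10.0, foxes=5.0)
run_b = simulate(rabbits=10.1, foxes=5.0)
print(run_a[-1], run_b[-1])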

This creates a problem, as the data and models look good in slide form, as we can fix one model into a box that makes everyone feel warm and fuzzy. With the same model and different starting parameters - the outcome does not marry the plan.  Decision-making is not always easier with data!

As a CEO

How do you test that the model that is being presented flexes and provides sensitivity to changes and not wildly different outcomes? It is easy to frame a model to give the outcome we want.   






Wednesday, 08. June 2022

Just a Theory

Bryce Canyon 1987

Back in 1987 I made a photo at the Bryce Canyon Park. And now I’m posting it, because it’s *spectacular!*

The hoodoos of Bryce Canyon National Park. Photo made in the summer of 1987. © 1987 David E. Wheeler

Back in 1987, my mom and I went on a trip around the American Southwest. I was 18, freshly graduated from high school. We had reservations to ride donkeys down into the Grand Canyon, but, sadly, I got the flu, which kept us in the hotel along the rim.

The highlight of the trip turned out to be Bryce Canyon, where I made this photo of its famous hoodoos. Likely shot with Kodachrome 64, my go-to for sunny summer shots at the time, on a Pentax ME Super SLR with, as I recall, a 28-105mm lens. Mom asked me yesterday if I’d scanned photos from that trip and, digging into my scans, the deeply saturated colors with those lovely evergreens took my breath away.


Simon Willison

The End of Localhost

swyx makes the argument for cloud-based development environments, and points out that many large companies - including Google, Facebook, Shopify and GitHub - have made the move already. I was responsible for the team maintaining the local development environment experience at Eventbrite for a while, and my conclusion is that with a large enough engineering team someone will ALWAYS find a new way to break their local environment: the idea of being able to bootstrap a fresh, guaranteed-to-work environment in the cloud at the click of a button could save SO much time and money.


Announcing Pyston-lite: our Python JIT as an extension module

The Pyston JIT can now be installed in any Python 3.8 virtual environment by running "pip install pyston_lite_autoload" - which includes a hook to automatically inject the JIT. I just tried a very rough benchmark against Datasette (ab -n 1000 -c 10) and got 391.20 requests/second without the JIT compared to 404.10 requests/second with it.

Via Hacker News

Tuesday, 07. June 2022

Simon Willison

Weeknotes: Datasette Cloud ready to preview

I made an absolute ton of progress building Datasette Cloud on Fly this week, and also had a bunch of fun playing with GPT-3.

Datasette Cloud

Datasette Cloud is my upcoming hosted SaaS version of Datasette. I've been re-building my initial alpha on top of Fly because I want to be able to provide each team account with their own Datasette instance running in a dedicated Firecracker container, and the recently announced Fly Machines lets me do exactly that.

As of this weekend I have all of the different pieces in place, and I'm starting to preview it to potential customers.

Interested in trying it out? You can request access to the preview here.

GPT-3 explorations

Most of my GPT-3 explorations over the past week are covered by these two blog posts:

A Datasette tutorial written by GPT-3 is the point at which I really started taking GPT-3 seriously, after convincing myself that I could use it to help with real work, not just as a form of entertainment. How to play with the GPT-3 language model is a very quick getting started tutorial, because I polled people on Twitter and found that more than half didn't know you could try GPT-3 out now for free.

Searching my tweets for GPT captures a bunch of other, smaller experiments. A few highlights:

How to safely run two processes in one Docker container
"Write a book outline about Datasette in Spanish"
Getting GPT-3 to write yes-and improv scripts
Generating and then explaining complex ffmpeg recipes

Releases this week

datasette-upload-csvs: 0.7 - (8 releases total) - 2022-06-03
Datasette plugin for uploading CSV files and converting them to database tables
mbox-to-sqlite: 0.1a0 - 2022-05-31
Load email from .mbox files into SQLite
datasette-graphql: 2.0.2 - (34 releases total) - 2022-05-30
Datasette plugin providing an automatic GraphQL API for your SQLite databases
datasette-auth-existing-cookies: 1.0a1 - (12 releases total) - 2022-05-29
Datasette plugin that authenticates users based on existing domain cookies

TIL this week

Using just with Django

Monday, 06. June 2022

Heather Vescent

NFTs, ICOs and Ownership of Creative Ideas

Photo by Artur Aldyrkhanov on Unsplash

In my March 2022 Biometric Update column, I explained that NFTs are a big deal because they are a unique digital identity layer native to the Ethereum blockchain. This is exciting because it can be used to track the ownership of digital items. While NFTs are not without their problems, there is a growing appetite to explore the possibilities thanks to a culture that views the world in a fresh way.

ICOs

To understand the importance of NFTs, we need to understand the context of the world when NFTs were originally designed. In 2018, there was a lot of energy around exploring alternate currencies as a funding mechanism. The term ICO — or initial coin offering — was a method to raise funds for a new venture. But the vision of ICOs wasn’t only to raise money, but to create a community with shared values. It’s similar to an IPO or Kickstarter, but with one key difference: the community had its own currency that could be used for transactions. Many of the ICO projects used a cryptocurrency as part of the product design — a financial mechanism to enable cooperation in a complementary financial system (see Bernard Lietaer’s work on complementary currency systems). But an ICO was equally a signaling of belief in the project and a desire to innovate on existing economic models.

ICOs were problematic for many reasons, but one thing legit ICO creators wanted was the ability to issue a receipt or stock-like token to show that you are part of the community. This functionality was not possible with the existing transactional tokens. Different business models became available with a token that had a unique identity and ran on the same infrastructure as transactional tokens.

ICOs combined crowd-funding and cryptocurrency, and challenged economics as we knew it at the time. Not all ICOs succeeded, and there were scams. But ICOs are a successful innovation, making a funding mechanism that was previously only available to an elitist few, available more broadly. And it paved the way for NFTs which extend the transactional nature of tokens to enable unique identity while using the same ETH rails. These innovations enable new business models.

Artists and Ownership of Ideas

Artists are explorers pushing the boundaries of what we think technology can be used for and how it can be used. There are many challenges of being an artist. Not only do you have to successfully mine the well of creativity to create your art; you have to have some business acumen and be lucky to find success. Often the financial success of artists comes late in their career, and many die before they see the impact they’ve had on human society.

I was just commenting about this in the MAC Cosmetics store last week, when seeing MAC’s new Keith Haring Viva Glam series. Keith Haring was a revolutionary artist who died of AIDS before he could even see the influence of his work. And one of my first future scenarios explored this idea, through an alternate currency created specifically so creators could pay for longevity treatments to live longer and see the impact of their lives.

Photo by Danny Lines on Unsplash

But artists can be jerks. There are countless stories of less talented but better-known artists stealing ideas from unknown geniuses. Yayoi Kusama’s earliest ideas were stolen and utilized by Andy Warhol, Claes Oldenburg and Lucas Samaras, the results of which made them famous, while Kusama still struggled. Seeing the success of her stolen ideas under a different name almost destroyed her. There was no way for her to prove creative provenance, especially when credit was not given.

Jodorowsky’s Dune influenced the entire Sci-Fi industry, including the Star Wars, Alien and Blade Runner franchises. But none of this was known until relatively recently.

Then there are artists like Banksy, who create art in public spaces, on brick walls and billboards. I remember driving down La Brea in Los Angeles one morning in 2011 seeing the latest Banksy graffiti on a recent billboard, only to hear that the billboard owner took it down a few hours later – in order to capitalize on it! In another case, a concrete wall was removed because of the Banksy piece on it.

Photo by Nicolas J Leclercq on Unsplash

This illustrates the problems of creative provenance and ownership. Creative Commons licenses were created to provide a mechanism to license one’s work and allow others to use and remix it with attribution. But there aren’t good options for creators to protect against more powerful and better-resourced people who can execute on their (borrowed or stolen) ideas.

For artists who do sell their work, there is another conundrum. Artists only get paid on the first sale of their work, but their art can be sold at auctions later at an increased value. This makes art collectors in many ways investors, but the actual creator doesn’t get to benefit from the increased value after the art has left their hands. In some cases, the holder of the piece of art can make many millions more than the actual creator of the piece. This use case inspired the creation of the ERC-2981 royalty standard, where artists can specify a royalty amount paid back to them when the digital item is transferred.

Artists on one hand don’t always care about the ownership of ideas. On the other hand, you have to have money to live and keep making art. For anyone who has experienced someone taking their ideas and executing on them, perhaps not even realizing the ideas were not their own, it is extremely painful. But that’s the problem with ideas: if you want them to catch on, they have to become someone else’s. Unfortunately, those with additional resources benefit when “borrowing” someone’s idea simply because they have the resources to execute on it.

Do NFTs give control?

NFTs sell the dream that artists can have control over what others can do with their creations. If you put an NFT on something to show provenance, that could help, but only if the laws around IP and ownership change too. Culture needs to change too. We’re all standing on the shoulders of the past, whether we acknowledge it or not.

We shouldn’t be surprised at the explosion of NFT art — artists always use technology in novel ways that can be quite different from the use cases of the original creators. Traditional economic models haven’t supported creative efforts. And isn’t the point of artists to challenge the traditional ways we see the world? NFTs are an economic innovation that promises to give a tiny bit of power back to the artist.

I want to believe NFTs will help solve this problem, and I think they can partially address it. NFTs are a mechanism to give an artist more control and to enable others to support them directly. But there is still the larger problem of living in a world that doesn’t often value creative expression in economic terms, and of those with more power and resources using the ideas of others for their own gain.


Damien Bod

Using math expressions in github markdown


This blog explores creating some standard mathematical expressions using GitHub markdown. I was motivated to try this out after reading this blog. If you know the TeX commands available in MathJax, then creating math documentation in GitHub is a breeze. GitHub markdown uses MathJax.

Code: https://github.com/damienbod/math-docs

I decided to try out some basic math functions.

Quadratic equations examples

$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$

$(a+b)^2$

$$\eqalign{ (a+b)^2 &= (a+b)(a+b) \\ &= a^2 + ab + ba + b^2 \\ &= a^2 + 2ab + b^2 }$$

$(a-b)^2$

$$\eqalign{ (a-b)^2 &= (a-b)(a-b) \\ &= a^2 - ab - ba + b^2 \\ &= a^2 - 2ab + b^2 }$$

$(a-b)(a+b)$

$$\eqalign{ (a+b)(a-b) &= a^2 - ab + ba - b^2 \\ &= a^2 - b^2 }$$

Produces https://github.com/damienbod/math-docs#quadratic-equations

Functions

$$ f(x) = {\sqrt{5x^2+2x-1}+(x-2)^2 } $$

$$ g(x) = {\frac{a}{1-a^2} }$$

$$ f(x) = {(x + 2) \over (2x + 1)} $$

$$ f(x) = { \sqrt[3]{x^2} }$$

$$ \sqrt[5]{34}$$

Trigonometry examples

$$ \cos^2 \theta + \sin^2 \theta = 1 $$

$$ \tan 2\theta = {2\tan \theta \over 1 - \tan^2 \theta} $$

$$\eqalign{ \cos 2\theta &= \cos^2 \theta - \sin^2 \theta \\ &= 2 \cos^2 \theta - 1 \\ &= 1 - 2\sin^2 \theta }$$

Prove $ \sqrt{ 1 - \cos^2 \theta \over 1 - \sin^2 \theta} = \tan \theta $

$$ \sqrt{ 1 - \cos^2 \theta \over 1 - \sin^2 \theta} = \sqrt{ \sin^2 \theta \over \cos^2 \theta} = {\sin \theta \over \cos \theta} = \tan \theta $$

Calculus examples

$$\eqalign{ f(x) = {3x^4} \implies {dy \over dx} = 12x^3 }$$

$$\eqalign{ f(x) = {2x^{-3/2}} \implies {dy \over dx} = -3x^{-5/2} &= -{3 \over \sqrt{x^5}} }$$

If $x = 2t + 1$ and $y = t^2$, find ${dy \over dx}$.

$$\eqalign{ x = 2t + 1 \implies {dx \over dt} = 2 \\ y = t^2 \implies {dy \over dt} = 2t \\ {dy \over dx} = {dy \over dt} \div {dx \over dt} \\ \implies 2t \div 2 = t }$$

Integration examples

Evaluate $\int_1^2 (x + 4)^2 dx$

$$\eqalign{ \int_1^2 (x + 4)^2 dx &= \int_1^2 (x^2 + 8x + 16) dx \\ &= \left\lbrack {x^3 \over 3} + {8x^2 \over 2} + 16x \right\rbrack_1^2 \\ &= \left\lbrack {8 \over 3} + {8 * 4 \over 2} + 16 * 2 \right\rbrack - \left\lbrack {1 \over 3} + {8 \over 2} + 16 \right\rbrack }$$
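Carrying that evaluation through to a number (my addition, in the same eqalign style) gives:

$$\eqalign{ \left\lbrack {8 \over 3} + 16 + 32 \right\rbrack - \left\lbrack {1 \over 3} + 4 + 16 \right\rbrack &= {7 \over 3} + 28 \\ &= {91 \over 3} }$$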

Matrix example

$$ {\left\lbrack \matrix{2 & 3 \cr 4 & 5} \right\rbrack} * \left\lbrack \matrix{1 & 0 \cr 0 & 1} \right\rbrack = \left\lbrack \matrix{2 & 3 \cr 4 & 5} \right\rbrack $$

Sum examples

$$\sum_{k=1}^{n} k = {n \over 2} (n + 1) $$

$$\sum_{k=1}^{n} k^2 = {n \over 6} (n + 1)(2n + 1) $$
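A quick sanity check of the first identity (my addition), taking $n = 4$:

$$\eqalign{ \sum_{k=1}^{4} k &= 1 + 2 + 3 + 4 = 10 \\ {4 \over 2}(4 + 1) &= 2 \times 5 = 10 }$$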

Notes

It is now super easy to create great mathematical documentation using GitHub markdown. This is useful for many cases: schools, students and reports. This blog provides great help in finding the correct code to produce the docs.

Links

Render mathematical expressions in Markdown

https://www.mathjax.org/

http://docs.mathjax.org/en/latest/

https://docs.mathjax.org/en/v2.7-latest/tex.html

https://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm

Sunday, 05. June 2022

Identity Praxis, Inc.

MEF CONNECTS Personal Data & Identity Event & The Personal Data & Identity Meeting of The Waters: Things are Just Getting Started


This article was published by the MEF, on June 3, 2022.

Early last month, the MEF held its first-ever event dedicated to personal data and identity: MEF CONNECTS Personal Data & Identity Hybrid, on May 10 and 11, in London (watch all the videos here). It was unquestionably a huge success. Hundreds of people came together to learn, interact, and make an impact.

A Transformative Agenda

The event covered a wide range of strategic, tactical, and technical topics. In addition to recruiting the speakers and programming the event, I spoke on The Personal Data & Identity Meeting of the Waters and introduced The Identity Nexus, an equation that illustrates the social and commercial personal data and identity equilibrium. Together we discussed:

Leading identification, authentication, and verification strategies
The pros, cons, and comparisons of various biometric methods, including FaceID and VeinID
People’s attitudes and sentiments at the nexus of personal data, identity, privacy and trust across 10 markets and among U.S. undergraduate students
Passwordless authentication and approaches to self-sovereign identity and personal data management
Where personal data and identity meet physical and eCommerce retail, financial services, insurance, automotive, the U.S. military, and healthcare
The role of carriers and data brokers in improving the customer experience along the customer journey, identification, and combating fraud
Strategies for onboarding the over one billion people today without an ID, let alone a digital ID
The rise of the personal information economy and seven different approaches to empowering individuals to give them agency, autonomy, and control over their personal data and identity
Zero-party data strategies
Demonstrable strategies for securing IoT data traffic
Environmental, Social, and Governance (ESG) personal data and identity investment strategies
Emergent people-centric/human-centric business models
The rise of new regulations, including GDPR, CCPA, and new age-verification and age-gating regulations, and the impact they’ll have on every business
Frameworks to help business leaders at every level figure out what to keep doing, start doing, or do differently


MEF CONNECTS Personal Data & Identity 2022 Wordcloud

By the Numbers

The numbers tell it all –

Over 300 people engaging in person and online
26 sessions
11 hours and 50 minutes of recorded on-demand video content
43 senior leaders speaking on a wide range of topics:
  11 CEOs and Presidents (incl. a Lieutenant Colonel, U.S. Army (Ret.))
  4 C-suite executives (Strategy, Commercial, Marketing)
  3 Executive Directors & Co-founders
  5 SVPs and VPs
  7 Department Heads
  13 SMEs
37 companies and brands represented on stage: MyDex, Mobile Ecosystem Forum, Mercedes-Benz Cars, Identity Praxis, Inc., IG4Capital, Identity Woman, Leading Points, Women In Identity, Vodafone, ZARIOT, Spokeo, Sinch, AerPass, Hitachi Europe Ltd, Infobip, Insights Angels, Nickey Hickman, British Telecom, Cheetah Digital, Dataswift, Digi.me, Fingo, Assurant, Age Verification Providers Association, Twilio, Volvo, Ctrl-Shift, IPification, Pool Data, NatWest, iProov Ltd, Visa, XConnect, polyPoly, Skechers, Global Messaging Service [GMS], World Economic Forum
5 sponsors: Assurant, XConnect, Infobip, Cheetah Digital, and Sinch
1 book announcement: the pre-release announcement of my new book, “The Personal Data & Identity Meeting of the Waters: A Global Market Assessment”

It was an honor to share the stage with so many talented people, and a huge shout-out needs to be given to our sponsors and to the Mobile Ecosystem Forum team, who executed flawlessly.

We’re Just Getting Started and I’m Here for You

The global data market generates $11.7 trillion annually for the global economy. Industry experts forecast that efficient use of personal data and identity (not including the benefits of innovation, improving mental health and social systems, IoT interactions, banking and finance, road safety, reducing multi-trillion-dollar cybercrime losses, and more), can add one percent to thirteen percent of a country’s gross domestic product (GDP). And we’re just getting started. The personal data and identity tsunami is just now reaching and washing over the shores of every society and economy. No region, no community, no country, no government, no enterprise, no individual, no thing, is immune to its effects.

I’m here to help. I can help you get involved with the MEF Personal Data & Identity Working Group, understand the global and regional personal data and identity market, build and execute a balanced personal data and identity strategy and products, build people-centric customer experiences at every touchpoint along your customer journey, meet new people, identify and source partners, educate your team, impact global regulations, standards, and protocols, and identify programs and events that can help you and your organization learn, grow, and make a difference. Connect with me on LinkedIn or schedule a call with me here.

Meet with Me In Person in June

I’ll be speaking at MyData 2022 in Helsinki on June 20-23. If you can make it, please connect with me and let’s meet up (ping me if you need a discount code to attend).




Jon Udell

What happened to simple, basic web hosting?


For a friend’s memorial I signed up to make a batch of images into a slideshow. All I wanted was the Simplest Possible Thing: a web page that would cycle through a batch of images. It’s been a while since I did something like this, so I looked around and didn’t find anything that seemed simple enough. The recipes I found felt like overkill. Here’s all I wanted to do:

1. Put the images we’re gathering into a local folder
2. Run one command to build slideshow.html
3. Push the images plus slideshow.html to a web folder

Step 1 turned out to be harder than expected because a bunch of the images I got are in Apple’s HEIC format, so I had to find a tool that would convert those to JPG. Sigh.
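(Jon doesn’t say which tool he ended up using; for what it’s worth, one way to do the conversion in Python is with Pillow plus the pillow-heif plugin, assuming both packages are installed.)

import os

from PIL import Image
from pillow_heif import register_heif_opener

# Teach Pillow how to open .heic files (requires `pip install pillow pillow-heif`).
register_heif_opener()

for name in os.listdir():
    if name.lower().endswith(".heic"):
        # Convert to RGB first because HEIC images may carry an alpha channel.
        Image.open(name).convert("RGB").save(name[:-5] + ".jpg", "JPEG")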

For step 2 I wrote the script below. A lot of similar recipes you’ll find for this kind of thing will create a trio of HTML, CSS, and JavaScript files. That feels to me like overkill for something as simple as this. I want as few moving parts as possible, so the Python script bundles everything into slideshow.html, which is the only thing that needs to be uploaded (along with the images).

Step 3 was simple: I uploaded the JPGs and slideshow.html to a web folder.

Except, whoa, not so fast there, old-timer! True, it’s easy for me, I’ve maintained a personal web server for decades and I don’t think twice about pushing files to it. Once upon a time, when you signed up with an ISP, that was a standard part of the deal: you’d get web hosting, and would use an FTP client — or some kind of ISP-provided web app — to move files to your server.

As I realized a few years ago, that’s now a rare experience. It seems that for most people, it’s far from obvious how to push a chunk of basic web stuff to a basic web server. People know how to upload stuff to Google Drive, or WordPress, but those are not vanilla web hosting environments.

It’s a weird situation. The basic web platform has never been more approachable. Browsers have converged nicely on the core standards. Lots of people could write a simple app like this one. Many more could at least /use/ it. But I suspect it will be easier for many nowadays to install Python and run this script than to push its output to a web server.

I hate to sound like a Grumpy Old Fart. Nobody likes that guy. I don’t want to be that guy. So I’ll just ask: What am I missing here? Are there reasons why it’s no longer important or useful for most people to be able to use the most basic kind of web hosting?

import os

l = [i for i in os.listdir() if i.endswith('.jpg')]

divs = ''

for i in l:
    divs += f"""
<div class="slide">
  <img src="{i}">
</div>
"""

# Note: In a Python f-string, CSS/JS squiggies ({}) need to be doubled

html = f"""
<html>
<head>
<title>My Title</title>
<style>
  body {{ background-color: black }}
  .slide {{ text-align: center; display: none; }}
  img {{ height: 100% }}
</style>
</head>
<body>

<div id="slideshow">
  <div role="list">
    {divs}
  </div>
</div>

<script>
const slides = document.querySelectorAll('.slide')
const time = 5000

slides[0].style.display = 'block';

let i = 0

setInterval( () => {{
  i++
  if (i === slides.length) {{
    i = 0
  }}
  for (let j = 0; j <= i; j++ ) {{
    if ( j === i ) {{
      slides[j].style.display = 'block'
    }} else {{
      slides[j].style.display = 'none'
    }}
  }}
}}, time)
</script>

</body>
</html>
"""

with open('slideshow.html', 'w') as f:
    f.write(html)

Simon Willison

How to play with the GPT-3 language model


I ran a Twitter poll the other day asking if people had tried GPT-3 and why or why not. The winning option, by quite a long way, was "No, I don't know how to". So here's how to try it out, for free, without needing to write any code.

You don't need to use the API to try out GPT-3

I think a big reason people have been put off trying out GPT-3 is that OpenAI market it as the OpenAI API. This sounds like something that's going to require quite a bit of work to get started with.

But access to the API includes access to the GPT-3 playground, which is an interface that is incredibly easy to use. You get a text box, you type things in it, you press the "Submit" button. That's all you need to know.

How to sign up

To try out GPT-3 for free you need three things: an email address, a phone number that can receive SMS messages, and to be located in one of the supported countries and regions on this list.

1. Create an account at https://openai.com/join/ - you can create an email/password account or you can sign up using your Google or Microsoft account
2. Verify your email address (click the link in the email they send you)
3. Enter your phone number and wait for their text
4. Enter the code that they texted to you

New accounts get $18 of credit for the API, which expires after three months. Each query should cost single-digit cents to execute, so you can do a lot of experimentation without needing to spend any money.

How to use the playground

Once you've activated your account, head straight to the Playground:

https://beta.openai.com/playground

The interface looks like this (it works great on mobile too):

The only parts of this interface that matter are the text box and the Submit button. The right-hand panels can be used to control some settings, but the default settings work extremely well - I've been playing with GPT-3 for months and 99% of my queries used those defaults.

Now you can just type stuff into the box and hit that "Submit" button.

Try this one to get you started:

Three reasons to start a succulent garden

Prompt engineering

The text that you entered there is called a "prompt". Everything about working with GPT-3 is prompt engineering - trying different prompts, and iterating on specific prompts to see what kind of results you can get.

It's a programming activity that actually feels a lot more like spellcasting. It's almost impossible to reason about: I imagine even the creators of GPT-3 could not explain to you why certain prompts produce great results while others do not.

It's also absurdly good fun.

Adding more to the generated text

GPT-3 will often let you hit the Submit button more than once - especially if the output to your question has the scope to keep growing in length - "Tell me an ongoing saga about a pelican fighting a cheesecake" for example.

Each additional click of "Submit" costs more credit.

You can also add your own text anywhere in the GPT-3 output, or at the end. You can use this to prompt for more output, or ask for clarification. I like saying "Now add a twist" to story prompts to see what it comes up with.
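And if you do eventually want to script your prompts rather than use the playground, here's a minimal sketch (my addition; the model name and parameters are assumptions that mirror the playground defaults at the time of writing) using OpenAI's official Python package, with the API key read from an environment variable:

import os

import openai

# Assumes `pip install openai` and an API key exported as OPENAI_API_KEY
# (you can create one under your account settings at beta.openai.com).
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="text-davinci-002",  # the playground's default model at the time of writing
    prompt="Three reasons to start a succulent garden",
    max_tokens=256,
    temperature=0.7,
)

# The completion text lives in the first (and here only) choice.
print(response["choices"][0]["text"])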

Further reading

A Datasette tutorial written by GPT-3 describes my experiments getting GPT-3 to write a tutorial for my Datasette project
How GPT3 Works - Visualizations and Animations is a great explanation of how GPT-3 works, illustrated with animations
OpenAI GPT-3: Everything You Need to Know offers a good overview of GPT-3
GPT-3 as a muse: generating lyrics by Vincent Bons walks through some advanced GPT-3 fine tuning techniques to get it to output usable song lyrics inspired by the styles of various existing artists

Tuesday, 31. May 2022

Simon Willison

Compiling Black with mypyc


Compiling Black with mypyc

Richard Si is a Black contributor who recently obtained a 2x performance boost by compiling Black using the mypyc tool from the mypy project, which uses Python type annotations to generate a compiled C version of the Python logic. He wrote up this fantastic three-part series describing in detail how he achieved this, including plenty of tips on Python profiling and clever optimization tricks.

Via Łukasz Langa


A Datasette tutorial written by GPT-3


I've been playing around with OpenAI's GPT-3 language model playground for a few months now. It's a fascinating piece of software. You can sign up here - apparently there's no longer a waiting list.

I just got a particularly interesting result from it which I thought I would share.

Inspired by this tweet about automating documentation, the prompt I fed it was "a step by step tutorial for getting started with Datasette". I then kept on clicking the generate button until it stopped adding additional paragraphs.

Something I've learned about GPT-3 is that you need to fact check everything that it produces. It's great at writing convincing text, but there's no guarantees that anything will actually be true!

Here's what it came up with (original here). I've indented its output in order to add annotations.

This is a guide for getting started with Datasette. Datasette is a tool for creating and publishing data-driven websites. It is designed to make it easy to publish structured data, such as the results of a database query, in a way that is highly visible and interactive.

A really strong start. This almost captures exactly what Datasette does.

One tiny nitpick: Datasette is used to publish entire database tables, not just the results of specific queries.

This is honestly a really good attempt at an elevator pitch for the tool - something I've consistently found difficult to capture in my own writing.

Datasette can be used to create websites that allow users to explore and visualize data, or to build applications that expose data via APIs. It can also be used as a static site generator, creating a completely static HTML website that can be deployed anywhere.

Again, mostly right. Datasette doesn't work as a static site generator, yet - though there's an open issue to explore options around that. And of course there's Datasette Lite.

This guide will cover the basics of how to install and use Datasette. It will also show you how to create a simple data-driven website using Datasette.

1. Install Datasette

Datasette can be installed using pip, the Python package manager. To install Datasette, simply run the following command:

pip install datasette

Yup, that's how you install Datasette. brew install datasette works too.

2. Create a database

Datasette requires a SQLite database file in order to function. This database can be created using any SQLite client, such as the sqlite3 command-line tool.

Once you have created a SQLite database file, you can use Datasette to create a website that exposes the data in that database.

Entirely correct.

3. Run the datasette command

The datasette command is used to start a Datasette server. To start a server for the SQLite database file mydatabase.db , simply run the following command:

datasette mydatabase.db

This will start a Datasette server on http://localhost:8001 by default. You can now view the website in your web browser.

Honestly, it's getting spooky at this point. This is exactly right - it even got the correct port number (8001). I'd love to understand the training data that helped GPT-3 reach this conclusion. Has it been trained on the entire contents of GitHub perhaps?

4. Explore the data

Datasette provides a number of ways to explore the data in your database. The homepage of the website lists all of the tables in your database, as well as a search box that can be used to find specific data.

It made up that search box on the homepage, though it's a pretty great idea for a feature.

(You can get something a bit like that if you install the datasette-search-all plugin.)

Each table has its own page, which lists all of the columns in the table and provides a form that can be used to query the data. The results of each query are displayed in a tabular format, and can be downloaded as CSV files.

Again, spookily accurate.

5. Create an API endpoint

Datasette also provides a JSON API that can be used to programmatically access the data in your database. Each table in your database has its own API endpoint, which can be accessed using a URL of the form:

http://localhost:8001/mydatabase.db/mytable?_size=100

Here's where things get interesting. The description here is exactly correct, but that URL isn't quite right - it should be:

http://localhost:8001/mydatabase/mytable.json?_size=100

I'd love to know how it synthesized the incorrect URL here.

This URL will return the first 100 rows from the mytable table in JSON format. The _size parameter can be used to control the number of rows returned, and the _offset parameter can be used to skip rows (for example, _offset=100 will return rows 100-199).

I am deeply amused by this: the _size parameter is described correctly, but the _offset parameter here is a total fabrication: Datasette deliberately doesn't offer offset/limit pagination because it's inefficient against large tables. It implements keyset pagination instead, using a ?_next= parameter.
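(An aside from me, not part of the GPT-3 output: following the keyset pagination from Python is straightforward. This is a sketch assuming the default table JSON shape, which includes rows and a next_url key, and a hypothetical mydatabase/mytable.)

import requests

# Hypothetical local instance and table; follow next_url until it runs out.
url = "http://localhost:8001/mydatabase/mytable.json?_size=100"
rows = []

while url:
    data = requests.get(url).json()
    rows.extend(data["rows"])   # rows for the current page
    url = data.get("next_url")  # None/absent on the last page, which ends the loop

print(f"Fetched {len(rows)} rows")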

The Datasette JSON API can be used to build data-driven applications on top of your database.

Good copy - I may use this elsewhere!

6. Deploy your website

Once you have created a Datasette website, you can deploy it to any web hosting provider that supports Python. Datasette is designed to be lightweight and fast, so it can be easily deployed on any platform, including Heroku, Google App Engine, and Amazon Web Services.

I've not seen Datasette deployed on Google App Engine, but it runs great on Google Cloud Run.

Thoughts

Firstly, the amount of detail GPT-3 has accumulated about Datasette is fascinating: I really want to understand where it picked up all of this knowledge. I remember asking it about Datasette around a year ago and not getting results anything like this, so my hunch is that it's been trained on a new corpus of data at some point in the past twelve months.

I certainly don't see this as a replacement for professional technical writers, but as a tool for trying out new ideas, beating writer's block and generally brainstorming how documentation could work it's already something I would continue using.

A really interesting aspect of this is "prompt engineering" - iterating on prompts and figuring out what can get the best results out of GPT-3. I wrote this article partly to highlight how good the results for "a step by step tutorial for getting started with ..." could be.

Marketing landing pages

As a quick bonus: I think GPT-3 really excels when it comes to marketing copy (maybe because I don't rate myself at all highly in that department).

Here's its response to the prompt: Write a marketing landing page for Datasette Cloud

Datasette Cloud is a new way to host and manage your data. It's built on top of the Datasette open source project and offers all the same features - including instant full-text search, SQLite database downloads and JSON API access - but with added benefits like automatic backups, 24/7 uptime monitoring and support from the Datasette team.

If you're looking for an easy way to host and manage your data, Datasette Cloud is the perfect solution. Try it today and see how easy it is to get started.

Honestly this does a significantly better job of describing the hosted Datasette product I've been building than I've managed myself so far.

Fascinating.


Lesser Known Features of ClickHouse


Lesser Known Features of ClickHouse

I keep hearing positive noises about ClickHouse. I learned about a whole bunch of capabilities from this article - including that ClickHouse can directly query tables that are stored in SQLite or PostgreSQL.

Via Hacker News

Monday, 30. May 2022

Simon Willison

Dragonfly: A modern replacement for Redis and Memcached


Dragonfly: A modern replacement for Redis and Memcached

I was initially pretty skeptical of the tagline: does Redis really need a "modern" replacement? But the Background section of the README makes this look like a genuinely interesting project. It re-imagines Redis to have its keyspace partitioned across multiple threads, and uses the VLL lock manager described in a 2014 paper to "compose atomic multi-key operations without using mutexes or spinlocks". The initial benchmarks show up to a 25x increase in throughput compared to Redis. It's written in C++.

Via Hacker News


Phil Windley's Technometria

Twenty Years of Blogging


Summary: Blogging has been good to me. Blogging has been good for me.

Leslie Lamport said "If you think you understand something, and don’t write down your ideas, you only think you’re thinking." I agree wholeheartedly. I often think "Oh, I get this" and then go to write it down and find all kinds of holes in my understanding. I write to understand. Consequently, I write my blog for me. But I hope you get something out of it too!

I started blogging in May 2002, twenty years ago today. I'd been thinking about blogging for about a year before that, but hadn't found the right tool. Jon Udell, who I didn't know then, mentioned his blog in an InfoWorld column. He was using Dave Winer's Radio Userland, so I downloaded it and started writing. At the time I was CIO for the State of Utah, so I garnered a bit of notoriety as a C-level blogger. And I had plenty of things to blog about.

Later, I moved to MovableType and then, like many developers who blog, wrote my own blogging system. I was tired of the complexity of blogging platforms that required a database. I didn't want the hassle. I write the body of each post in Emacs using custom macros I created. Then my blogging system generates pages from the bodies using a collection of templates. I use rsync to push them up to my server on AWS. Simple, fast, and completely under my control.

Along the way, I've influenced my family to blog. My wife, Lynne, built a blog to document her study abroad to Europe in 2019. My son Bradford has a blog where he publishes short stories. My daughter Alli is a food blogger and entrepreneur with a large following. My daughter Samantha is an illustrator and keeps her portfolio on her blog.

Doc Searls, another good friend who I met through blogging, says you can make money from your blog or because of it. I'm definitely in the latter camp. Because I write for me, I don't want to do the things necessary to grow an audience and make my blog pay. But my life and bank account are richer because I blog. Jon, Dave, and Doc are just a few of countless friends I've made blogging. I wouldn't have written my first book if Doug Kaye, another blogging friend, hadn't suggested it. I wouldn't have started Internet Identity Workshop or been the Executive Producer of IT Conversations. I documented the process of creating my second startup, Kynetx, on my blog. And, of course, I've written a bit (402 posts so far, almost 10% of the total) on identity. I've been invited to speak, write, consult, and travel because of what I write.

After 20 years, blogging has become a way of life. I think about things to write all the time. I can't imagine not blogging. Obviously, I recommend it. You'll become a better writer if you blog regularly. And you'll better understand what you write about. Get a domain name so you can move it, because you will, and you don't want to lose what you've written. You may not build a brand, but you'll build yourself and that's the ultimate reward for blogging.

Photo Credit: MacBook Air keyboard 2 from TheumasNL (CC BY-SA 4.0)

Tags: blogging


Damien Bod

Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core


This post shows how to force an Azure AD policy using Azure AD Continuous Access (CA) in an ASP.NET Core Blazor application. An authentication context is used to require MFA. The “acrs” claim in the id_token is used to validate whether or not an Azure AD CAE policy has been fulfilled. If the claim is missing, an OpenID Connect challenge is sent to the Azure AD identity provider to request and require this. In this sample, MFA is required.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph)
Add a CA policy which uses the authentication context
Implement the Blazor backend to handle the CA validation correctly
Implement an authentication challenge using the claims challenge in the Blazor WASM

Setup overview

A Blazor WASM application is implemented and hosted in an ASP.NET Core application. This is a single application, also known as a server-rendered application. The application is secured using a single confidential client, and the security is implemented in the trusted backend with no sensitive token data stored in the browser. Cookies are used to store the sensitive data. Microsoft.Identity.Web is used to implement the security. The Microsoft.Identity.Web lib is an OpenID Connect client wrapper from Microsoft with some Microsoft Azure specifics.

Creating a conditional access authentication context

A continuous access evaluation (CAE) authentication context was created using Microsoft Graph and a policy was created to use this. See the first blog in this series for details on setting this up.

Validate the CAE in the Blazor backend

The CaeClaimsChallengeService class is used to implement the CAE check in the application. The class checks for the acrs claim and returns a claims challenge requesting the claim if this is missing.

namespace BlazorBffAzureAD.Server;

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
///
/// This class is only required if using a standalone AuthContext check
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public string? CheckForRequiredAuthContextIdToken(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context), "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;
                var cae = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authContextId + "\"}}}";

                return cae;
            }
        }

        return null;
    }
}

The AdminApiCallsController is used to provide data for the Blazor WASM UI. If the identity does not have the required authorization, an unauthorized response is returned to the UI with the claims challenge.

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BlazorBffAzureAD.Server.Controllers;

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class AdminApiCallsController : ControllerBase
{
    private readonly CaeClaimsChallengeService _caeClaimsChallengeService;

    public AdminApiCallsController(CaeClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // if CAE claim missing in id token, the required claims challenge is returned
        var claimsChallenge = _caeClaimsChallengeService
            .CheckForRequiredAuthContextIdToken(AuthContextId.C1, HttpContext);

        if (claimsChallenge != null)
        {
            return Unauthorized(claimsChallenge);
        }

        return Ok(new List<string>() { "Admin data 1", "Admin data 2" });
    }
}

Handling the authentication challenge in the Blazor WASM client

The Blazor WASM client handles the unauthorized response by authenticating again using Azure AD. If the claims challenge is returned, a step up authentication is sent to Azure AD with the challenge. The CaeStepUp method is used to implement the UI part of this flow.

using System.Net;

namespace BlazorBffAzureAD.Client.Services;

// orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
public class AuthorizedHandler : DelegatingHandler
{
    private readonly HostAuthenticationStateProvider _authenticationStateProvider;

    public AuthorizedHandler(HostAuthenticationStateProvider authenticationStateProvider)
    {
        _authenticationStateProvider = authenticationStateProvider;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var authState = await _authenticationStateProvider.GetAuthenticationStateAsync();
        HttpResponseMessage responseMessage;

        if (authState.User.Identity != null && !authState.User.Identity.IsAuthenticated)
        {
            // if user is not authenticated, immediately set response status to 401 Unauthorized
            responseMessage = new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
        else
        {
            responseMessage = await base.SendAsync(request, cancellationToken);
        }

        if (responseMessage.StatusCode == HttpStatusCode.Unauthorized)
        {
            var content = await responseMessage.Content.ReadAsStringAsync();

            // if server returned 401 Unauthorized, redirect to login page
            if (content != null && content.Contains("acr")) // CAE
            {
                _authenticationStateProvider.CaeStepUp(content);
            }
            else // standard
            {
                _authenticationStateProvider.SignIn();
            }
        }

        return responseMessage;
    }
}

The CaeStepUp method navigates to the authorization URL of the backend application with the claims challenge passed as a parameter.

public void CaeStepUp(string claimsChallenge, string? customReturnUrl = null)
{
    var returnUrl = customReturnUrl != null
        ? _navigation.ToAbsoluteUri(customReturnUrl).ToString()
        : null;

    var encodedReturnUrl = Uri.EscapeDataString(returnUrl ?? _navigation.Uri);

    var logInUrl = _navigation.ToAbsoluteUri(
        $"{LogInPath}?claimsChallenge={claimsChallenge}&returnUrl={encodedReturnUrl}");

    _navigation.NavigateTo(logInUrl.ToString(), true);
}

The Login action checks for a claims challenge and starts an authentication process using Azure AD and the Microsoft.Identity.Web client.

[Route("api/[controller]")] public class AccountController : ControllerBase { [HttpGet("Login")] public ActionResult Login(string? returnUrl, string? claimsChallenge) { // var claims = "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; // var claims = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/"; var properties = new AuthenticationProperties { RedirectUri = redirectUri }; if(claimsChallenge != null) { string jsonString = claimsChallenge.Replace("\\", "") .Trim(new char[1] { '"' }); properties.Items["claims"] = jsonString; } return Challenge(properties); }

Using CAE is a useful way to enforce authorization policies in Azure applications. This can be implemented easily in an ASP.NET Core application.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Sunday, 29. May 2022

Here's Tom with the Weather

Linking to OpenLibrary for read posts


My blog is now linking to openlibrary.org for read posts. If you have the book’s ISBN, then it is trivial to link to openlibrary’s page for your book. It would be cool if those pages accepted webmention so that you could see who is reading the book.



Rebecca Rachmany

Tokenomics: Three Foundations for Creating a Token Economy


Requests for tokenomics consulting have been bombarding me lately. My theory is that it’s because almost everyone recognizes that nothing any tokenomics expert has told them makes any sense. Quite a few clients have reported that all the tokenomics experts tell them it’s about the marketing and the burn rate, and they, as founders, can’t understand where the real value is.

In other words, tokenomics seems to be complete baloney to many founders. And they’re not wrong. In this post I’m going to go into three considerations in constructing your tokenomics model and how each one should affect the model. I won’t go into any fancy mathematical formulae, just the basic logic.

The three considerations, in order of importance are:

1. The founders’ goals and desires.
2. What “the market/investors” will invest in.
3. Sustainable and logical tokenomics that move the project forward.

Just separating these three out is a revelation for many founders. It may be completely possible to raise a lot of funds on a model that is completely at odds with the sustainability of the project. It’s equally easy to raise a lot of money and have a successful company and not accomplish any of your personal goals. Many entrepreneurs go into business thinking that once they succeed, they’ll have more money and time for the things they really want to do. How’s that working out?

What do you REALLY want?

Issuing a token seems to be a fast way to big money, and there’s also some stuff about freedom and democracy, so blockchain naturally attracts a huge crowd. Let’s assume that you do raise the money you want for your project.

What do you really want as a founder?

To create a successful long-term business that contributes value to the world?
To expand or get a better valuation for an existing company?
To build a better form of democracy?
To build cool tech stuff?
To rescue the rain forests?
To prove yourself in the blockchain industry so you’ll have a future career?
To have enough money to buy an island and retire?
To provide a way for poor people to make a living in crypto?
To get rich and show everyone they were wrong about how crypto is a bubble?
To get out of the existing rat race before the economy completely collapses?
To save others from the collapse by getting everyone a bitcoin wallet and a few satoshis?

Usually you and the other founders will have a combination of personal goals, commitments to your family, values that you want to promote, and excitement about a particular project.

The tokenomics should align with your goals. Generally speaking:

There are serious legal implications and potential repercussions to raising money through a token launch. If you have an existing, profitable business, you do have something to lose by getting it wrapped up in crypto.
Projects do need money, and pretending you have a good tokenomics model can get you there.
If you have an idea for a blockchain project, chances are 98% that someone else has already done something similar. Ask yourself honestly why you aren’t just joining them. If you think you can do it better, ask yourself why you don’t just help them be better. Do your research to understand the challenges they are facing, because you are about to face them.
If your main inspiration is building a great business or getting career experience, joining a project that already raised money might get you there faster.
If you are doing a “social good” project, monetary incentives will corrupt the project.
If you love DeFi, yield farming, and all that stuff, and just want to make money, you probably will do better working hard and investing in the right projects rather than taking the legal and personal risks involved in your own token.

Personally, my core reason for writing whitepapers and consulting in Web3 is because I love helping people accomplish their goals. I’ve been working with entrepreneurs for 30 years, and nothing beats the satisfaction of watching people accomplish their dreams.

Investors, what’s an investor?

The second consideration is what the “investors” will perceive as a good tokenomics model. If you’ve gotten this far, you’ve already decided to raise money through a token sale. The only way to do that is to create tokenomics that investors will love.

It does not matter if the tokenomics model makes sense. It does not matter if the tokenomics model works in practice. It does not matter if the tokenomics model works in theory.

All of those models of burn rate and stuff do not matter for the purposes of selling your token EXCEPT that they need to align with what the investor-du-jour thinks the calculation-du-jour should be. All of those charts and models are pure poppycock. With rare exceptions, none of the people doing the modelling are central bankers, monetary theory experts, mathematicians or data scientists. If they were, they would either tell you it doesn’t work, or that it’s speculative and unproven, or they would create something that would never pass your legal team.

The good news is that you don’t have to understand tokenomics or make something sensible to create a useful tokenomics model. You just have to copy the thing-du-jour and have good marketing.

After all, these people aren’t really investors, are they? They are going to dump your coin as soon as they can. They aren’t going to use the token for the “ecosystem”. They aren’t going to advise or critique you on anything beyond the token price. They are in for the quick profits. Your job is to figure out how to pump the coin for as long as possible and do whatever it was you planned in step one (what you want) with the money you raised. Nobody is being fooled around here. Let’s not pretend that there was actually some project you wanted to do with the money. If there was, also fine, and you just keep doing that with the funds raised, and if it succeeds, that’s a bonus, but it doesn’t matter to these “investors”. They weren’t hodling for the long term.

If you have real investors who are in it for the long term, BTW, you might be coming to me to write a whitepaper for you, but you wouldn’t be reading my advice on tokenomics. You’ve probably got those people on board already.

To summarize, in today’s market, what you are trying to create is a speculative deflationary model that you can market for as long as possible. This is not sustainable for the actual product, as I’ll cover in the next section.

What would actually work?

As far as I can tell, based on my experience working with more than 300 projects, there is no empirical evidence that any of the tokenomics models work, other than security tokens, where you really give investors equity in the project.

Token models are not designed to give the token holders profit. So far, all of the cryptocurrency market is based on speculation. You can potentially argue that Bitcoin and a few other coins are really useful as a store and exchange of value, but it is too late to invent Bitcoin again.

Let me clarify that, because it’s not customary for a tokenomics expert to say “none of the token models work.” Consider Ethereum: one of the best possible outcomes, and one that also triggers a failure mode.

Ethereum is one of the very few cryptocurrencies that works as advertised in the whitepaper: it is used as the utility token on the Ethereum network. It’s also a great investment, because it has generally gone up in value. So it “works” as advertised, but the failure mode is that it’s gotten too damn expensive. Yes, it rose in value, which is great for investors. But using the Ethereum network is prohibitively expensive. The best thing to do with ETH is hodl, not utilize. For your project, think of the following models:

You create the BeanCoin, which allows the holder to get a bushel of beans for 1 coin. You are successful and the BeanCoin is now worth $500, but nobody wants to spend $500 for a bushel of beans. Investors are happy but the coin is useless (see the toy calculation after this list).
You create the PollCoin, a governance token that allows the holder to vote for community proposals and elections. You are successful and the PollCoin is worth $500, and now it costs a minimum of $500 to become a citizen of the community and $2,500 to submit a proposal to the community. The best companies and people to do the work don’t want to submit a proposal because the risk of losing that money is too high. Anyone who bought in early to PollCoin sells, because they would rather have money than a vote in a community of elitist rich people with poor execution, because nobody wants to submit proposals to do work.
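To make the failure mode concrete, here is a toy calculation of my own (not from any real project): when the token redeems for a fixed amount of real-world utility, the fiat cost of using it rises one-for-one with the token price.

# Toy model: a utility token that redeems for a fixed real-world good.
# Investor success (a higher token price) directly raises the cost of using it.
def cost_to_use_usd(token_price_usd: float, tokens_required: float = 1.0) -> float:
    return token_price_usd * tokens_required

for price in (1, 50, 500):
    print(f"BeanCoin at ${price}: a bushel of beans now costs ${cost_to_use_usd(price):,.2f}")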

In other words, when you create a deflationary coin or token, by default, the success of the token is also its failure.

But what about Pay-to-Earn models? Haven’t they been a success? How about DAOs where the community does the work?

First of all, any project younger than 3 years old can’t be considered as a model for long-term tokenomics success. The best we can say is “good so far”. Secondly, nobody has ever been able to explain to me how “giving away money” is a potential long-term business model.

A token for everyone

A surprising number of people who contact me have not thought deeply about what they want on a six-month, two-year, or ten-year scale when they launch these projects. Many people think a token is an easy way to raise money, which it is, relative to many other ways of raising money. But keep in mind that every step you take in your entrepreneurial journey is just a step closer to the next, usually bigger, problem. As you launch your token, make sure to check in with yourself and your other founders that you’re ready for the next challenge down the pike.

Friday, 27. May 2022

Simon Willison

Architecture Notes: Datasette


Architecture Notes: Datasette

I was interviewed for the first edition of Architecture Notes - a new publication (website and newsletter) about software architecture created by Mahdi Yusuf. We covered a bunch of topics in detail: ASGI, SQLite and asyncio, Baked Data, plugin hook design, Python in WebAssembly, Python in an Electron app and more. Mahdi also turned my scrappy diagrams into beautiful illustrations for the piece.

Via @arcnotes

Thursday, 26. May 2022

Phil Windley's Technometria

Using a Theory of Justice to Build a Better Web3


Summary: Building a better internet won't happen by chance or simply maximizing freedom. We have to build systems that support justice. How can we do that?

Philosophy discussions are the black hole of identity. Once you get in, you can't get out. Nevertheless, I find that I'm drawn to them. I'm a big proponent of self-sovereign identity (SSI) precisely because I believe that autonomy and agency are a vital part of building a new web that works for everyone. Consequently, I read Web3 Is Our Chance to Make a Better Internet with interest because it applied John Rawls's thought experiment known as the "veil of ignorance," from his influential 1971 work A Theory of Justice, to propose three things we can do in Web3 to build a fairer internet:

Promote self-determination and agency
Reward participation, not just capital
Incorporate initiatives that benefit the disadvantaged

Let's consider each of these in turn.

Promoting Self-Determination and Agency

As I wrote in Web3: Self-Sovereign Authority and Self-Certifying Protocols,

Web3, self-sovereign authority enabled by self-certifying protocols, gives us a mechanism for creating a digital existence that respects human dignity and autonomy. We can live lives as digitally embodied beings able to operationalize our digital relationships in ways that provide rich, meaningful interactions. Self-sovereign identity (SSI) and self-certifying protocols provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. Web3, built in this way, allows people to act as full-fledged participants in the digital realm.

There are, of course, ways to screw this up. Notably, many Web3 proponents don't really get identity and propose solutions to identity problems that are downright dangerous and antithetical to their aim of self-determination and agency. Writing about Central Bank Digital Currencies (CBDCs), Dave Birch said this:

The connection between digital identity and digital currency is critical. We must get the identity side of the equation right before we continue with the money side of the equation. As I told the Lords' committee at the very beginning of my evidence, "I am a very strong supporter of retail digital currency, but I am acutely aware of the potential for a colossal privacy catastrophe". From Identity And The New Money
Referenced 2022-05-18T16:14:50-0600

Now, whether you see a role for CBDCs in Web3 or see them as the last-ditch effort of the old guard to preserve their relevance, Dave's points about identity are still true regardless of what currency systems you support. We don't want identity in Web3 merely for anti-money laundering and other fraud protection mechanisms (although those might be welcome in a Web3 world that isn't a hellhole), but because identity is the basis for agency. And if we do it wrong, we destroy the very thing we're trying to promote. Someone recently said (I wish I had a reference) that using your Ethereum address for your online identity is like introducing yourself at a party using your bank balance. A bit awkward, at the least.

Rewarding Participation

If you look at the poster children of Web3, cryptocurrencies and NFTs, the record is spotty for how well these systems reward participation rather than rewarding early investors. But that doesn't have to be the case. In Why Build in Web3, Jad Esber and Scott Duke Kominers describe the "Adam Bomb" NFT:

For example, The Hundreds, a popular streetwear brand, recently sold NFTs themed around their mascot, the "Adam Bomb." Holding one of these NFTs gives access to community events and exclusive merchandise, providing a way for the brand's fans to meet and engage with each other — and thus reinforcing their enthusiasm. The Hundreds also spontaneously announced that it would pay royalties (in store credit) to owners of the NFTs associated to Adam Bombs that were used in some of its clothing collections. This made it roughly as if you could have part ownership in the Ralph Lauren emblem, and every new line of polos that used that emblem would give you a dividend. Partially decentralizing the brand's value in this way led The Hundreds's community to feel even more attached to the IP and to go out of their way to promote it — to the point that some community members even got Adam Bomb tattoos. From Why Build in Web3
Referenced 2022-05-17T14:42:53-0600

NFTs are a good match for this use case because they represent ownership and are transferable. The Hundreds doesn't likely care if someone other than the original purchaser of an Adam Bomb NFT uses it to get a discount so long as they can authenticate it. Esber and Kominers go on to say:

Sharing ownership allows for more incentive alignment between products and their derivatives, creating incentives for everyone to become a builder and contributor.

NFTs aren't the only way to reward participation. Another example is the Helium Network. Helium is a network of more than 700,000 LoRaWAN hotspots around the world. Operators of the hotspots, like me, are rewarded in HNT tokens for providing the hotspot and network backhaul using a method called "proof of coverage" that ensures the hotspot is active in a specific geographic area. The reason the network is so large is precisely because Helium uses its cryptocurrency to reward participants for the activities that grow the network and keep it functioning.

Building web3 ecosystems that reward participation is in stark contrast to Web 2.0 platforms that treat their participants as mere customers (at best) or profit from surveillance capitalism (at worst).

Incorporating Initiatives that Benefit the Disadvantaged

The HBR article acknowledges that this is the hardest one to enable using technology, because it's often a function of governance. One of the things we tried to do at Sovrin Foundation was live true to the tagline: Identity for All. We spent a lot of time on governance for just this reason: many of the participants in the Foundation worked on initiatives like financial inclusion and guardianship to ensure the systems we were building and promoting worked for everyone. These efforts cost us the support of some of our more "business-oriented" partners and stewards who just wanted to get to the business of quickly building a credential system that worked for their needs. But we let them walk away rather than cutting back on governance efforts in support of identity for all.

The important parts of Web3 aren't as sexy as ICOs and bored apes, but they are what will ensure we build something that supports a digital life worth living. Web 2.0 didn't do so well in the justice department. I believe Web3 is our chance to build a better internet, but only if we promote self-determination, reward participation, and build incentives that benefit the disadvantaged as well as those better off.

Notes

1. The "veil of ignorance" asks a system designer to consider what system they would design if they were in a disadvantaged situation, rather than their current situation. For example, if you're designing a cryptocurrency, assume you're one of the people late to the game. What design decisions would make the system fair for you in that situation?

Photo Credit: Artists-impressions-of-Lady-Justice from Lonpicman (CC BY-SA 3.0)

Tags: web3 freedom agency ssi


Simon Willison

Weeknotes: Building Datasette Cloud on Fly Machines, Furo for documentation

Hosting provider Fly released Fly Machines this week. I got an early preview and I've been working with it for a few days - it's a fascinating new piece of technology. I'm using it to get my hosting service for Datasette ready for wider release. Datasette Cloud Datasette Cloud is the name I've given my forthcoming hosted SaaS version of Datasette. I'm building it for two reasons: This is a

Hosting provider Fly released Fly Machines this week. I got an early preview and I've been working with it for a few days - it's a fascinating new piece of technology. I'm using it to get my hosting service for Datasette ready for wider release.

Datasette Cloud

Datasette Cloud is the name I've given my forthcoming hosted SaaS version of Datasette. I'm building it for two reasons:

1. This is an obvious step towards building a sustainable business model for my open source project. It's a reasonably well-trodden path at this point: plenty of projects have demonstrated that offering paid hosting for an open source project can build a valuable business. GitLab are an especially good example of this model.

2. There are plenty of people who could benefit from Datasette, but the friction involved in hosting it prevents them from taking advantage of the software. I've tried to make it as easy to host as possible, but without a SaaS hosted version I'm failing to deliver value to the people that I most want the software to help.

My previous alpha was built directly on Docker, running everything on a single large VPS. Obviously it needed to scale beyond one machine, and I started experimenting with Kubernetes to make this happen.

I also want to allow users to run their own plugins, without risk of malicious code causing problems for other accounts. Docker and Kubernetes containers don't offer the isolation that I need to feel comfortable doing this, so I started researching Firecracker - constructed by AWS to power Lambda and Fargate, so very much designed with potentially malicious code in mind.

Spinning up Firecracker on a Kubernetes cluster is no small lift!

And then I heard about Fly Machines. And it looks like it's exactly what I need to get this project to the next milestone.

Fly Machines

Fly's core offering allows you to run Docker containers in regions around the world, compiled (automatically by Fly) to Firecracker containers with geo-load-balancing so users automatically get routed to an instance running near them.

Their new Fly Machines product gives you a new way to run containers there: you get full control over when containers are created, updated, started, stopped and destroyed. It's the exact level of control I need to build Datasette Cloud.

It also implements scale-to-zero: you can stop a container, and Fly will automatically start it back up again for you (generally in less than a second) when fresh traffic comes in.

(I had built my own version of this for my Datasette Cloud alpha, but the spin up time took more like 10s and involved showing the user a custom progress bar to help them see what was going on.)

Being able to programmatically start and stop Firecracker containers was exactly what I'd been trying to piece together using Kubernetes - and the ability to control which global region they go in (with the potential for Litestream replication between regions in the future) is a feature I hadn't expected to be able to offer for years.
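To make the shape of that control concrete, here's a rough sketch of the machine lifecycle calls in Python. The endpoint host and payload layout are assumptions based on Fly's public Machines API documentation rather than the preview described here, and the app name and image are placeholders.

import os
import requests

# Assumptions: the api.machines.dev host and request shapes follow Fly's
# public Machines API docs; the 2022 preview may have differed.
API = "https://api.machines.dev/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['FLY_API_TOKEN']}"}
APP = "my-datasette-cloud"  # hypothetical Fly app name

# Create (and boot) a machine running a Datasette container in a chosen region
machine = requests.post(
    f"{API}/apps/{APP}/machines",
    headers=HEADERS,
    json={
        "region": "sjc",
        "config": {
            "image": "datasetteproject/datasette:latest",
            "services": [
                {
                    "protocol": "tcp",
                    "internal_port": 8001,
                    "ports": [{"port": 443, "handlers": ["tls", "http"]}],
                }
            ],
        },
    },
).json()

# Stop it again when a tenant goes idle; Fly restarts it on the next request
requests.post(f"{API}/apps/{APP}/machines/{machine['id']}/stop", headers=HEADERS)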

So I spent most of this week on a proof of concept. I've successfully demonstrated that the Fly Machines product has almost exactly the features that I need to ship Datasette Cloud on Fly Machines - and I've confirmed that the gaps I need to fill are on Fly's near-term roadmap.

I don't have anything to demonstrate publicly just yet, but I do have several new TILs.

If this sounds interesting to you or your organization and you'd like to try it out, drop me an email at swillison @ Google's email service.

The Furo theme for Sphinx

My shot-scraper automated screenshot tool's README had got a little too long, so I decided to upgrade it to a full documentation website.

I chose to use MyST and Sphinx for this, hosted on Read The Docs.

MyST adds Markdown syntax to Sphinx, which is easier to remember (and for people to contribute to) than reStructuredText.

After putting the site live, Adam Johnson suggested I take a look at the Furo theme. I'd previously found Sphinx themes hard to navigate because they had so much differing functionality, but a personal recommendation turned out to be exactly what I needed.

Furo is really nice - it fixed a slight rendering complaint I had about nested lists in the theme I was using, and since it doesn't use web fonts it dropped the bytes transferred for a page of documentation by more than half!

I switched shot-scraper over to Furo, and liked it so much that I switched over Datasette and sqlite-utils too.
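If you want to try the same stack, the Sphinx side of the switch is only a couple of settings in conf.py. A minimal sketch - not the actual shot-scraper configuration - assuming the myst-parser and furo packages are installed:

# docs/conf.py - minimal MyST + Furo setup (illustrative only)
project = "shot-scraper"

extensions = [
    "myst_parser",  # lets Sphinx read Markdown (.md) sources
]

html_theme = "furo"

On Read the Docs the same two packages also need to be listed in the documentation build requirements.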

Here's what the shot-scraper documentation looks like now:

Screenshot taken using shot-scraper itself, like this:

shot-scraper \
  https://shot-scraper.datasette.io/en/latest/ \
  --retina --height 1200

Full details of those theme migrations (including more comparative screenshots) can be found in these issues:

shot-scraper: Switch to Furo theme #77
datasette: Switch documentation theme to Furo #1746
sqlite-utils: Switch to Furo documentation theme #435

Releases this week

datasette-unsafe-expose-env: 0.1 - 2022-05-25
Datasette plugin to expose some environment variables at /-/env for debugging

shot-scraper: 0.14.1 - (16 releases total) - 2022-05-22
A command-line utility for taking automated screenshots of websites

google-calendar-to-sqlite: 0.1a0 - 2022-05-21
Create a SQLite database containing your data from Google Calendar

datasette-upload-dbs: 0.1.1 - (2 releases total) - 2022-05-17
Upload SQLite database files to Datasette

datasette-insert: 0.7 - (7 releases total) - 2022-05-16
Datasette plugin for inserting and updating data

TIL this week

GitHub Actions job summaries
Optimizing PNGs in GitHub Actions using Oxipng
Using the Fly Docker registry
Writing Fly logs to S3
Wildcard DNS and SSL on Fly

upptime

upptime "Open-source uptime monitor and status page, powered entirely by GitHub Actions, Issues, and Pages." This is a very creative (ab)use of GitHub Actions: it runs a scheduled action to check the availability of sites that you specify, records the results in a YAML file (with the commit history tracking them over time) and can automatically open a GitHub issue for you if it detects a new inc

upptime

"Open-source uptime monitor and status page, powered entirely by GitHub Actions, Issues, and Pages." This is a very creative (ab)use of GitHub Actions: it runs a scheduled action to check the availability of sites that you specify, records the results in a YAML file (with the commit history tracking them over time) and can automatically open a GitHub issue for you if it detects a new incident.

Via Ray Voelker


Benjamin "Zags" Zagorsky: Handling Timezones in Python

Benjamin "Zags" Zagorsky: Handling Timezones in Python The talks from PyCon US have started appearing on YouTube. I found this one really useful for shoring up my Python timezone knowledge: It reminds that if your code calls datetime.now(), datetime.utcnow() or date.today(), you have timezone bugs - you've been working with ambiguous representations of instances in time that could span a 26 hour

Benjamin "Zags" Zagorsky: Handling Timezones in Python

The talks from PyCon US have started appearing on YouTube. I found this one really useful for shoring up my Python timezone knowledge. It's a reminder that if your code calls datetime.now(), datetime.utcnow() or date.today(), you have timezone bugs - you've been working with ambiguous representations of instants in time that could span a 26-hour interval from UTC-12 to UTC+14. date.today() represents a 24-hour period and hence is prone to timezone surprises as well. My code has a lot of timezone bugs!
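A small sketch of the naive-versus-aware distinction the talk is warning about, using only the standard library (zoneinfo is available from Python 3.9):

from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Ambiguous: naive values with no timezone attached
datetime.now()     # local wall-clock time of whatever machine this runs on
datetime.utcnow()  # UTC, but still naive, so nothing stops you mixing it up
date.today()       # which day it is depends on the server's timezone

# Unambiguous: timezone-aware values
now_utc = datetime.now(timezone.utc)
now_la = datetime.now(ZoneInfo("America/Los_Angeles"))
today_la = now_la.date()  # "today" relative to an explicit timezone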

Monday, 23. May 2022

Simon Willison

Bundling binary tools in Python wheels

I spotted a new (to me) pattern which I think is pretty interesting: projects are bundling compiled binary applications as part of their Python packaging wheels. I think it’s really neat. pip install ziglang Zig is a new programming language led by Andrew Kelley that sits somewhere near Rust: Wikipedia calls it an "imperative, general-purpose, statically typed, compiled system programming lan

I spotted a new (to me) pattern which I think is pretty interesting: projects are bundling compiled binary applications as part of their Python packaging wheels. I think it’s really neat.

pip install ziglang

Zig is a new programming language led by Andrew Kelley that sits somewhere near Rust: Wikipedia calls it an "imperative, general-purpose, statically typed, compiled system programming language".

One of its most notable features is that it bundles its own C/C++ compiler, as a “hermetic” compiler - it’s completely standalone, unaffected by the system that it is operating within. I learned about this usage of the word hermetic this morning from How Uber Uses Zig by Motiejus Jakštys.

The concept reminds me of Gregory Szorc's python-build-standalone, which provides redistributable Python builds and was key to getting my Datasette Desktop Electron application working with its own hermetic build of Python.

One of the options provided for installing Zig (and its bundled toolchain) is to use pip:

% pip install ziglang
...
% python -m ziglang cc --help
OVERVIEW: clang LLVM compiler

USAGE: zig [options] file...

OPTIONS:
  -###                        Print (but do not run) the commands to run for this compilation
  --amdgpu-arch-tool=<value>  Tool used for detecting AMD GPU arch in the system.
  ...

This means you can now pip install a full C compiler for your current platform!

The way this works is really simple. The ziglang package that you install has two key files: A zig binary (155MB on my system) containing the full Zig compiled implementation, and a __main__.py module containing the following:

import os, sys, subprocess

sys.exit(subprocess.call([
    os.path.join(os.path.dirname(__file__), "zig"),
    *sys.argv[1:]
]))

The package also bundles lib and doc folders with supporting files used by Zig itself, unrelated to Python.

The Zig project then bundles and ships eight different Python wheels targeting different platforms. Here's their code that does that, which lists the platforms that are supported:

for zig_platform, python_platform in {
    'windows-i386':   'win32',
    'windows-x86_64': 'win_amd64',
    'macos-x86_64':   'macosx_10_9_x86_64',
    'macos-aarch64':  'macosx_11_0_arm64',
    'linux-i386':     'manylinux_2_12_i686.manylinux2010_i686',
    'linux-x86_64':   'manylinux_2_12_x86_64.manylinux2010_x86_64',
    'linux-armv7a':   'manylinux_2_17_armv7l.manylinux2014_armv7l',
    'linux-aarch64':  'manylinux_2_17_aarch64.manylinux2014_aarch64',
}.items():
    # Build the wheel here...

They suggest that if you want to run their tools from a Python program you do so like this, to ensure your script can find the installed binary:

import sys, subprocess

subprocess.call([sys.executable, "-m", "ziglang"])

I find this whole approach pretty fascinating. I really love the idea that I can add a full C/C++ compiler as a dependency to any of my Python projects, and thanks to Python wheels I'll automatically get a binary executable compiled for my current platform.
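As a quick illustration of what that buys you, here's a minimal sketch that uses the subprocess pattern above to compile and run a local C file with the bundled toolchain (hello.c and the output name are hypothetical, and the final step assumes a Unix-like system):

import subprocess
import sys

# Compile hello.c with the pip-installed Zig toolchain; "zig cc" is the
# bundled clang front-end shown in the --help output above.
subprocess.check_call([
    sys.executable, "-m", "ziglang", "cc",
    "-o", "hello",
    "hello.c",
])

# Run the resulting binary (Unix-like systems)
subprocess.check_call(["./hello"])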

Playwright Python

I spotted another example of this pattern recently in Playwright Python. Playwright is Microsoft's open source browser automation and testing framework - a kind of modern Selenium. I used it recently to build my shot-scraper screenshot automation tool.

Playwright provides a full-featured API for controlling headless (and headful) browser instances, with implementations in Node.js, Python, Java and .NET.

I was intrigued as to how they had developed such a sophisticated API for four different platforms/languages at once, providing full equivalence for all of their features across all four.

So I dug around in their Python package (from pip install playwright) and found this:

77M ./venv/lib/python3.10/site-packages/playwright/driver/node

That's a full copy of the Node.js binary!

% ./venv/lib/python3.10/site-packages/playwright/driver/node --version
v16.13.0

Playwright Python works by providing a Python layer on top of the existing JavaScript API library. It runs a Node.js process which does the actual work; the Python library just communicates with the JavaScript for you.
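A minimal sketch of what that layering looks like from the Python side - every call below is relayed to the bundled Node.js driver process, which does the real work (URL and filename are illustrative):

from playwright.sync_api import sync_playwright

# Requires: pip install playwright, then python -m playwright install chromium
with sync_playwright() as p:
    browser = p.chromium.launch()          # proxied to the Node.js driver
    page = browser.new_page()
    page.goto("https://shot-scraper.datasette.io/")
    page.screenshot(path="docs-home.png")
    browser.close()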

As with Zig, the Playwright team offer seven pre-compiled wheels for different platforms. The list today is:

playwright-1.22.0-py3-none-win_amd64.whl
playwright-1.22.0-py3-none-win32.whl
playwright-1.22.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
playwright-1.22.0-py3-none-manylinux1_x86_64.whl
playwright-1.22.0-py3-none-macosx_11_0_universal2.whl
playwright-1.22.0-py3-none-macosx_11_0_arm64.whl
playwright-1.22.0-py3-none-macosx_10_13_x86_64.whl

I wish I could say "you can now pip install a browser!" but Playwright doesn't actually bundle the browsers themselves - you need to run python -m playwright install to download those separately.

Pretty fascinating example of the same pattern though!

pip install a SQLite database

It's not quite the same thing, since it's not packaging an executable, but the one project I have that fits this mould if you squint a little is my datasette-basemap plugin.

It's a Datasette plugin which bundles a 23MB SQLite database file containing OpenStreetMap tiles for the first seven zoom levels of their world map - 5,461 tile images total.

I built it so that people could use my datasette-cluster-map and datasette-leaflet-geojson plugins entirely standalone, without needing to load tiles from a central tile server.

You can play with a demo here. I wrote more about that project in Serving map tiles from SQLite with MBTiles and datasette-tiles. It's pretty fun to be able to run pip install datasette-basemap to install a full map of the world.
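Because MBTiles is just a SQLite schema, you can poke at the bundled database with nothing but the standard library. A minimal sketch, assuming the bundled file follows the standard MBTiles layout and using a hypothetical local path:

import sqlite3

# Path is hypothetical - locate the real file inside the installed
# datasette_basemap package. MBTiles exposes tiles via a "tiles" table.
conn = sqlite3.connect("basemap.mbtiles")
for zoom, count in conn.execute(
    "SELECT zoom_level, COUNT(*) FROM tiles "
    "GROUP BY zoom_level ORDER BY zoom_level"
):
    print(f"zoom {zoom}: {count} tiles")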

Seen any other interesting examples of pip install being (ab)used in this way? Ping them to me on Twitter.

Update: Paul O'Leary McCann points out that PyPI has a default 60MB size limit for packages, though it can be raised on a case-by-case basis. He wrote about this in Distributing Large Files with PyPI Packages.


Damien Bod

Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API

This article shows how to implement Azure AD Continuous Access (CA) in a Blazor application which uses a Web API. The API requires an Azure AD conditional access authentication context. In the example code, MFA is required to use the external API. If a user requests data from the API using the required access token […]

This article shows how to implement Azure AD Continuous Access (CA) in a Blazor application which uses a Web API. The API requires an Azure AD conditional access authentication context. In the example code, MFA is required to use the external API. If a user requests data from the API using the required access token without the required acr claim, an unauthorized response is returned with the missing claims. The Blazor application returns the claims challenge to the WASM application and the application authenticates again with the step up claims challenge. If the user has authenticated using MFA, the authentication is successful and the data from the API can be retrieved.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph).
Add a CA policy which uses the authentication context.
Implement the CA Azure AD authentication context authorization in the API.
Implement the Blazor backend to handle the CA unauthorized responses correctly.
Implement an authentication challenge using the claims challenge in the Blazor WASM.

Setup overview

Creating a Conditional Access Authentication Context

A continuous access (CA) authentication context was created using Microsoft Graph and a policy was created to use this. See the previous blog for details on setting this up.

External API setup

The external API is set up to validate Azure AD JWT Bearer access tokens and to validate that the required continuous access evaluation (CAE) policy is fulfilled. The CAE policy uses the authentication context required for this API. For the user to be authorized, the correct claims need to be presented in the access token.

[Authorize(Policy = "ValidateAccessTokenPolicy",
    AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
[ApiController]
[Route("[controller]")]
public class ApiForUserDataController : ControllerBase
{
    private readonly CAEClaimsChallengeService _caeClaimsChallengeService;

    public ApiForUserDataController(
        CAEClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // returns unauthorized exception with WWW-Authenticate header
        // if CAE claim missing in access token
        // handled in the caller client exception with challenge returned
        // if not ok
        _caeClaimsChallengeService
            .CheckForRequiredAuthContext(AuthContextId.C1, HttpContext);

        return new List<string>
        {
            "admin API CAE protected data 1",
            "admin API CAE protected data 2"
        };
    }
}

The CaeClaimsChallengeService class implements the CAE requirement to use the API. If the user access token has insufficient claims, an unauthorized response is returned to the application requesting data from the API. The WWW-Authenticate header is set with the correct data as defined in the OpenID Connect signals and events specification.

using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using System;
using System.Globalization;
using System.Linq;
using System.Net;
using System.Text;

namespace AdminCaeMfaRequiredApi;

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    /// <summary>
    /// Retrieves the acrsValue from database for the request method.
    /// Checks if the access token has an acrs claim with the acrsValue.
    /// If it does not exist, adds a WWW-Authenticate header and throws an UnauthorizedAccessException.
    /// </summary>
    public void CheckForRequiredAuthContext(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                if (IsClientCapableofClaimsChallenge(context))
                {
                    string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;

                    var base64str = Convert.ToBase64String(Encoding.UTF8.GetBytes(
                        "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authContextId + "\"}}}"));

                    context.Response.Headers.Append("WWW-Authenticate",
                        $"Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\", client_id=\"" + clientId + "\", error=\"insufficient_claims\", claims=\"" + base64str + "\", cc_type=\"authcontext\"");
                    context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;

                    string message = string.Format(CultureInfo.InvariantCulture,
                        "The presented access tokens had insufficient claims. Please request for claims requested in the WWW-Authentication header and try again.");

                    context.Response.WriteAsync(message);
                    context.Response.CompleteAsync();
                    throw new UnauthorizedAccessException(message);
                }
                else
                {
                    throw new UnauthorizedAccessException(
                        "The caller does not meet the authentication bar to carry out this operation. The service cannot allow this operation");
                }
            }
        }
    }

    /// <summary>
    /// Evaluates for the presence of the client capabilities claim (xms_cc)
    /// and accordingly returns a response if present.
    /// </summary>
    public bool IsClientCapableofClaimsChallenge(HttpContext context)
    {
        string clientCapabilitiesClaim = "xms_cc";

        if (context == null || context.User == null || context.User.Claims == null
            || !context.User.Claims.Any())
        {
            throw new ArgumentNullException(nameof(context),
                "No Usercontext is available to pick claims from");
        }

        var ccClaim = context.User.FindAll(clientCapabilitiesClaim)
            .FirstOrDefault(x => x.Type == "xms_cc");

        if (ccClaim != null && ccClaim.Value == "cp1")
        {
            return true;
        }

        return false;
    }
}

The API can be used by any application and user which presents the correct access token including the claims required by the CAE.

Using Continuous Access Evaluation (CAE) in an ASP.NET Core hosted Blazor application

The Blazor application is authenticated using MSAL and the backend for frontend (BFF) architecture. The Blazor application administrator page uses data from the CAE protected API. The Blazor ASP.NET Core hosted WASM application is protected using a MSAL confidential client.

services.AddMicrosoftIdentityWebAppAuthentication(
        Configuration, "AzureAd",
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true)
    .EnableTokenAcquisitionToCallDownstreamApi(
        new[] { "api://7c839e15-096b-4abb-a869-df9e6b34027c/access_as_user" })
    .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

The AdminApiClientService class is used to request data from the external API. The http client uses an Azure AD user delegated access token. If the API returns an unauthorized response, a WebApiMsalUiRequiredException is created with the WWW-Authenticate header payload.

public class AdminApiClientService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;

    public AdminApiClientService(
        ITokenAcquisition tokenAcquisition,
        IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
    }

    public async Task<IEnumerable<string>?> GetApiDataAsync()
    {
        var client = _clientFactory.CreateClient();

        var scopes = new List<string> { "api://7c839e15-096b-4abb-a869-df9e6b34027c/access_as_user" };
        var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(scopes);

        client.BaseAddress = new Uri("https://localhost:44395");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept
            .Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("ApiForUserData");
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var payload = await JsonSerializer.DeserializeAsync<List<string>>(stream);
            return payload;
        }

        throw new WebApiMsalUiRequiredException(
            $"Unexpected status code in the HttpResponseMessage: {response.StatusCode}.",
            response);
    }
}

The AdminApiCallsController implements the API used by the Blazor WASM client. This is protected using cookies. The controller returns an unauthorized response with the claims challenge if a WebApiMsalUiRequiredException is thrown.

public class AdminApiCallsController : ControllerBase
{
    private readonly AdminApiClientService _userApiClientService;

    public AdminApiCallsController(
        AdminApiClientService userApiClientService)
    {
        _userApiClientService = userApiClientService;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        try
        {
            return Ok(await _userApiClientService.GetApiDataAsync());
        }
        catch (WebApiMsalUiRequiredException hex)
        {
            var claimChallenge = WwwAuthenticateParameters
                .GetClaimChallengeFromResponseHeaders(hex.Headers);
            return Unauthorized(claimChallenge);
        }
    }
}

In the Blazor WASM client, an AuthorizedHandler is implemented to handle the unauthorized response from the API. If the “acr” claim is returned, the CAE step-up method is called.

public class AuthorizedHandler : DelegatingHandler
{
    private readonly HostAuthenticationStateProvider _authenticationStateProvider;

    public AuthorizedHandler(
        HostAuthenticationStateProvider authenticationStateProvider)
    {
        _authenticationStateProvider = authenticationStateProvider;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var authState = await _authenticationStateProvider
            .GetAuthenticationStateAsync();
        HttpResponseMessage responseMessage;

        if (authState.User.Identity != null && !authState.User.Identity.IsAuthenticated)
        {
            // if user is not authenticated,
            // immediately set response status to 401 Unauthorized
            responseMessage = new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
        else
        {
            responseMessage = await base.SendAsync(request, cancellationToken);
        }

        if (responseMessage.StatusCode == HttpStatusCode.Unauthorized)
        {
            var content = await responseMessage.Content.ReadAsStringAsync();

            // if server returned 401 Unauthorized, redirect to login page
            if (content != null && content.Contains("acr")) // CAE
            {
                _authenticationStateProvider.CaeStepUp(content);
            }
            else // standard
            {
                _authenticationStateProvider.SignIn();
            }
        }

        return responseMessage;
    }
}

The CaeStepUp method is implemented in the Blazor WASM client and builds a login request containing the claims challenge and the URL of the WASM client page to redirect back to.

public void CaeStepUp(string claimsChallenge, string? customReturnUrl = null)
{
    var returnUrl = customReturnUrl != null
        ? _navigation.ToAbsoluteUri(customReturnUrl).ToString()
        : null;

    var encodedReturnUrl = Uri.EscapeDataString(returnUrl ?? _navigation.Uri);

    var logInUrl = _navigation.ToAbsoluteUri(
        $"{LogInPath}?claimsChallenge={claimsChallenge}&returnUrl={encodedReturnUrl}");

    _navigation.NavigateTo(logInUrl.ToString(), true);
}

The account login sends a challenge to Azure AD to request the claims for the CAE.

[HttpGet("Login")] public ActionResult Login(string? returnUrl, string? claimsChallenge) { //var claims = "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/"; var properties = new AuthenticationProperties { RedirectUri = redirectUri }; if(claimsChallenge != null) { string jsonString = claimsChallenge.Replace("\\", "") .Trim(new char[1] { '"' }); properties.Items["claims"] = jsonString; } return Challenge(properties); }

The Microsoft.Identity.Web client package requires the cp1 value in the ClientCapabilities configuration.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]", "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]", "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]", "ClientSecret": "[Copy the client secret added to the app from the Azure portal]", "ClientCertificates": [ ], // the following is required to handle Continuous Access Evaluation challenges "ClientCapabilities": [ "cp1" ], "CallbackPath": "/signin-oidc" },

To test this, both the Blazor server application and the API can be started; the CAE claims are then required to use the API.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Sunday, 22. May 2022

Just a Theory

Feynman’s Genius

A while back I reviewed James Gleick's "Genius" on Goodreads. It died along with my Goodreads account. Now it's back!

Yours truly, in a 2018 review of Genius, by James Gleick:

Because our ways of understanding the universe are not the universe itself. They’re explanatory tools we develop, use, and sometimes discard in favor of newer, more effective tools. They’re imperfect, products of their times and cultures. But sometimes, in the face of an intractable problem, a maverick mind, cognizant of this reality, will take the radical step of discarding some part of the prevailing doctrine in an attempt to simplify the problem, or just to see what might happen. Feynman was such a mind, as Gleick shows again and again.

In case you’re wondering why I’m linking to my own blog: while this piece dates from 2018, I posted it only a few weeks ago. Originally I posted it on Goodreads, but when Goodreads unceremoniously deleted my account I thought it was gone for good. But two months later, Goodreads sent me my content. I was back in business! With my data recovered and added to my StoryGraph profile, I also took the opportunity to post the one review I had put some effort into on my own site. So here we are.

In other words, I’m more likely to post book reviews on Just a Theory from here on, but meanwhile, I’d be happy to be your friend on StoryGraph.

More about… Books James Gleick Richard Feynman Genius

Simon Willison

Paint Holding - reducing the flash of white on same-origin navigations

Paint Holding - reducing the flash of white on same-origin navigations I missed this when it happened back in 2019: Chrome (and apparently Safari too - not sure about Firefox) implemented a feature where rather than showing a blank screen in between page navigations Chrome "waits briefly before starting to paint, especially if the page is fast enough". As a result, fast loading multi-page applic

Paint Holding - reducing the flash of white on same-origin navigations

I missed this when it happened back in 2019: Chrome (and apparently Safari too - not sure about Firefox) implemented a feature where rather than showing a blank screen in between page navigations Chrome "waits briefly before starting to paint, especially if the page is fast enough". As a result, fast loading multi-page applications become almost indistinguishable from SPAs (single-page apps). It's a really neat feature, and now that I know how it works I realize that it explains why page navigations have felt a lot snappier to me over the past few years.


The balance has shifted away from SPAs

The balance has shifted away from SPAs "There’s a feeling in the air. A zeitgeist. SPAs are no longer the cool kids they once were 10 years ago." Nolan Lawson offers some opinions on why the pendulum seems to be swinging back in favour of server-side rendering over rendering every page entirely on the client. He argues that paint holding, back-forward caching and service workers have made the be

The balance has shifted away from SPAs

"There’s a feeling in the air. A zeitgeist. SPAs are no longer the cool kids they once were 10 years ago." Nolan Lawson offers some opinions on why the pendulum seems to be swinging back in favour of server-side rendering over rendering every page entirely on the client. He argues that paint holding, back-forward caching and service workers have made the benefits of SPAs over MPAs much less apparent. I'm inclined to agree.

Via Hacker News

Saturday, 21. May 2022

Simon Willison

GOV.UK Guidance: Documenting APIs

GOV.UK Guidance: Documenting APIs Characteristically excellent guide from GOV.UK on writing great API documentation. "Task-based guidance helps users complete the most common integration tasks, based on the user needs from your research." Via @jamietanna

GOV.UK Guidance: Documenting APIs

Characteristically excellent guide from GOV.UK on writing great API documentation. "Task-based guidance helps users complete the most common integration tasks, based on the user needs from your research."

Via @jamietanna

Friday, 20. May 2022

MyDigitalFootprint

Great leadership is about knowing what to optimise for & when.

I participated in a fantastic talk in May 2022 on “Ideological Polarization and Extremism in the 21st Century” led by Jonathan Leader Maynard who is a Lecturer in International Politics at King's College London.  The purpose here focuses on a thought I took from Jonathan's talk and his new book, “Ideology and Mass Killing: The Radicalized Security Politics of Genocides and Deadly Atrocities

I participated in a fantastic talk in May 2022 on “Ideological Polarization and Extremism in the 21st Century” led by Jonathan Leader Maynard, who is a Lecturer in International Politics at King's College London. The purpose here focuses on a thought I took from Jonathan's talk and his new book, “Ideology and Mass Killing: The Radicalized Security Politics of Genocides and Deadly Atrocities,” published by Oxford University Press.

When I started thinking about writing about Peak Paradox, it was driven by a desire to answer a core question I asked myself, individuals, boards and teams; “what are we/ you optimising for?” .  It has become my go-to question when I want to explore the complexity of decision making and team dynamics as the timeframe (tactical vs strategic) is determined by the person answering the question. Ultimately individuals in a team, which give the team its capabilities, are driven by different purposes and ideals, which means incentives work differently as each person optimises for different things in different ways in different time frames.

Nathalie Oestmann put up this post with the diagram below, talking about communication and everyone having the same message. My comment was that if you want stability this is good thinking; if you need change it will be less so, as it will build resistance. If you want everyone to have the same message then again this is helpful thinking, but if you need innovation, alignment is less useful. When we all optimise for one thing and do the same thing - what do we become? A simple view is that 91 lines in the final idea become one as we perpetuate the same, building higher walls with confirmative incentives, feedback loops and echo chambers to ensure that the same is defensible.

What we become when we optimise for one thing was also set out by Jonathan in his talk. He effectively said (to me) that if you optimise for one thing, you are an extremist. You have decided that this one thing is (or very few things are) more important than anything else. We might *not* like to think of ourselves as extremists, but it is in fact what we are when we optimise for a single goal. Nathalie's post confirms that if you have enough people optimising for one thing, you have a tribe, movement, power, and voice. The very action of limiting optimisation from many to single creates bias and framing.

Extremism can be seen as a single optimisation when using Peak Paradox 

Bill George wrote in support of the 24-hour rule in this INC article: essentially, whatever happens, good or bad, you have 24 hours to celebrate or stew. Tomorrow, it's a new day. It's a good way to stay focused on the present. The problem is that this optimising appears good at one level, but for improving leadership decision making and judgement skills, moving on without much stewing or celebrating removes critical learning. Knowing when to use the 24-hour rule is perfect; applying it as a blanket rule is probably less useful. Leadership needs to focus on tomorrow's issues based on yesterday's learning, whilst ensuring it survives today to get to tomorrow.

So much about what we love (embrace/ take one/ follow) boils ideas down to a single optimisation: diets, games, books, movies, etc. Is it that we find living with complexity and optimising for more than one thing difficult or exhausting, or that one thing is so easy that, given our energy-retention preference, there is a natural bias to simplification? Religion and faith, politics, science, maths, friends, family and life in general, however, require us to optimise for opposing ideas at the same time, creating tensions and compromises.

Implication to boards, leadership and management

For way too long, the mantra for the purpose of a business was to “maximise the wealth for the shareholders.” This was the singular objective and optimisation of the board, and the instruction to management was to “maximise profits.” We have come a little way since then, but as I have written before, the single purity of Jensen and Meckling's work in 1976 left a legacy of incentives and education to optimise for one thing, ignoring other thinking such as Peter Drucker’s 1973 insight that the only valid purpose of a firm is to create a customer, which itself has problems.

We have shifted to “optimising for stakeholders”, but is that really a shift on the Peak Paradox framework or a dumbing down of one purpose, one idea, one vision? A movement from the simple and singular to a nuanced paradoxical mess? Yes, it is a move from the purity of “shareholder primacy”, but does it really engage in optimising for the other peaks on the map? What does become evident is that when we move from the purity of maximising shareholder value, decision making becomes more complex. I am not convinced that optimising for all stakeholders really embraces the requirement to optimise for sustainability; it is a washed-out north star where we are likely to deliver neither.

Here is the issue for the leadership.  

Optimising for one thing is about being extreme.  When we realise we are being extreme it is not a comfortable place to be and it naturally drives out creativity, innovation and change.  

The pandemic made us focus on one issue, and it showed us that when we, irrespective of industry, geography and resources, have to focus on just one thing, we can make the amazing happen. Often cited is the 10 years of progress in 10 months, especially in digital and changing work patterns. However, we did not consider the tensions, compromises or unintended consequences; we just acted. Doing one thing, and being able to do just one thing, is extreme.

Extreme is at the edges of the Peak Paradox model. When we move from optimising for one thing to a few things, we struggle to determine which is the priority. This is the journey from the edges to the centre of the Peak Paradox model. When we have to optimise for many things, we are at Peak Paradox. We know that optimising for many things is tiring, and the volume of data/information we need for decision making increases by the square of the number of factors you want to optimise for. It is here that we find complexity, but also realise that we cannot optimise or drive for anything. Whilst living with complexity is a critical skill for senior teams, it is here that we find that we cannot optimise and we appear to be a ship adrift in a storm, being pulled everywhere, with no direction or clarity of purpose. A true skill of great leadership is knowing what to optimise for and when. Given the ebbs and flows of a market, there is a time to dwell and live with complexity, optimising for many, but knowing when to draw out, provide clarity, and optimise for one thing is critical.

The questions we are left to reflect on are:

how far from a single optimisation do your skills enable you to move?

how far from a single optimisation do your team's collective skills enable you to collectively move?

when optimising for conflicting purposes, how do you make decisions?

when optimising for conflicting purposes, how does the team make collective decisions?

When we finally master finding clarity to optimise for one thing, and equally living with the tensions, conflicts and compromises of optimising for many things, we move from average to outperforming in terms of delivery, and from decision making to judgment.

Great leaders and teams appear to be able to exist equally in the optimisation for both singular and complex purposes at the same time.

This viewpoint suggests that optimisation for a singular focus, such as a three-word mission and purpose statement that provides perfect clarity of purpose, is actually only half the capability that modern leadership needs to demonstrate.


Wednesday, 18. May 2022

Simon Willison

Comby

Comby Describes itself as "Structural search and replace for any language". Lets you execute search and replace patterns that look a little bit like simplified regular expressions, but with some deep OCaml-powered magic that makes them aware of comment, string and nested parenthesis rules for different languages. This means you can use it to construct scripts that automate common refactoring or

Comby

Describes itself as "Structural search and replace for any language". Lets you execute search and replace patterns that look a little bit like simplified regular expressions, but with some deep OCaml-powered magic that makes them aware of comment, string and nested parenthesis rules for different languages. This means you can use it to construct scripts that automate common refactoring or code upgrade tasks.

Via Hacker News

Tuesday, 17. May 2022

Doc Searls Weblog

A thermal theory of basketball

Chemistry is a good metaphor for how teams work—especially when times get tough, such as in the playoffs happening in the NBA right now. Think about it. Every element has a melting point: a temperature above which solid turns liquid. Basketball teams do too, only that temperature changes from game to game, opponent to opponent, and […]

Chemistry is a good metaphor for how teams work—especially when times get tough, such as in the playoffs happening in the NBA right now.

Think about it. Every element has a melting point: a temperature above which solid turns liquid. Basketball teams do too, only that temperature changes from game to game, opponent to opponent, and situation to situation. Every team is a collection of its own human compounds of many elements: physical skills and talents, conditioning, experience, communication skills, emotional and mental states, beliefs, and much else.

Sometimes one team comes in pre-melted, with no chance of winning. Bad teams start with a low melting point, arriving in liquid form and spilling all over the floor under heat and pressure from better teams.

Sometimes both teams might as well be throwing water balloons at the hoop.

Sometimes both teams are great, neither melts, and you get an overtime outcome that’s whatever the score said when the time finally ran out. Still, one loser and one winner. After all, every game has a loser, and half the league loses every round. Whole conferences and leagues average .500. That’s their melting point: half solid, half liquid.

Yesterday we saw two meltdowns, neither of which was expected and one of which was a complete surprise.

First, the Milwaukee Bucks melted under the defensive and scoring pressures of the Boston Celtics. There was nothing shameful about it, though. The Celtics just ran away with the game. It happens. Still, you could see the moment the melting started. It was near the end of the first half. The Celtics’ offense sucked, yet they were still close. Then they made a drive to lead going into halftime. After that, it became increasingly and obviously futile to expect the Bucks to rally, especially when it was clear that Giannis Antetokounmpo, the best player in the world, was clearly less solid than usual. The team melted around him while the Celtics rained down threes.

To be fair, the Celtics also melted three times in the series, most dramatically at the end of game five, on their home floor. But Marcus Smart, who was humiliated by a block and a steal in the closing seconds of a game the Celtics had led almost all the way, didn’t melt. In the next two games, he was more solid than ever. So was the team. And they won—this round, at least. Against the Miami Heat? We’ll see.

Right after that game, the Phoenix Suns, by far the best team in the league through the regular season, didn’t so much play the Dallas Mavericks as submit to them. Utterly.

In chemical terms, the Suns showed up in liquid form and turned straight into gas. As Arizona Sports put it, “We just witnessed one of the greatest collapses in the history of the NBA.” No shit. Epic. Nobody on the team will ever live this one down. It’s on their permanent record. Straight A’s through the season, then a big red F.

Talk about losses: a mountain of bets on the Suns also turned to vapor yesterday.

So, what happened? I say chemistry.

Maybe it was nothing more than Luka Dončić catching fire and vaporizing the whole Suns team. Whatever, it was awful to watch, especially for Suns fans. Hell, they melted too. Booing your team when it needs your support couldn’t have helped, understandable though it was.

Applying the basketball-as-chemistry theory, I expect the Celtics to go all the way. They may melt a bit in a game or few, but they’re more hardened than the Heat, which comes from having defeated two teams—the Atlanta Hawks and the Philadelphia 76ers—with relatively low melting points. And I think both the Mavs and the Warriors have lower melting points than either the Celtics or the Heat.

But we’ll see.

Through the final two rounds, look at each game as a chemistry experiment. See how well the theory works.

 

 

Monday, 16. May 2022

Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing IETF Last Call Comments

Kristina Yasuda and I have published a new JWK Thumbprint URI draft that addresses the IETF Last Call comments received. Changes made were: Clarified the requirement to use registered hash algorithm identifiers. Acknowledged IETF Last Call reviewers. The specification is available at: https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-02.html

Kristina Yasuda and I have published a new JWK Thumbprint URI draft that addresses the IETF Last Call comments received. Changes made were:

Clarified the requirement to use registered hash algorithm identifiers. Acknowledged IETF Last Call reviewers.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-02.html
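For context, the draft defines a URI form for RFC 7638 JWK thumbprints in which the hash algorithm is named explicitly. A rough sketch of the computation, assuming the sha-256 identifier and an illustrative EC key:

import base64
import hashlib
import json

# Illustrative EC P-256 public key (values are arbitrary examples)
jwk = {
    "kty": "EC",
    "crv": "P-256",
    "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
    "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
}

# RFC 7638: keep only the required members, serialize them in lexicographic
# order with no whitespace, hash with SHA-256, then base64url-encode.
required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n"), "OKP": ("crv", "kty", "x")}
canonical = json.dumps(
    {k: jwk[k] for k in required[jwk["kty"]]},
    separators=(",", ":"),
    sort_keys=True,
)
thumbprint = base64.urlsafe_b64encode(
    hashlib.sha256(canonical.encode()).digest()
).rstrip(b"=").decode()

print(f"urn:ietf:params:oauth:jwk-thumbprint:sha-256:{thumbprint}")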

Phil Windley's Technometria

Decentralizing Agendas and Decisions

Summary: Allowing groups to self-organize, set their own agendas, and decide without central guidance or planning requires being vulnerable and trusting. But the results are worth the risk. Last month was the 34th Internet Identity Workshop (IIW). After doing the last four virtually, it was spectacular to be back together with everyone at the Computer History Museum. You could almo

Summary: Allowing groups to self-organize, set their own agendas, and decide without central guidance or planning requires being vulnerable and trusting. But the results are worth the risk.

Last month was the 34th Internet Identity Workshop (IIW). After doing the last four virtually, it was spectacular to be back together with everyone at the Computer History Museum. You could almost feel the excitement in the air as people met with old friends and made new ones. Rich the barista was back, along with Burrito Wednesday. I loved watching people in small groups having intense conversations over meals, drinks, and snacks.

Also back was IIW's trademark open space organization. Open space conferences are workshops that don't have pre-built agendas. Open space is like an unconference with a formal facilitator trained in using open space technology. IIW is self-organizing, with participants setting the agenda every morning before we start. IIW has used open space for part or all of the workshop since the second workshop in 2006. Early on, Kaliya Young, one of my co-founders (along with Doc Searls), convinced me to try open space as a way of letting participants shape the agenda and direction. For an event this large (300-400 participants), you need professional facilitation. Heidi Saul has been doing that for us for years. The results speak for themselves. IIW has nurtured many of the ideas, protocols, and trends that make up modern identity systems and thinking.

Photos: Welcome to IIW 34; mDL Discussion at IIW 34; Agenda Wall at IIW 34 (Day 1)

Last month was the first in-person CTO Breakfast since early 2020. CTO Breakfast is a monthly gathering of technologists in the Provo-Salt Lake City area that I've convened for almost 20 years. Like IIW, CTO Breakfast has no pre-planned agenda. The discussion is freewheeling and active. We have just two rules: (1) no politics and (2) one conversation at a time. Topics from the last meeting included LoRaWAN, Helium network, IoT, hiring entry-level software developers, Carrier-Grade NATs, and commercial real estate. The conversation goes where it goes, but is always interesting and worthwhile.

When we built the University API at BYU, we used decentralized decision making to make key architecture, governance, and implementation decisions. Rather than a few architects deciding everything, we had many meetings, with dozens of people in each, over the course of a year hammering out the design.

What all of these have in common is decentralized decision making by a group of people that results in learning, consensus, and, if all goes well, action. The conversation at IIW, CTO Breakfast, and BYU isn't the result of a few smart people deciding what the group needed to hear and then arranging meetings to push it at them. Instead, the group decides. Empowering the group to make decisions about the very nature and direction of the conversation requires trust, and trust always implies vulnerability. But I've become convinced that it's really the best way to achieve real consensus and make progress in heterogeneous groups. Thanks Kaliya!

Tags: decentralization iiw cto+breakfast byu university+api


Damien Bod

Using multiple Azure B2C user flows from ASP.NET Core

This article shows how to use multiple Azure B2C user flows from a single ASP.NET Core application. Microsoft.Identity.Web is used to implement the authentication in the client. This is not so easy to implement with multiple schemes as the user flow policy is used in most client URLs and the Microsoft.Identity.Web package overrides a lot […]

This article shows how to use multiple Azure B2C user flows from a single ASP.NET Core application. Microsoft.Identity.Web is used to implement the authentication in the client. This is not so easy to implement with multiple schemes, as the user flow policy is used in most client URLs and the Microsoft.Identity.Web package overrides a lot of the default settings. I solved this by implementing an account controller to handle the Azure B2C signup user flow initial request and set the Azure B2C policy. It can be useful to split the user flows in the client application if the user of the application needs to be guided better.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Azure B2C user flows can be implemented as simple user flows. I used a sign-up flow and a combined sign-in/sign-up flow.

The AddMicrosoftIdentityWebAppAuthentication method is used to implement a standard Azure B2C client. There is no need to implement a second scheme and override the default settings of the Microsoft.Identity.Web client, because we use a controller to select the flow.

string[]? initialScopes = Configuration.GetValue<string>(
    "UserApiOne:ScopeForAccessToken")?.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddInMemoryTokenCaches();

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MsGraphClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

The AzureAdB2C app settings configure the sign-in, sign-up flow. The SignUpSignInPolicyId setting is used to configure the default user flow policy.

"AzureAdB2C": { "Instance": "https://b2cdamienbod.b2clogin.com", "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1", "Domain": "b2cdamienbod.onmicrosoft.com", "SignUpSignInPolicyId": "B2C_1_signup_signin", "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/signout-callback-oidc", // "ClientSecret": "--use-secrets--" },

The AccountSignUpController is used to set the policy of the flow we would like to use. The SignUpPolicy method just challenges the Azure B2C OpenID Connect server with the correct policy.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace Microsoft.Identity.Web.UI.Areas.MicrosoftIdentity.Controllers;

[AllowAnonymous]
[Route("MicrosoftIdentity/[controller]/[action]")]
public class AccountSignUpController : Controller
{
    [HttpGet("{scheme?}")]
    public IActionResult SignUpPolicy(
        [FromRoute] string scheme,
        [FromQuery] string redirectUri)
    {
        scheme ??= OpenIdConnectDefaults.AuthenticationScheme;

        string redirect;
        if (!string.IsNullOrEmpty(redirectUri) && Url.IsLocalUrl(redirectUri))
        {
            redirect = redirectUri;
        }
        else
        {
            redirect = Url.Content("~/")!;
        }

        var properties = new AuthenticationProperties { RedirectUri = redirect };
        properties.Items[Constants.Policy] = "B2C_1_signup";
        return Challenge(properties, scheme);
    }
}

The Razor page renders a link to the new controller, which challenges the OIDC server with the correct policy.

<li class="nav-item"> <a class="nav-link text-dark" href="/MicrosoftIdentity/AccountSignUp/SignUpPolicy">Sign Up</a> </li>

With this approach, an ASP.NET Core application can be extended with multiple user flows and the UI can be improved for the end user as required.

Links

https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-local

https://docs.microsoft.com/en-us/azure/active-directory/

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/azure-ad-b2c

https://github.com/azure-ad-b2c/azureadb2ccommunity.io

https://github.com/azure-ad-b2c/samples

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://docs.microsoft.com/en-us/graph/api/user-post-users?view=graph-rest-1.0&tabs=csharp https://docs.microsoft.com/en-us/graph/api/invitation-post?view=graph-rest-1.0&tabs=csharp

Sunday, 15. May 2022

Werdmüller on Medium

A quiet morning in America

A disconnect that gets under your skin Continue reading on Human Parts »

A disconnect that gets under your skin

Continue reading on Human Parts »

Friday, 13. May 2022

Jon Udell

Appreciating “Just Have a Think”

Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about. Solar Wind and Wave. Can this ocean hybrid platform nail all three? New energy storage tech breathing life and … Continue reading Appreciating “Just Have a Think”

Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about.

Solar Wind and Wave. Can this ocean hybrid platform nail all three?

New energy storage tech breathing life and jobs back into disused coal power plants

Agrivoltaics. An economic lifeline for American farmers?

Solar PV film roll. Revolutionary new production technology

All of Dave’s presentations are carefully researched and presented. A detail that has long fascinated me: how the show displays source material. Dave often cites IPCC reports and other sources that are, in raw form, PDF files. He spices up these citations with some impressive animated renderings. Here’s one from the most recent episode.

The progressive rendering of the chart in this example is an even fancier effect than I’ve seen before, and it prompted me to track down the original source. In that clip Dave cites IRENA, the International Renewable Energy Agency, so I visited their site, looked for the cited report, and found it on page 8 of World Energy Transitions Outlook 2022. That link might or might not take you there directly; if not, you can scroll to page 8, where you’ll find the chart that’s been animated in the video.

The graphical finesse of Just Have a Think is only icing on the cake. The show reports a constant stream of innovations that collectively give me hope we might accomplish the transition and avoid worst-case scenarios. But still, I wonder. That’s just a pie chart in a PDF file. How did it become the progressive rendering that appears in the video?

In any case, and much more importantly: Dave, thanks for the great work you’re doing!

Thursday, 12. May 2022

Doc Searls Weblog

Lens vs. Camera

I did a lot of shooting recently with a rented Sony FE 70-200mm F2.8 GM OSS II lens, mounted on my 2013-vintage Sony a7r camera. One result was the hummingbird above, which you’ll find among the collections here and here. Also, here’s a toddler… …and a grandma (right after she starred as the oldest alumnus at a […]

I did a lot of shooting recently with a rented Sony FE 70-200mm F2.8 GM OSS II lens, mounted on my 2013-vintage Sony a7r camera. One result was the hummingbird above, which you’ll find among the collections here and here. Also, here’s a toddler…

…and a grandma (right after she starred as the oldest alumnus at a high school reunion where I took hundreds of other shots):

This lens is new, sharp, versatile, earns good reviews (e.g. here) and is so loved already that it’s hard to get, despite the price: more than $3k after taxes. And, though it’s very compact and light (2.3 lbs) for what it is and does, the thing is big:

So I ordered one, which Amazon won’t charge me for before it ships, on May 23, for delivery on the 24th.

But I’m having second, third, and fourth thoughts, which I just decided to share here.

First, I’m not a fine art photographer. I’m an amateur who mostly shoots people and subjects that interest me, such as what I can see out airplane windows, or choose to document for my own odd purposes—such as archiving photos of broadcast towers and antennas, most of which will fall out of use over the next two decades, after being obsolesced by the Internet, wi-fi and 5G.

All the photos I publish are Creative Commons licensed to encourage use by others, which is why more than 1600 of them have found their way into Wikimedia Commons. Some multiple of those accompany entries in Wikipedia. This one, for example, is in 9 different Wikipedia entries in various languages:

Here is the original, shot with a tiny Canon pocket camera I pulled from the pocket of my ski jacket.

In other words, maybe I’ll be better off with a versatile all-in-one camera that will do much of what this giant zoom does, plus much more.

After much online research, I’ve kind of settled on considering the Sony Cyber-shot DSC-RX10 IV. It has a smaller sensor than I’d like, but it is exceptionally versatile and gets great reviews. While my Sony a7r with its outstanding 24-105mm f/4 FE G OSS lens is versatile as well, and light for a full-frame camera, I really need a long lens for a lot of the stuff I shoot. And I suspect this “bridge” camera will do the job.

So here is the choice:

Let the order stand, and pay $3k for a fully fabulous 70-200 zoom that I’m sure to love but will be too big to haul around in many of the settings where I’ll be shooting. Or cancel that order, and instead pay half that for the DSC-RX10 IV—and get it in time for my trip to Hawaii next week.

[Later…] I decided to let the order stand. Two reasons. First, I’ve shot a couple thousand photos so far with the 70-200 zoom, and find it a near-flawless instrument that I enjoy playing. One reason I do is that it’s as close to uncompromising as a lens can be—especially a zoom, which by design involves many compromises. Second, I’ve never played with the DSC-RX10 IV, and that’s kind of a prerequisite. I also know that one of its compromises I won’t be able to overcome is the size of its sensor. I know megapixels are a bit of a head trip, but they do matter, and 36.4 Mpx vs 20.1 “effective” Mpx is non-trivial.

Additionally, I may choose in the long run to also get an a7iv camera, so my two lenses will have two bodies. We’ll see.

 

 

Wednesday, 11. May 2022

Heather Vescent

Six insights about the Future of Biometrics

Photo by v2osk on Unsplash Biometrics are seen as a magic bullet to uniquely identify humans — but it is still new technology. Companies can experience growing pains and backlash due to incomplete thinking prior to implementation. Attackers do the hard work of finding every crack and vulnerability. Activists point out civil liberty and social biases. This shows how our current solutions are no
Photo by v2osk on Unsplash

Biometrics are seen as a magic bullet to uniquely identify humans — but it is still new technology. Companies can experience growing pains and backlash due to incomplete thinking prior to implementation. Attackers do the hard work of finding every crack and vulnerability. Activists point out civil liberty and social biases. This shows how our current solutions are not always secure or equitable. In the end, each criminal, activist, and product misstep inspires innovation and new solutions.

1. The benefit of biometrics is they are unique and can be trusted to be unique. It’s not impossible, but it is very hard for someone to spoof a biometric. Using a biometric raises the bar a bit, and makes that system less attractive to target — up to a point.

2. Any data is only as secure as the system in which it is stored. Sometimes these systems can be easily penetrated due to poor identity and access management protocols. This has nothing to do with the security of biometrics — that has to do with the security of stored data.

3. Apple FaceID is unbelievably convenient! Once I set up FaceID to unlock my phone, I can configure it to unlock other apps — like banking apps. Rather than typing in or selecting my password from a password manager — I just look at my phone! This makes it easy for me to access my sensitive data. From a user experience perspective, this is wonderful, but I have to trust Apple’s locked down tech stack.

4. The first versions of new technologies will still have issues. All new technology is antifragile, and thus will have more bugs. As the technology is used, the bugs are discovered (thanks hackers!) and fixed, and the system becomes more secure over time. Attackers will move on to more vulnerable targets.

5. Solve for every corner case and you’ll have a rigid yet secure system that probably doesn’t consider the human interface very well. Leave out a corner case and you might be leaving an open door for attack. Solving for the “right” situation is a balance. Which means, either extreme can be harmful to different audiences.

6. Learn from others, share and collaborate on what you have learned. Everyone has to work together to move the industry forward.

Curious to learn more insights about the Future of Digital Identity? I’ll be joining three speakers on the Future of Digital Identity Verification panel.

Tuesday, 10. May 2022

@_Nat Zone

Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World

Keynote Panel at the Euro… The post Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World first appeared on @_Nat Zone.

Keynote Panel at the European Identity and Cloud Conference, Friday, May 13, 2022 09:20—09:40 Location: BCC Berlin, C01

This announcement comes a bit late, but this Friday (May 13) I will be on a keynote panel at the European Identity & Cloud Conference in Berlin. The English title is “Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World”.

The link to the details is https://www.kuppingercole.com/sessions/5092/1. Below is the session description.

The internet was built without an identity layer, leaving authentication, authorization, privacy, and access up to websites and applications. Username and password are still the dominant paradigm and, more importantly, users have no control over their personally identifiable information. The risks of data misuse, hacking, and manipulation have become a key challenge, and in the era of web3 and its core function of transferring value, new approaches are needed. Will decentralized, DLT-based identity finally be the solution that enables DeFi, NFTs, and DAOs? Join this great keynote panel for a lively debate of the topic.

(Source) https://www.kuppingercole.com/sessions/5092/1

Panelists

André Durand, CEO, Ping Identity
Martin Kuppinger, CEO, KuppingerCole
Nat Sakimura, Chairman, OpenID Foundation
Drs. Jacoba C. Sieders, Advisory board member, EU SSIF-lab

The post Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World first appeared on @_Nat Zone.

Doc Searls Weblog

Laws of Identity

When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of […]

When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole‘s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it will help to know where Kim was coming from in the first place. It wasn’t just his work as the top architect for identity at Microsoft (where he landed when his company was acquired). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world where what we call identity is how we are known to databases. Kim believed the former should guide the latter, and that nothing like that had happened yet, but that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually does so with an understanding of how that information might be used.

2. Minimum Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: because the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card. It’s just not something they need to know.

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

If this law had been applied way back when Kim wrote it, we wouldn’t have the massive privacy losses that have become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well.

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are also what Kim calls “beacons”: rather than having something to hide, they want to be known. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. Each of the first three laws is “unidirectional.”

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses learnings from Microsoft’s failed experiment with Passport and a project called “Hailstorm.” The idea with both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

Monday, 09. May 2022

Damien Bod

Use a gateway service for a software UI with micro services architecture?

In this post, I would like to look at some of the advantages and disadvantages of using an implemented gateway service to process all UI API requests, optimize the business and remove some of the complexity from the user interface application. Setup with UI using APIs directly Modern public facing applications APIs used by UI […]

In this post, I would like to look at some of the advantages and disadvantages of using an implemented gateway service to process all UI API requests, optimize the APIs for the business, and remove some of the complexity from the user interface application.

Setup with UI using APIs directly

The APIs used by modern public-facing UI applications are mostly protected using user-delegated access tokens, and these APIs are in the public zone because the SPA application, or whatever client you use, needs to request data. The API data is merged directly in the client. Each API can be secured with a separate scope, and the client application would need to manage multiple access tokens. Each API would need to allow CORS for an SPA client. (You could use something like Azure Gateway as a workaround for this.) If a server-rendered application was used to access the API, this is not required. With this approach, multiple APIs are exposed in the public zone.

Characteristics of this solution

UI responsible for joining API calls for different views

Multiple UIs per module, API, micro service possible

Setup with gateway implementation (BFF or API implementation)

All applications still need to implement HTTPS, and the gateway should not terminate the HTTPS. All applications require HTTPS-only access, and HTTP should not be used, in development as well. This way, the full set of security headers can be applied in every service.

Many micro service implementations do not optimize the APIs for the UI, which is not really a good idea if the APIs are used directly in a UI. By implementing a gateway service, the micro service APIs can be optimized for the UI through the gateway. This makes it possible to remove lots of unnecessary or unoptimized API calls and reduces the amount of logic required in the UI to implement the required business.

The application gateway should not be used to apply missing security headers to new APIs or APIs created as part of your system. The security headers should be implemented where the responses are created. If using a legacy application where the security headers are missing and cannot be implemented at the source, then using the gateway in this way is good.

The application gateway could be implemented as a reverse proxy to delegate the requests onto further applications and APIs. This is good because the further APIs do not need to be exposed to the internet; the attack surface is reduced, as is the number of APIs in the public zone. Less is more.
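As a minimal sketch of this reverse-proxy pattern, the following assumes YARP (the Yarp.ReverseProxy package) as the proxy component inside an ASP.NET Core gateway; the configuration section name and downstream addresses are placeholders, not values from this article.

// Program.cs of a hypothetical ASP.NET Core gateway using YARP as the reverse proxy.
// The "ReverseProxy" section in appsettings.json defines routes (e.g. /api/{**catch-all})
// and clusters pointing at the private downstream APIs, which are not exposed to the internet.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

// Forward matching requests to the private APIs behind the gateway.
app.MapReverseProxy();

app.Run();

Authentication, user authorization and any UI-specific aggregation would still live in the gateway itself; the proxy only handles the forwarding.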

Characteristics of this solution

API calls can be optimized for the UI

Security can be improved by reducing the public attack surface

User authorization could be moved to the gateway service

UI and gateway could be deployed together with improved security (BFF security arch and single security client)

Gateway single point of failure

Comparing the security of the Gateway, no Gateway solutions

Improved application security with Gateway

By using a gateway, the attack surface can be reduced in the public zone. The gateway API can use a user-delegated access token (or a cookie if using a server-rendered application or BFF architecture). All other applications can use application tokens (OAuth client credentials flow), certificate authentication, or Azure managed identities to protect the APIs. It is important that the APIs are secured and that the public user access token does not work with the private APIs.

Easier to implement system tests for services

It is hard to implement system tests for APIs using user access tokens. These tokens can only be created using a UI test tool; the correct flow requires user interaction to authenticate. Creating a back door with some type of application access is not a good idea, as this usually gets deployed in some form or other. With a gateway, the gateway itself is still hard to test, but system tests for all the private APIs can be created with little effort.
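To illustrate, here is a minimal sketch of such a system test against a private API that accepts application tokens, assuming MSAL (Microsoft.Identity.Client) and xunit; the client ID, secret, scope and URLs are placeholders and not values from this article.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Identity.Client;
using Xunit;

public class PrivateApiSystemTests
{
    [Fact]
    public async Task GetData_WithClientCredentialsToken_ReturnsSuccess()
    {
        // Acquire an app-only token using the OAuth client credentials flow,
        // so no user interaction is required in the test pipeline.
        var app = ConfidentialClientApplicationBuilder
            .Create("test-client-id")                                         // placeholder
            .WithClientSecret("test-client-secret")                           // placeholder, from CI secrets
            .WithAuthority("https://login.microsoftonline.com/your-tenant-id") // placeholder
            .Build();

        var result = await app
            .AcquireTokenForClient(new[] { "api://private-api/.default" })    // placeholder scope
            .ExecuteAsync();

        using var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);

        var response = await client.GetAsync("/api/data"); // placeholder endpoint

        Assert.True(response.IsSuccessStatusCode);
    }
}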

CORS, same domain, same site

Only the public-facing gateway API needs to allow CORS if it is used from an SPA application. All the private APIs can disable CORS completely, and all can use the full set of security headers. If cookies are used, SameSite and same-domain can be enforced on the single gateway/UI application.
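As a small sketch of this, a CORS policy would only be registered in the gateway application; the SPA origin below is a placeholder.

// Gateway Program.cs only; the private APIs register no CORS policy at all.
builder.Services.AddCors(options =>
{
    options.AddPolicy("spa", policy => policy
        .WithOrigins("https://spa.example.com") // placeholder SPA origin
        .AllowAnyHeader()
        .AllowAnyMethod());
});

// ...

app.UseCors("spa"); // before mapping the endpoints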

User authorization

Authorization for user identities can be reduced, and most of the implementation would be rolled out in the gateway as well as being used in the user interface. The private APIs would need less complicated authorization, as only the trusted application requests must be validated, not user authorization. The system does not have to be implemented this way; different security setups are possible, such as the OBO flow for the private APIs.

More choice for authorization in the private APIs.

As the private APIs use trusted app-to-app security, managed identities or certificate authentication could be used as well as OAuth client credentials. This allows for more choice, but the APIs must still implement security in the application and not rely on the network security alone. I usually use the OAuth CC flow with a scope claim or role claim validation. The access tokens used for access must be validated. I never mix user access tokens and application tokens in a single API.
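One possible way to enforce this in a private API is an authorization policy that only accepts app-only tokens; a minimal sketch follows, and the app role name is a placeholder rather than anything defined in this post.

// Private API Program.cs: only accept application tokens from the client credentials flow.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AppOnly", policy =>
    {
        // App roles granted to the calling application show up in the "roles" claim.
        policy.RequireClaim("roles", "service-access"); // placeholder app role

        // Delegated user tokens carry an "scp" claim; app-only Azure AD tokens do not.
        policy.RequireAssertion(context =>
            !context.User.HasClaim(claim => claim.Type == "scp"));
    });
});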

Notes

KISS is probably the most important way of thinking when producing software. Most solutions can be implemented as a single monolith application, and the end client gets what is required with the least amount of effort. Micro services are normally not required for most solutions. Once using micro services, it is important to implement them correctly and with a solid security architecture, which requires effort and planning. Do not be afraid to merge services if you see that they are tightly coupled and depend on each other. Define the public zone and the private zone and where each micro service belongs. Implement a hosting environment which allows for a good application security implementation as well as a good network infrastructure. HTTP should not be used anywhere.

Thursday, 05. May 2022

Hans Zandbelt

A WebAuthn Apache module?

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2. Well, the answer is: “no”, and the rationale for that can be found below. At … Continue reading →

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2. Well, the answer is: “no”, and the rationale for that can be found below.

At first glance it seems very useful to have an Apache server that authenticates users using a state-of-the-art authentication protocol that is implemented in modern browsers and platforms. Even more so, that Apache server could function as a reverse proxy in front of any type of resources you want to protect. This will allow for those resources to be agnostic to the type of authentication and its implementation, a pattern that I’ve been promoting for the last decade or so.

But in reality the functionality that you are looking for already exists…

The point is that deploying WebAuthn means that you’ll not just be authenticating users, you’ll also have to take care of signing up new users and managing credentials for those users. To that end, you’ll need to facilitate an onboarding process and manage a user database. That type of functionality is best implemented in a server-type piece of software (let’s call it “WebAuthn Provider”) written in a high-level programming language, rather than embedding it in a C-based Apache module. So in reality it means that any sensible WebAuthn/FIDO2 Apache module would rely on an externally running “Provider” software component to offload the heavy-lifting of onboarding and managing users and credentials. Moreover, just imagine the security sensitivity of such a software component.

Well, all of the functionality described above is exactly something that your average existing Single Sign On Identity Provider software was designed to do from the very start! And even more so, those Identity Providers typically already support WebAuthn and FIDO2 for (“local”) user authentication and OpenID Connect for relaying the authentication information to (“external”) Relying Parties.

And yes, one of those Relying Parties could be mod_auth_openidc, the Apache module that enables users to authenticate to an Apache webserver using OpenID Connect.

So there you go: rather than implementing WebAuthn or FIDO2 (and user/credential management…) in a single Apache module, or write a dedicated WebAuthn/FIDO2 Provider alongside of it and communicate with that using a proprietary protocol, the more sensible choice is to use the already existing OpenID Connect protocol. The Apache OpenID Connect module (mod_auth_openidc) will send users off to the OpenID Connect Provider for authentication. The Provider can use WebAuthn or FIDO2, as a single factor, or as a 2nd factor combined with traditional methods such as passwords or stronger methods such as PKI, to authenticate users and relay the information about the authenticated user back to the Apache server.

To summarise: using WebAuthn or FIDO2 to authenticate users to an Apache server/reverse-proxy is possible today by using mod_auth_openidc’s OpenID Connect implementation. This module can send users off for authentication towards a WebAuthn/FIDO2-enabled Provider, such as Keycloak, Okta, Ping, ForgeRock, etc. This setup allows for a very flexible approach that leverages existing standards and implementations to their maximum potential: OpenID Connect for (federated) Single Sign On, WebAuthn and FIDO2 for (centralized) user authentication.
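For illustration only, a minimal sketch of such a setup with mod_auth_openidc is shown below; the hostnames, client ID, secrets and paths are placeholders, and the WebAuthn/FIDO2 policy itself is configured at the Provider, not in Apache.

# Apache reverse proxy protected by mod_auth_openidc (placeholder values throughout).
OIDCProviderMetadataURL https://idp.example.com/realms/demo/.well-known/openid-configuration
OIDCClientID apache-rp
OIDCClientSecret change-me
OIDCRedirectURI https://www.example.com/protected/redirect_uri
OIDCCryptoPassphrase change-me-too

<Location /protected/>
    AuthType openid-connect
    Require valid-user
    # Hand the now-authenticated request on to the protected backend resource.
    ProxyPass http://backend.internal:8080/
    ProxyPassReverse http://backend.internal:8080/
</Location>

Whether the Provider asks for WebAuthn/FIDO2 as the only factor or as a second factor is then purely a Provider-side decision.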

Wednesday, 04. May 2022

Mike Jones: self-issued

OAuth DPoP Specification Addressing WGLC Comments

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking. As Brian reminded us […]

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking.

As Brian reminded us during the OAuth Security Workshop today, the name DPoP was inspired by a Deutsche POP poster he saw on the S-Bahn during the March 2019 OAuth Security Workshop in Stuttgart:

He considered it an auspicious sign seeing another Deutsche PoP sign in the Vienna U-Bahn during IETF 113 the same day WGLC was requested!

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html

Wednesday, 04. May 2022

Identity Woman

The Future of You Podcast with Tracey Follows

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty. The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty.

The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Monday, 02. May 2022

Phil Windley's Technometria

Is an Apple Watch Enough?

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days. Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center wa

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days.

Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center was required to keep it while they ordered the new battery from Apple (yeah, I think that's a stupid policy too). I was without my phone for 2 days and decided it was an excellent time to see if I could get by using my Apple Watch as my primary device. Here's how it went.

First things first: for this to be any kind of success you need a cellular plan for your watch and a pair of AirPods or other Bluetooth earbuds. Here's what I noticed:

- The bathroom, standing in the checkout line, and other places are boring without the distraction of my phone to read news, play Wordle, or whatever.
- Siri is your friend. I used Siri a lot more than normal due to the small screen.
- I'd already set up Apple Pay and while I don't often use it from my watch under normal circumstances, it worked great here.
- Answering the phone means keeping your AirPods in or fumbling for them every time there's a call. I found I rejected a lot of calls to avoid the hassle. (But never yours, Lynne!) Still, I was able to take and make calls just fine without a phone.
- Voicemail access is a problem. You have to call the number and retrieve them just like it's 1990 or something. This messed with my usual strategy of not answering calls from numbers I don't recognize and letting them go to voicemail, then reading the transcript to see if I want to call them back.
- Normal texts don't work that I could tell, but Apple Messages do. I used voice transcription almost exclusively for sending messages, but read them on the watch.
- Most crypto wallets are unusable without the phone.
- For the most part, I just used the Web for banking as a substitute for mobile apps and that worked fine. The one exception was USAA. The problem with USAA was 2FA. Watch apps for 2FA are "companion apps," meaning they're worthless without the phone. For TOTP 2FA, I'd mirrored to my iPad, so that worked fine. I had to use the pre-set tokens for Duo that I'd gotten when I set it up. USAA uses Verisign's VIP. It can't be mirrored. What's more, USAA's recovery relies on SMS. I didn't have my phone, so that didn't work. I was on the phone with USAA for an hour trying to figure this out. Eventually USAA decided it was hopeless and told me to conduct banking by voice. Ugh.
- Listening to music on the watch worked fine.
- I read books on my Kindle, so that wasn't a problem.
- There are a number of things I fell back to my iPad for. I've already mentioned 2FA; another is maps. Maps don't work on the watch.
- I didn't realize how many pictures I take in a day, sometimes just for utility. I used the iPad when I had to.
- Almost none of my IoT services or devices did much with the watch beyond issuing a notification. None of the Apple HomeKit stuff worked that I could see. For example, I often use a HomeKit integration with my garage door opener. That no longer worked without a phone.
- Battery life on the watch is more than adequate in normal situations. But hour-long phone calls and listening to music challenge battery life when it's your primary device.
- I didn't realize how many things are tied just to my phone number.

Using just my Apple Watch with some help from my iPad was mostly doable, but there are still rough spots. The Watch is a capable tool for many tasks, but it's not complete. I can certainly see leaving my phone at home more often now since most things work great—especially when you know you can get back to your phone when you need to. Not having my phone with me feels less scary now.

Photo Credit: IPhone 13 Pro and Apple Watch from Simon Waldherr (CC BY-SA 4.0)

Tags: apple watch iphone


Damien Bod

Implement an OpenIddict identity provider using ASP.NET Core Identity with Keycloak federation

This post shows how to setup a Keycloak external authentication in an OpenIddict identity provider using ASP.NET Core identity. Code: https://github.com/damienbod/AspNetCoreOpeniddict Setup The solution context implements OpenID Connect clients which use an OpenIddict identity provider and ASP.NET Core Identity to manage the accounts. All clients authenticate using the OpenIddict server. Keycloak i

This post shows how to setup a Keycloak external authentication in an OpenIddict identity provider using ASP.NET Core identity.

Code: https://github.com/damienbod/AspNetCoreOpeniddict

Setup

The solution context implements OpenID Connect clients which use an OpenIddict identity provider and ASP.NET Core Identity to manage the accounts. All clients authenticate using the OpenIddict server. Keycloak is used as an external authentication provider on the OpenIddict identity provider. Users can be created in either identity provider. If only users are created on the Keycloak server, the direct sign in with username, password can be completely disabled in the OpenIddict server. This setup allows for great flexibility and the MFA can be forced anywhere depending on the requirements. Companies using the product can use their own identity provider.

Integrating an OpenID Connect conformant client is really simple in ASP.NET Core and no extra NuGet packages are required. You only need extra packages when the IdP is not conformant or does something vendor-specific. The AddOpenIdConnect method is used to implement the Keycloak server. I set this up using this github repo and the Keycloak docs. The SignInScheme needs to be set to the correct value. ASP.NET Core Identity is used to map the external identities, and “Identity.External” is the default scheme used for this. If you need to disable local identity users, the ASP.NET Core Identity logic can be scaffolded into the project and adapted.

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddOpenIdConnect("KeyCloak", "KeyCloak", options =>
    {
        options.SignInScheme = "Identity.External";

        //Keycloak server
        options.Authority = Configuration.GetSection("Keycloak")["ServerRealm"];
        //Keycloak client ID
        options.ClientId = Configuration.GetSection("Keycloak")["ClientId"];
        //Keycloak client secret in user secrets for dev
        options.ClientSecret = Configuration.GetSection("Keycloak")["ClientSecret"];
        //Keycloak .wellknown config origin to fetch config
        options.MetadataAddress = Configuration.GetSection("Keycloak")["Metadata"];
        //Require keycloak to use SSL

        options.GetClaimsFromUserInfoEndpoint = true;
        options.Scope.Add("openid");
        options.Scope.Add("profile");
        options.SaveTokens = true;
        options.ResponseType = OpenIdConnectResponseType.Code;
        options.RequireHttpsMetadata = false; //dev

        options.TokenValidationParameters = new TokenValidationParameters
        {
            NameClaimType = "name",
            RoleClaimType = ClaimTypes.Role,
            ValidateIssuer = true
        };
    });

The Keycloak configuration is added to the app.settings. We do not need much: a standard OpenID Connect confidential code flow client with PKCE is set up to authenticate using Keycloak. This can be adapted or changed in almost any way depending on the server requirements. You should stick to the standards when implementing this. Using PKCE is now required on most deployments when using the OIDC code flow. Any identity provider solution which does not support this should be avoided.

"Keycloak": { "ServerRealm": "http://localhost:8080/realms/myrealm", "Metadata": "http://localhost:8080/realms/myrealm/.well-known/openid-configuration", "ClientId": "oidc-code-pkce", // "ClientSecret": "--in user secrets or keyvault--" },

The Keycloak server is set up to use the standard settings. You could improve the security of this using token exchange or further supported specifications from Keycloak and ASP.NET Core.

Notes:

By using OpenIddict and ASP.NET Core to federate to further identity providers, it is really easy to support best-practice application security and also integrate third-party identity systems without surprises. Having full control of your identity provider is a good thing, and by using federation, you do not need to manage the user accounts; this can be fully implemented on the client system. If the application requires strong MFA like FIDO2, this can also be easily implemented in ASP.NET Core. Using some of the cloud-solution IdPs prevents you from implementing strong application security. These cloud systems do provide excellent user accounting, and it would be nice to combine this with a top identity provider.

If implementing a product which needs to support multiple different identity providers from different clients, then you should implement the identity provider as part of your solution context and federate to the client systems.

Links

https://documentation.openiddict.com/

https://github.com/openiddict/openiddict-core

https://docs.microsoft.com/en-us/java/openjdk/download

https://github.com/tuxiem/AspNetCore-keycloak

https://wjw465150.gitbooks.io/keycloak-documentation/content/server_installation/topics/network/https.html

https://www.keycloak.org/documentation.html

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/identity



Wednesday, 27. April 2022

Mike Jones: self-issued

OpenID Presentations at April 2022 OpenID Workshop and IIW

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google: OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF) I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, October 1, 2019: Introduction to OpenID Connect (PowerPoint) […]

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google:

OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, October 1, 2019:

Introduction to OpenID Connect (PowerPoint) (PDF)

Tuesday, 26. April 2022

Phil Windley's Technometria

We Need a Self-Sovereign Model for IoT

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential. Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a sm

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential.

Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a smart home," the company seems to have abruptly shut down their cloud service without warning or providing a way for customers to continue using their products, which depend on Insteon's private cloud. High-ranking Insteon execs even removed their affiliation with Insteon from their LinkedIn profiles. Eek!

Fortunately, someone reverse-engineered the Insteon protocol a while back and there are some open-source solutions for people who are able to run their own servers or know someone who can do it for them. Home Assistant is one. OpenHAB is another.

Insteon isn't alone. Apparently iHome terminated its service on April 2, 2022. Other smarthome companies or services who have gone out of business include Revolv, Insignia, Wink, and Staples Connect.

The problem with Insteon, and every other IoT and smart home company I'm aware of, is that their model looks like this:

Private cloud IoT model; grey box represents domain of control

In this model, you:

Buy the device
Download their app
Create an account on the manufacturer's private cloud
Register your device
Control the device from the app

All the data and the device are inside the manufacturer's private cloud. They administer it all and control what you can do. Even though you paid for the device, you don't own it because it's worthless without the service the manufacturer provides. If they take your account away (or everyone's account, in the case of Insteon), you're out of luck. Want to use your motion detector to turn on the lights? Good luck unless they're from the same company1. I call this the CompuServe of Things.

The alternative is what I call the self-sovereign IoT (SSIoT) model:

Self-sovereign IoT model; grey box represents domain of control

Like the private-cloud model, in the SSIoT model, you also:

Buy the device
Download an app
Establish a relationship with a compatible service provider
Register the device
Control the device using the app

The fact that the flows for these two models are the same is a feature. The difference lies elsewhere: in SSIoT, your device, the data about you, and the service are all under your control. You might have a relationship with the device manufacturer, but you and your devices are not under their administrative control. This might feel unworkable, but I've proven it's not. Ten years ago we built a connected-car platform called Fuse that used the SSIoT model. All the data was under the control of the person or persons who owned the fleet and could be moved to an alternate platform without loss of data or function. People used the Fuse service that we provided, but they didn't have to. If Fuse had gotten popular, other service providers could have provided the same or similar service based on the open-model and Fuse owners would have had a choice of service providers. Substitutability is an indispensable property for the internet of things.

All companies die. Some last a long time, but even then they frequently kill off products. Having to buy all your gear from a single vendor and use their private cloud puts your IoT project at risk of being stranded, like Insteon customers have been. Hopefully, the open-source solutions will provide the basis for some relief to them. But the ultimate answer is interoperability and self-sovereignty as the default. That's the only way we ditch the CompuServe of Things for a real internet of things.

Notes

1. Apple HomeKit and Google Home try to solve this problem, but you're still dependent on the manufacturer to provide the basic service. And making the administrative domain bigger is nice, but doesn't result in self-sovereignty.

Tags: picos iot interoperability cloud fuse ssi

Wednesday, 20. April 2022

Damien Bod

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API

This article shows how Azure AD continuous access (CA) can be used in an ASP.NET Core UI application to force MFA when using an administrator API from a separate ASP.NET Core application. Both applications are secured using Microsoft.Identity.Web. An ASP.NET Core Razor Page application is used to implement the UI application. The API is implemented […]

This article shows how Azure AD continuous access (CA) can be used in an ASP.NET Core UI application to force MFA when using an administrator API from a separate ASP.NET Core application. Both applications are secured using Microsoft.Identity.Web. An ASP.NET Core Razor Page application is used to implement the UI application. The API is implemented with Swagger (OpenAPI) and ASP.NET Core. An Azure AD conditional access authentication context is used to implement the MFA requirement. An Azure AD CAE policy is set up which requires MFA and uses the context.

Code https://github.com/damienbod/AspNetCoreAzureADCAE

History

2022-05-25 Update info about AAD license requirements.

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core Force MFA in Blazor using Azure AD and Continuous Access

CAE is available to everyone, including free tenants.
CA Auth Context requires AAD P1, as it's dependent on the CA feature, which is only available to AAD P1 and above.

Requirements

Azure AD tenant with P1 license for CA Auth Context
Microsoft Graph

Create a Conditional access Authentication Context

A Continuous access evaluation (CAE) authentication context was created using Microsoft Graph and can be viewed in the portal. In this demo, like the Microsoft sample application, three authentication contexts are created using Microsoft Graph. The Policy.Read.ConditionalAccess and Policy.ReadWrite.ConditionalAccess permissions are required to change the CAE authentication contexts.

This is only needed to create the CA authentication contexts. Once created, they can be used in the target applications.

public async Task CreateAuthContextViaGraph(string acrKey, string acrValue)
{
    await _graphAuthContextAdmin.CreateAuthContextClassReferenceAsync(
        acrKey, acrValue,
        $"A new Authentication Context Class Reference created at {DateTime.UtcNow}",
        true);
}

public async Task<AuthenticationContextClassReference?> CreateAuthContextClassReferenceAsync(
    string id, string displayName, string description, bool IsAvailable)
{
    try
    {
        var acr = await _graphServiceClient
            .Identity
            .ConditionalAccess
            .AuthenticationContextClassReferences
            .Request()
            .AddAsync(new AuthenticationContextClassReference
            {
                Id = id,
                DisplayName = displayName,
                Description = description,
                IsAvailable = IsAvailable,
                ODataType = null
            });

        return acr;
    }
    catch (ServiceException e)
    {
        _logger.LogWarning(
            "We could not add a new ACR: {exception}", e.Error.Message);
        return null;
    }
}

The created conditional access authentication context can be viewed in the portal in the Security blade of the Azure AD tenant.

If you open the context, you can see the id used. This is used in the applications to check the MFA requirement.

Create a CAE policy to use the context

Now that an authentication context exists, a CAE policy can be created to use it. I created a policy to require MFA.

Implement the API and use the CAE context

The API application needs to validate whether the access token contains the acrs claim with the c1 value. If CAE is activated and the claim is included in the token, then any policies which use this CAE authentication context must be fulfilled, or no events have been received which inform the client that this access token is invalid. A lot of things need to be implemented correctly for this to work. If configured correctly, an MFA step-up authentication is required to use the API. The API returns an unauthorized response, as specified in the OpenID Connect signals and events specification, if the claim is missing from the access token. This is handled by the calling UI application.

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    /// <summary>
    /// Retrieves the acrsValue from database for the request method.
    /// Checks if the access token has acrs claim with acrsValue.
    /// If does not exists then adds WWW-Authenticate and throws UnauthorizedAccessException exception.
    /// </summary>
    public void CheckForRequiredAuthContext(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                if (IsClientCapableofClaimsChallenge(context))
                {
                    string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;

                    var base64str = Convert.ToBase64String(Encoding.UTF8.GetBytes(
                        "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authContextId + "\"}}}"));

                    context.Response.Headers.Append("WWW-Authenticate",
                        $"Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\", client_id=\"" + clientId + "\", error=\"insufficient_claims\", claims=\"" + base64str + "\", cc_type=\"authcontext\"");

                    context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;

                    string message = string.Format(CultureInfo.InvariantCulture,
                        "The presented access tokens had insufficient claims. Please request for claims requested in the WWW-Authentication header and try again.");

                    context.Response.WriteAsync(message);
                    context.Response.CompleteAsync();
                    throw new UnauthorizedAccessException(message);
                }
                else
                {
                    throw new UnauthorizedAccessException("The caller does not meet the authentication bar to carry our this operation. The service cannot allow this operation");
                }
            }
        }
    }

    /// <summary>
    /// Evaluates for the presence of the client capabilities claim (xms_cc) and accordingly returns a response if present.
    /// </summary>
    public bool IsClientCapableofClaimsChallenge(HttpContext context)
    {
        string clientCapabilitiesClaim = "xms_cc";

        if (context == null || context.User == null || context.User.Claims == null
            || !context.User.Claims.Any())
        {
            throw new ArgumentNullException(nameof(context),
                "No Usercontext is available to pick claims from");
        }

        var ccClaim = context.User.FindAll(clientCapabilitiesClaim)
            .FirstOrDefault(x => x.Type == "xms_cc");

        if (ccClaim != null && ccClaim.Value == "cp1")
        {
            return true;
        }

        return false;
    }
}

The API uses the scoped CAE service to validate the CAE authentication context; either the data is returned or an unauthorized exception is returned. The Authorize attribute is also used to validate the JWT bearer token and to validate that the authentication policy is supported. You could probably implement middleware to check the CAE authentication context as well.

[Authorize(Policy = "ValidateAccessTokenPolicy",
    AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
[ApiController]
[Route("[controller]")]
public class ApiForUserDataController : ControllerBase
{
    private readonly CaeClaimsChallengeService _caeClaimsChallengeService;

    public ApiForUserDataController(CaeClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // returns unauthorized exception with WWW-Authenticate header if CAE claim missing in access token
        // handled in the caller client exception with challenge returned if not ok
        _caeClaimsChallengeService.CheckForRequiredAuthContext(AuthContextId.C1, HttpContext);

        return new List<string>
        {
            "admin API CAE protected data 1",
            "admin API CAE protected data 2"
        };
    }
}

The program file adds the services and secures the API using Microsoft.Identity.Web. A policy is created to be used on the controllers.

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddScoped<CaeClaimsChallengeService>();
builder.Services.AddDistributedMemoryCache();

builder.Services.AddMicrosoftIdentityWebApiAuthentication(builder.Configuration)
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
JwtSecurityTokenHandler.DefaultMapInboundClaims = false;
//IdentityModelEventSource.ShowPII = true;

builder.Services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        // Validate id of application for which the token was created
        // In this case the UI application
        validateAccessTokenPolicy.RequireClaim("azp", builder.Configuration["AzpValidClientId"]);

        // only allow tokens which used "Private key JWT Client authentication"
        //
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // Indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});

The app settings configure the ASP.NET Core API project like any Azure AD Microsoft.Identity.Web client.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]", "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]", "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]", "ClientSecret": "[Copy the client secret added to the app from the Azure portal]", "ClientCertificates": [ ], // the following is required to handle Continuous Access Evaluation challenges "ClientCapabilities": [ "cp1" ], "CallbackPath": "/signin-oidc" }, "AzpValidClientId": "7c839e15-096b-4abb-a869-df9e6b34027c", "GraphBeta": { "BaseUrl": "https://graph.microsoft.com/beta", "Scopes": "Policy.Read.ConditionalAccess Policy.ReadWrite.ConditionalAccess" },

Now that the unauthorized exception is returned to the calling UI interactive client, this needs to be handled.

Implement the ASP.NET Core Razor Page with step up MFA check

The UI project is an ASP.NET Core web app. The Admin API scope is requested so that the app can access the admin API.

builder.Services.AddDistributedMemoryCache();

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAd",
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true)
    .EnableTokenAcquisitionToCallDownstreamApi(new[] { builder.Configuration.GetSection("AdminApi")["Scope"] })
    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

The app uses a scoped service to request data from the administrator API. Using the ITokenAcquisition interface, an access token is requested for the API. If an unauthorized response is returned, a WebApiMsalUiRequiredException is thrown containing the response headers.

public async Task<IEnumerable<string>?> GetApiDataAsync()
{
    var client = _clientFactory.CreateClient();

    var scopes = new List<string> { _adminApiScope };
    var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(scopes);

    client.BaseAddress = new Uri(_adminApiBaseUrl);
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await client.GetAsync("ApiForUserData");
    if (response.IsSuccessStatusCode)
    {
        var stream = await response.Content.ReadAsStreamAsync();
        var payload = await JsonSerializer.DeserializeAsync<List<string>>(stream);
        return payload;
    }

    // This exception can be used to handle a claims challenge
    throw new WebApiMsalUiRequiredException(
        $"Unexpected status code in the HttpResponseMessage: {response.StatusCode}.", response);
}
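The WebApiMsalUiRequiredException used above is a custom exception from the sample and its definition is not shown here. A minimal sketch of what such an exception could look like, assuming it only needs to carry the response headers for the challenge handling below:

// Hypothetical sketch of the custom exception: it carries the HTTP response headers
// so that the calling Razor page can extract the claims challenge from them.
// requires using System.Net.Http.Headers;
public class WebApiMsalUiRequiredException : Exception
{
    public HttpResponseHeaders Headers { get; }

    public WebApiMsalUiRequiredException(string message, HttpResponseMessage response)
        : base(message)
    {
        Headers = response.Headers;
    }
}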

The ASP.NET Core Razor page is used to handle the WebApiMsalUiRequiredException. If the exception is caught, a claims challenge requesting the required authentication context is created from the response headers and returned to the UI. The user is then redirected to authenticate again and must fulfil the CAE policy before the data can be requested.

public async Task<IActionResult> OnGet()
{
    try
    {
        Data = await _userApiClientService.GetApiDataAsync();
        return Page();
    }
    catch (WebApiMsalUiRequiredException hex)
    {
        // Challenges the user if exception is thrown from Web API.
        try
        {
            var claimChallenge = WwwAuthenticateParameters
                .GetClaimChallengeFromResponseHeaders(hex.Headers);

            _consentHandler.ChallengeUser(new string[] { "user.read" }, claimChallenge);

            return Page();
        }
        catch (Exception ex)
        {
            _consentHandler.HandleException(ex);
        }

        _logger.LogInformation("{hexMessage}", hex.Message);
    }

    return Page();
}

MFA is configured in a policy using the CAE conditional access authentication context.

Notes

The application only works with Azure AD and only if the continuous access evaluation policies are implemented correctly by the Azure tenant admin. You cannot enforce this in the application; you can only make use of it.

Links

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Tuesday, 19. April 2022

Phil Windley's Technometria

John Oliver on Surveillance Capitalism


Summary: John Oliver's Last Week Tonight took on data brokers and surveillance capitalism in a recent show and he did his usual great job of explaining a serious topic in an entertaining way. Definitely worth the watch.

Surveillance capitalism is a serious subject that can be hard to explain, let alone make interesting. I believe that it threatens our digital future. The question is "what to do about it?"

John Oliver's Last Week Tonight recently took on the task of explaining surveillance capitalism, how it works, and why it's a threat. I recommend watching it. Oliver does a great job of explaining something important, complex, and, frankly, a little boring in a way that is both funny and educational.

But he didn't just explain it. He took some steps to do something about it.

In researching this story, we realized that there is any number of perfectly legal bits of f—kery that we could engage in. We could, for example, use data brokers to go phishing for members of congress, by creating a demographic group consisting of men, age 45 and up, in a 5-mile radius of the U.S. Capitol, who had previously visited sites regarding or searched for terms including divorce, massage, hair loss and mid-life crisis.

The result is a collection of real data from their experiment that Oliver threatens to reveal if Congress doesn't act. The ads they ran were Marriage shouldn't be a prison, Can you vote twice?, and Ted Cruz erotic fan fiction. I'm not sure it will actually light a fire under so moribund an institution as Congress, but it's worth a shot!

Tags: privacy surveillance+capitalism humor

Saturday, 16. April 2022

Jon Udell

Capture the rain


It’s raining again today, and we’re grateful. This will help put a damper on what was shaping up to be a terrifying early start of fire season. But the tiny amounts won’t make a dent in the drought. The recent showers bring us to 24 inches of rain for the season, about 2/3 of normal. But 10 of those 24 inches came in one big burst on Oct 24.

Here are a bunch of those raindrops sailing down the Santa Rosa creek to the mouth of the Russian River at Jenner.

With Sam Learner’s amazing River Runner we can follow a drop that fell in the Mayacamas range as it makes its way to the ocean.

Until 2014 I’d only ever lived east of the Mississippi River, in Pennsylvania, Michigan, Maryland, Massachusetts, and New Hampshire. During those decades there may never have been a month with zero precipitation.

I still haven’t adjusted to a region where it can be dry for many months. In 2017, the year of the devastating Tubbs Fire, there was no rain from April through October.

California relies heavily on the dwindling Sierra snowpack for storage and timed release of water. Clearly we need a complementary method of storage and release, and this passage in Kim Stanley Robinson’s Ministry for the Future imagines it beautifully.

Typically the Sierra snowpack held about fifteen million acre-feet of water every spring, releasing it to reservoirs in a slow melt through the long dry summers. The dammed reservoirs in the foothills could hold about forty million acre-feet when full. Then the groundwater basin underneath the central valley could hold around a thousand million acre-feet; and that immense capacity might prove their salvation. In droughts they could pump up groundwater and put it to use; then during flood years they needed to replenish that underground reservoir, by capturing water on the land and not allow it all to spew out the Golden Gate.

Now the necessity to replumb the great valley for recharge had forced them to return a hefty percentage of the land to the kind of place it had been before Europeans arrived. The industrial agriculture of yesteryear had turned the valley into a giant factory floor, bereft of anything but products grown for sale; unsustainable ugly, devastated, inhuman, and this in a place that had been called the “Serengeti of North America,” alive with millions of animals, including megafauna like tule elk and grizzly bear and mountain lion and wolves. All those animals had been exterminated along with their habitat, in the first settlers’ frenzied quest to use the valley purely for food production, a kind of secondary gold rush. Now the necessity of dealing with droughts and floods meant that big areas of the valley were restored, and the animals brought back, in a system of wilderness parks or habitat corridors, all running up into the foothills that ringed the central valley on all sides.

The book, which Wikipedia charmingly classifies as cli-fi, grabbed me from page one and never let go. It’s an extraordinary blend of terror and hope. But this passage affected me in the most powerful way. As Marc Reisner’s Cadillac Desert explains, and as I’ve seen for myself, we’ve already engineered the hell out of California’s water systems, with less than stellar results.

Can we redo it and get it right this time? I don’t doubt our technical and industrial capacity. Let’s hope it doesn’t take an event like the one the book opens with — a heat wave in India that kills 20 million people in a week — to summon the will.


Werdmüller on Medium

Elon, Twitter, and the future of social media


There’s no world where nationalists get what they want.

Continue reading on Medium »

Wednesday, 13. April 2022

Habitat Chronicles

Game Governance Domains: a NFT Support Nightmare


“I was working on an online trading-card game in the early days that had player-to-player card trades enabled through our servers. The vast majority of our customer support emails dealt with requests to reverse a trade because of some kind of trade scams. When I saw Hearthstone’s dust system, I realized it was genius; they probably cut their support costs by around 90% with that move alone.”

Ian Schreiber
A Game’s Governance Domain

There have always been key governance requirements for object trading economies in online games, even before user-generated-content enters the picture.  I call this the game’s object governance domain.

Typically, an online game object governance domain has the following features (amongst others omitted for brevity):

There is usually at least one fungible token currency
There is often a mechanism for player-to-player direct exchange
There is often one or more automatic markets to exchange between tokens and objects
    May be player-to-player transactions
    May be operator-to-player transactions (aka vending and recycling machinery)
    Managed by the game operator
There is a mechanism for reporting problems/disputes
There is a mechanism for adjudicating conflicts
There are mechanisms for resolving disputes, including:
    Reversing transactions
    Destroying objects
    Minting and distributing objects
    Minting and distributing tokens
    Account, character, and legal sanctions
    Rarely: changes to TOS and Community Guidelines


In short, the economy is entirely in the ultimate control of the game operator. In effect, anything can be “undone” and injured parties can be “made whole” through an entire range of solutions.

Scary Future: Crypto? Where’s Undo?

Introducing blockchain tokens (BTC, for example) means that certain transactions become “irreversible”, since all transactions on the chain are 1) Atomic and 2) Expensive. In contrast, many thousands of credit-card transactions are reversed every minute of every day (accidental double charges, stolen cards, etc.) Having a market to sell an in-game object for BTC will require extending the governance domain to cover very specific rules about what happens when the purchaser has a conflict with a transaction. Are you really going to tell customers “All BTC transactions are final. No refunds. Even if your kid spent the money without permission. Even if someone stole your wallet”?

Nightmare Future: Game UGC & NFTs? Ack!

At least with your own game governance domain, you had complete control over IP presented in your game and some control, or at least influence, over the game’s economy. But it gets pretty intense to think about objects/resources created by non-employees being purchased/traded on markets outside of your game governance domain.

When your game allows content that was not created within that game’s governance domain, all bets are off when it comes to trying to service customer support calls. And there will be several orders of magnitude more complaints. Look at Twitter, Facebook, and Youtube and all of the mechanisms they need to support IP-related complaints, abuse complaints, and robot-spam content. Huge teams of folks spending millions of dollars in support of Machine Learning are not able to stem the tide. Those companies’ revenue depends primarily on UGC, so that’s what they have to deal with.

NFTs are no help. They don’t come with any governance support whatsoever. They are an unreliable resource pointer. There is no way to make any testable claims about any single attribute of the resource. When they point to media resources (video, jpg, etc.) there is no way to verify that the resource reference is valid or legal in any governance domain. Might as well be whatever someone randomly uploaded to a photo service – oh wait, it is.

NFTs have been stolen, confused, hijacked, phished, rug-pulled, wash-traded, etc. NFT Images (like all internet images) have been copied, flipped, stolen, misappropriated, and explicitly transformed. There is no undo, and there is no governance domain. OpenSea, because they run a market, gets constant complaints when there is a problem, but they can’t reverse anything. So they madly try to “prevent bad listings” and “punish bad accounts” – all closing the barn door after the horse has left. Oh, and now they are blocking IDs/IPs from sanctioned countries.

So, even if a game tries to accept NFT resources into their game – they end up in the same situation as OpenSea – inheriting all the problems of irreversibility, IP abuse, plus new kinds of harassment with no real way to resolve complaints.

Until blockchain tokens have RL-bank-style undo, and decentralized trading systems provide mechanisms for a reasonable standard of governance, online games should probably just stick with what they know: “If we made it, we’ll deal with any governance problems ourselves.”


Phil Windley's Technometria

Easier IoT Deployments with LoraWan and Helium


Summary: Connectivity requirements add lots of friction to large-scale IoT deployments. LoRaWAN, and the Helium network, just might be a good, universal solution.

I've been interested in the internet of things (IoT) for years, even building and selling a connected car product called Fuse at one point. One of the hard parts of IoT is connectivity, getting the sensors on some network so they can send data back to wherever it's aggregated, analyzed, or used to take action. Picos are a good solution for the endpoint—where the data ends up—but the sensor still has to get connected to the internet.

Wifi, Bluetooth, and cellular are the traditional answers. Each has their limitations in IoT.

Wifi has limited range and, outside the home environment, usually needs a separate device-only network because of different authentication requirements. If you're doing a handful of devices it's fine, but it doesn't easily scale to thousands. Wifi is also power hungry, making it a poor choice for battery-powered applications.

Bluetooth's range is even more limited, requiring the installation of Bluetooth gateways, and Bluetooth is also not very secure. Bluetooth is relatively good with power, though. I've had a temperature sensor on Bluetooth that ran over a year on a 2025 coin-cell battery. Still, battery replacement can end up being a real maintenance headache.

Cellular is relatively ubiquitous, but it can be expensive and hard to manage. Batteries work for cell phones because people charge them every night. That's not reasonable for many IoT applications, so cellular-based sensors usually need to be powered.

Of course, there are other choices using specialized IoT protocols like ZWave, Zigbee, and Insteon, for example. These all require specialized hubs that must be bought, managed, and maintained. To avoid single points of failure, multiple hubs are needed. For a large industrial deployment this might be worth the cost and effort. Bottom line: every large IoT project spends a lot of time and money designing and managing the connectivity infrastructure. This friction reduces the appeal of large-scale IoT deployments.

Enter LoraWAN, a long-range (10km), low-power wireless protocol for IoT. Scott Lemon told me about LoRaWAN recently and I've been playing with it a bit. Specifically, I've been playing with Helium, a decentralized LoRaWAN network.

Helium is a LoRaWAN network built from hotspots run by almost anyone. In one of the most interesting uses of crypto I've seen, Helium pays people helium tokens for operating hotspots. They call the model "proof of coverage". You get paid two ways: (1) providing coverage for a given geographical area and (2) moving packets from the radio to the internet. This model has provided amazing coverage with over 700,000 hotspots deployed to date. And Helium expended very little capital to do it, compared with building out the infrastructure on their own.

I started with one of these Dragino LHT65 temperature sensors. The fact that I hadn't deployed my own hotspot was immaterial because there's plenty of coverage around me.

LHT65 Temperature Sensor

Unlike a Wifi network, you don't put the network credentials in the device, you put the devices credentials (keys) in the network. Once I'd done that, the sensor started connecting to hotspots near my house and transmitting data. Today I've been driving around with it in my truck and it's roaming onto other hotspots as needed, still reporting temperatures.

Temperature Sensor Coverage on Helium

Transmitting data on the Helium network costs money. You pay for data use with data credits (DC). You buy DC with the Helium token (HNT). Each DC costs a fixed rate of $0.00001 per 24 bytes of data. That's about $0.42/MB, which isn't dirt cheap when compared to your mobile data rate, but you're only paying for the data you use. For 100 sensors, transmitting 3 packets per hour for a year would cost $2.92. If each of those sensors needed a SIM card and cellular account, the comparable price would be orders of magnitude higher. So, the model fits IoT sensor deployments well. And the LHT65 has an expected battery life of 10 years (at 3 packets per hour) which is also great for large-scale sensor deployments.
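A quick back-of-the-envelope sketch of the data credit pricing described above. The rate and the rough $0.42/MB figure come from the post; the assumption that one small uplink packet (24 bytes or less) consumes exactly one DC is mine:

// Rough Helium data credit cost estimate (illustrative only)
double dcPriceUsd = 0.00001;      // one DC pays for up to 24 bytes
int bytesPerDc = 24;

double dcPerMegabyte = Math.Ceiling(1_000_000.0 / bytesPerDc);   // ~41,667 DC
double usdPerMegabyte = dcPerMegabyte * dcPriceUsd;              // ~$0.42 per MB

Console.WriteLine($"Cost per MB: {usdPerMegabyte:F2} USD");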

Being able to deploy sensors without having to also worry about building and managing the connection infrastructure is a big deal. I could put 100 sensors up around a campus, a city, a farm, or just about anywhere and begin collecting the data from them without worrying about the infrastructure, the cost, or maintenance. My short term goal is to start using these with Picos and build out some rulesets and the UI for using and managing LoRaWAN sensors. I also have one of these SenseCAP M1 LoRaWAN gateways that I'm going to deploy in Idaho later (there are already several hotspots near my home in Utah). I'll let you know how all this goes.

Photo Credit: Helium discharge tube from Heinrich Pniok (CC BY-NC-ND 3.0). Image was cropped vertically.

Tags: iot helium lorawan picos

Monday, 11. April 2022

Justin Richer

The GNAPathon


At the recent IETF 113 meeting in Vienna, Austria, we put the GNAP protocol to the test by submitting it as a Hackathon project. Over the course of the weekend, we built out GNAP components and pointed them at each other to see what stuck. Here’s what we learned.

Our Goals

GNAP is a big protocol, and there was no reasonable way for us to build out literally every piece and option of it in our limited timeframe. While GNAP’s transaction negotiation patterns make the protocol fail gracefully when two sides don’t have matching features, we wanted to aim for success. As a consequence, we decided to focus on a few key interoperability points:

HTTP Message Signatures for key proofing, with Content Digest for protecting the body of POST messages.
Redirect-based interaction, to get there and back.
Dynamic keys, not relying on pre-registration at the AS.
Single access tokens.

While some of the components built out did support additional features, these were the ones we chose as a baseline to make everything work as best as it could. We laid out our goals to get these components to talk to each other in increasingly complete layers.

Our goal for the hackathon wasn’t just to create code; we wanted to replicate a developer’s experience when approaching GNAP for the first time. Wherever possible, we tried to use libraries to cover existing functionality, including HTTP Signatures, cryptographic primitives, and HTTP Structured Fields. We also used the existing XYZ Java implementation of GNAP to test things out.

New Clients

With all of this in hand, we set about building some clients from scratch. Since we had a functioning AS to build against, focusing on the clients allowed us to address different platforms and languages than we otherwise had. We settled on three very different kinds of client software:

A single page application, written in JavaScript with no backend components.
A command line application, written in PHP.
A web application, written in PHP.

By the end of the weekend, we were able to get all three of these working, and the demonstration results are available as part of the hackathon readout. This might not seem like much, but the core functionality of all three clients was written completely from scratch, including the HTTP Signatures implementation.

Getting Over the Hump

Importantly, we also tried to work in such a way that the different components could be abstracted out after the fact. While we could have written very GNAP-specific code to handle the key handling and signing, we opted to instead create generic functions that could sign and present any HTTP message. This decision had two effects.

First, once we had the signature method working, the rest of the GNAP implementation went very, very quickly. GNAP is designed in such a way as to leverage HTTP, JSON, and security layers like HTTP Message Signatures as much as it can. What this meant for us during implementation is that getting the actual GNAP exchange to happen was a simple set of HTTP calls and JSON objects. All the layers did their job appropriately, keeping abstractions from leaking between them.

Second, this will give us a chance to extract the HTTP Message Signature code into truly generic libraries across different languages. HTTP Message Signatures is used in places other than GNAP, and so a GNAP implementor is going to want to use a dedicated library for this core function instead of having to write their own like we did.

We had a similar reaction to elements like structured field libraries, which helped with serialization and message-building, and cryptographic functions. As HTTP Message Signatures in particular gets built out more across different ecosystems, we’ll see more and more support for fundamental tooling.

Bug Fixes

Another important part of the hackathon was the discovery and patching of bugs in the existing XYZ authorization server and Java Servlet web-based client code. At the beginning of the weekend, these pieces of software worked with each other. However, it became quickly apparent that there were a number of issues and assumptions in the implementation. Finding things like this is one of the best things that can come out of a hackathon — by putting different code from different developers against each other, you can figure out where code is weak, and sometimes, where the specification itself is unclear.

Constructing the Layers

Probably the most valuable outcome of the hackathon, besides the working code itself, is a concrete appreciation of how clear the spec is from the eyes of someone trying to build to it. We came out of the weekend with a number of improvements that need to be made to GNAP and HTTP Message Signatures, but also ideas on what additional developer support there should be in the community at large. These things will be produced and incorporated over time, and hopefully make the GNAP ecosystem brighter and stronger as a result.

In the end, a specification isn’t real unless you have running code to prove it. Even more so if people can use that code in their own systems to get real work done. GNAP, like most standards, is just a layer in the internet stack. It builds on other technologies, and technologies will be built on it.

Our first hackathon experience has shown this to be a pretty solid layer. Come, build with us!


Doc Searls Weblog

What’s up with Dad?


My father was always Pop. He was born in 1908. His father, also Pop, was born in 1863. That guy’s father was born in 1809, and I don’t know what his kids called him. I’m guessing, from the chart above, it was Pa. My New Jersey cousins called their father Pop. Uncles and their male contemporaries of the same generation in North Carolina, however, were Dad or Daddy.

To my kids, I’m Pop or Papa. Family thing, again.

Anyway, I’m wondering what’s up, or why’s up, with Dad?

 


reb00ted

Web2's pervasive blind spot: governance


What is the common theme in these commonly stated problems with the internet today?

Too much tracking you from one site to another.
Wrong approach to moderation (too heavy-handed, too light, inconsistent, contextually inappropriate, etc).
Too much fake news.
Too many advertisements.
Products that make you addicted, or are otherwise bad for your mental health.

In my view, the common theme underlying these problems is: “The wrong decisions were made.” That’s it. Not technology, not product, not price, not marketing, not standards, not legal, nor whatever else. Just that the wrong decisions were made.

Maybe it was:

The wrong people made the decisions. Example: should it really be Mark Zuckerberg who decides which of my friends’ posts I see?

The wrong goals were picked by the decisionmakers and they are optimizing for those. Example: I don’t want to be “engaged” more and I don’t care about another penny per share for your earnings release.

A lack of understanding or interest in the complexity of a situation, and inability for the people with the understanding to make the decision instead. Example: are a bunch of six-figure Silicon Valley guys really the ones who should decide what does and does not inflame religious tensions in a low-income country half-way around the world with a societal structure that’s fully alien to liberal Northern California?

What do we call the thing that deals with who gets to decide, who has to agree, who can keep them from doing bad things and the like? Yep, it’s “governance”.

Back in the 1980s and 90s, all we cared about was code. So when, in the minds of some users, the commercial powers started abusing their power, those users pushed back with projects such as GNU and open source.

But we’ve long moved on from there. In one of the defining characteristics of Web2 over Web1, data has become more important than the code.

Starting about 15 years ago, it was suddenly the data scientists and machine learning people who started getting the big bucks, not the coders any more. Today the fight is not about who has the code any more; it is about who has the data.

Pretty much the entire technology industry understands that now. What it doesn’t understand yet is that the consumer internet crisis we are in is best understood as a need to add another layer to the sandwich: not just the right code, and not just the right data on top of it, but also the right governance: have the right people decide for the right reasons, and have mechanisms to get rid of the decisionmakers if the affected community decides they made the wrong decisions or had the wrong reasons.

Have you noticed that pretty much all senior technologists that dismiss Web3 — usually in highly emotional terms – completely ignore that pretty much all the genuinely interesting innovations in the Web3 world are governance innovations? (never mind blockchain, it’s just a means to an end for those innovators).

If we had governance as part of the consumer technology sandwich, then:

Which of my friends’ posts I see should be a decision that I make with my friends, and nobody else gets a say.

Whether a product optimizes for this or that should be a decision that is made by its users, not some remote investors or power-hungry executives.

A community of people half-way around the world should determine, on its own for its own purposes, what is good for its members.

(If we had a functioning competitive marketplace, Adam Smith-style, then we would probably get this, because products that do what the customers want win over products that don’t. But we have monopolies instead that cement the decisionmaking in the wrong places for the wrong reasons. A governance problem, in other words.)

If you want to get ahead of the curve, pay attention to this. All the genuinely new stuff in technology that I’ve seen for a few years has genuinely new ideas about governance. It’s a complete game changer.

Conversely, if you build technology with the same rudimentary, often dictatorial and almost always dysfunctional governance we have had for technology in the Web1 and Web2 world, you are fundamentally building a solution for the past, not for the future.

To be clear, better governance for technology is in the pre-kindergarten stage. It’s like the Apple 1 of the personal computer – assembly required – or the Archie stage of the internet. But we would have been wrong to dismiss those as mere fads then, and it would be wrong to dismiss the crucial importance of governance now.

That, for me, is the essence of how the thing after Web2 – and we might as well call it Web3 – is different. And it is totally exciting! Because “better governance” is just another way to say: the users get to have a say!!

Thursday, 07. April 2022

Identity Woman

Media Mention: MIT Technology Review


I was quoted in the article in MIT Technology Review on April 6, 2022, “Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users.” Worldcoin, a startup built on a promise of a fairly-distributed, cryptocurrency-based universal basic income, is building a biometric database by collecting data from the financially […]

The post Media Mention: MIT Technology Review appeared first on Identity Woman.

Monday, 04. April 2022

Damien Bod

Implementing OAuth2 Client credentials flow APP to APP security using Azure AD non interactive


This article shows how to implement the OAuth client credentials flow using the Microsoft.Identity.Client Nuget package and an Azure App registration in Azure AD. The client application requires a secret, which can be an Azure App registration client secret or a certificate, to request an access token. Only tokens created for this client can be used to access the API.

Code: Azure Client credentials flows

Blogs in this series

Implementing OAuth2 APP to APP security using Azure AD from a Web APP
APP to APP security using Azure AD from a daemon app

Azure App registration setup

The Azure App registration is set up as in this blog:

Implementing OAuth2 APP to APP security using Azure AD from a Web APP

An Azure App registration was then created to request new access tokens. The access_as_application claim is validated in the API.

API

The service API is implemented to validate the access tokens. The azp claim is used to validate that the token was requested using the known client ID and secret. It does not validate who sent the access token, just that this client ID and secret were used to request it. The client credentials flow should only be used by trusted clients. The azpacr claim is used to validate how the token was requested, in this case that the client authenticated as a confidential client. Microsoft.Identity.Web is used to implement the API security.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
IdentityModelEventSource.ShowPII = true;
JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();

services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

services.AddControllers();

services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());

        // Validate id of application for which the token was created
        // In this case the CC client application
        validateAccessTokenPolicy.RequireClaim("azp", "b178f3a5-7588-492a-924f-72d7887b7e48");

        // Only allow tokens where the client authenticated with a client ID and client secret
        //
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // Indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});

Microsoft.Identity.Client OAuth Client credentials client

A console application is used to implement the trusted client credentials application. A console application cannot be trusted unless it is deployed to a trusted host. It would be better to secure this with a certificate, and better still if this was not a console application but a server-deployed Azure service which uses Key Vault to persist its secrets and managed identities to access them. The ConfidentialClientApplicationBuilder is used to create a new client credentials flow. The access token is then used to access the API.

using System.Net.Http.Headers;
using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Client;

var builder = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddUserSecrets("78cf2604-554c-4a6e-8846-3505f2c0697d")
    .AddJsonFile("appsettings.json");

var configuration = builder.Build();

// 1. Create the client credentials client
var app = ConfidentialClientApplicationBuilder.Create(configuration["AzureADServiceApi:ClientId"])
    .WithClientSecret(configuration["AzureADServiceApi:ClientSecret"])
    .WithAuthority(configuration["AzureADServiceApi:Authority"])
    .Build();

var scopes = new[] { configuration["AzureADServiceApi:Scope"] };

// 2. Get an access token
var authResult = await app.AcquireTokenForClient(scopes).ExecuteAsync();

if (authResult == null)
{
    Console.WriteLine("no auth result... ");
}
else
{
    Console.WriteLine(authResult.AccessToken);

    // 3. Use the access token to call the API
    var client = new HttpClient
    {
        BaseAddress = new Uri(configuration["AzureADServiceApi:ApiBaseAddress"])
    };

    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await client.GetAsync("ApiForServiceData");
    if (response.IsSuccessStatusCode)
    {
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

The console application uses app settings to load the trusted client configuration. The Scope is defined using the application ID from the Azure App registration and the .default scope, so any API definitions will be added to the access token. It is important to use V2 access tokens, which can be defined in the app registration manifest.

{
  "AzureADServiceApi": {
    "ClientId": "b178f3a5-7588-492a-924f-72d7887b7e48",
    // "ClientSecret": "--in-user-secrets--",
    // Authority Guid = tenant ID
    "Authority": "https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ApiBaseAddress": "https://localhost:44324",
    "Scope": "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default"
  }
}

The access tokens returned contain the azp, roles and azpacr claims. These claims, as well as the standard claim values, are used to authorize each request. Authorize attributes are used with a matching scheme and an authorization policy which validates the claims. The Azure definitions are used in the policies and the policies are used in the application; you should not use the Azure definitions directly in the application. Avoid using Roles or RequiredScope directly in controllers or other application parts. Map these in an IClaimsTransformation or in the OnTokenValidated method (a sketch follows the token claims below).

"iss": "https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0", "iat": 1648363449, "nbf": 1648363449, "exp": 1648367349, "aio": "E2ZgYLh1abHAkpeHvuz/fjX9QMNZDgA=", "azp": "b178f3a5-7588-492a-924f-72d7887b7e48", "azpacr": "1", "oid": "3952ce95-8b14-47b4-b3e6-2a5521d35ed1", "rh": "0.AR8AFVv5fyHcpku8koJIVlePwaXzeLGIdSpJkk9y14h7fkgfAAA.", "roles": [ "access_as_application", "service-api" ], "sub": "3952ce95-8b14-47b4-b3e6-2a5521d35ed1", "tid": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "uti": "WDee3wGpJkeJGUMN5CDOAA", "ver": "2.0" }

Implementing the application permissions in this way makes it possible to secure any daemon application, or any application flow with no user, for any server deployment. The client can be hosted in an ASP.NET Core application which authenticates using Azure B2C, or in a service with no user interaction. The client and the API work on their own. It is important that the client can be trusted to protect the secret used to request the access tokens.

Links:

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles?tabs=aspnetcore


Randall Degges

Real Estate vs Stocks


As I’ve mentioned before, I’m a bit of a personal finance nerd. I’ve been carefully tracking my spending and investing for many years now. In particular, I find the investing side of personal finance fascinating.

For the last eight years, my wife and I have split our investments roughly 50⁄50 between broadly diversified index funds and real estate (rental properties).

Earlier this week, I was discussing real estate investing with some friends, and we had a great conversation about why you might even consider investing in real estate in the first place. As I explained my strategy to them, I thought it might make for an interesting blog post (especially if you’re new to the world of investing).

Please note that I’m not an expert, just an enthusiastic hobbyist. Like all things I work on, I like to do a lot of research, experimentation, etc., but don’t take this as financial advice.

Why Invest in Stocks

Before discussing whether real estate or stocks is the better investment, let’s talk about how stocks work. If you don’t understand how to invest in stocks (and what rewards you can expect from them), the comparison between real estate and stocks will be meaningless.

What is a Stock?

Stocks are the simplest form of investment you can make. If you buy one share of Tesla stock for $100, you’re purchasing one tiny sliver of the entire company and are now a part-owner!

Each stock you hold can either earn or lose money, depending on how the company performs. For example, if Tesla doesn’t sell as many vehicles as the prior year, it’s likely that the company will not make as much money and will therefore be worth less than it was a year ago, so the value of the stock might drop. In this case, the one share of Tesla stock you purchased for $100 might only be worth $90 (a 10% drop in value!).

But, stocks can also make you money. If Tesla sells more vehicles than anyone expected, the company might be worth more, and now your one share of Tesla stock might be worth $110 (a 10% gain!). This gain is referred to as appreciation because the value of your stock has appreciated.

In addition to appreciation, you can also make money through dividends. While some companies choose to take any profits they make and reinvest them into the business to make more products, conduct research, etc., some companies take their profits and split them up amongst their shareholders. We call this distribution a dividend. When a dividend is paid, you’ll receive a set amount of money per share as a shareholder. For example, if Tesla issues a 10 cent dividend per share, you’ll receive $0.10 of spending money as the proud owner of one share of Tesla stock!

But here’s the thing, investing in stocks is RISKY. It’s risky because companies make mistakes, and even the most highly respected and valuable companies today can explode overnight and become worthless (Enron, anyone?). Because of this, generally speaking, it’s not advisable to ever buy individual stocks.

Instead, the best way to invest in stocks is by purchasing index funds.

What is an Index Fund?

Index funds are stocks you buy that are essentially collections of other stocks. If you invest in Vanguard’s popular VTSAX index fund, for example, you’re buying a small amount of all publicly traded companies in the US.

This approach is much less risky than buying individual stocks because VTSAX is well-diversified. If any of the thousands of companies in the US goes out of business, it doesn’t matter to you because you only own a very tiny amount of it.

The way index funds work is simple: if the value of the index as a whole does well (the US economy in our example), the value of your index fund rises. If the value of the index as a whole does poorly, the value of your index fund drops. Simple!

How Well Do Index Funds Perform?

Let’s say you invest your money into VTSAX and now own a small part of all US companies. How much money can you expect to make?

While there’s no way to predict the future, what we can do is look at the past. By looking at the average return of the stock market since 1926 (when the first index was created), you can see that the average return of the largest US companies has been ~10% annually (before inflation).

If you were to invest in VTSAX over a long period of time, it’s historically likely that you’ll earn an average of 10% per year. And understanding that the US market averages 10% per year is exciting because if you invest a little bit of money each month into index funds, you’ll become quite wealthy.

If you plug some numbers into a compound interest calculator, you’ll see what I mean.

For example, if you invest $1,000 per month into index funds for 30 years, you’ll end up with $2,171,321.10. If you start working at 22, then by the time you’re 52, you’ll have over two million dollars: not bad!
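A small sketch of the underlying future-value arithmetic. The exact ending balance depends on the calculator's compounding and contribution assumptions, so treat the output as an approximation of the figure quoted above:

// Approximate future value of $1,000 invested at the end of every month
// for 30 years at a 10% nominal annual return, compounded monthly.
double monthlyContribution = 1_000;
double monthlyRate = 0.10 / 12;
int months = 30 * 12;

double futureValue = monthlyContribution
    * (Math.Pow(1 + monthlyRate, months) - 1) / monthlyRate;

Console.WriteLine($"{futureValue:C0}");   // roughly $2.26 million with these assumptions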

How Much Money Do I Need to Retire if I Invest in Index Funds?

Now that you know how index funds work and how much they historically earn, you might be wondering: how much money do I need to invest in index funds before I can retire?

As it turns out, there’s a simple answer to this question, but before I give you the answer, let’s talk about how this works.

Imagine you have one million dollars invested in index funds that earn an average of 10% yearly. You could theoretically sell 10% of your index funds each year and never run out of money in this scenario. Or at least, this makes sense at first glance.

Unfortunately, while it's true that the market has returned a historical average of 10% yearly, this is an average, and actual yearly returns vary significantly by year. For example, you might be up 30% one year and down 40% the next.

This unpredictability year-over-year makes it difficult to safely withdraw money each year without running out of money due to sequence of return risk.

Essentially, while it’s likely that you’ll earn 10% per year on average if you invest in a US index fund, you will likely run out of money if you sell 10% of your portfolio per year due to fluctuating returns each year.

Luckily, a lot of research has been done on this topic, and the general consensus is that if you only withdraw 4% of your investments per year, you’ll have enough money to last you a long time (a 30-year retirement). This is known as the 4% rule and is the gold standard for retirement planning.

Using the 4% rule as a baseline, you can quickly determine how much money you need to invest to retire with your desired spending.

For example, let’s say you want to retire and live off $100k per year. In this case, $100k is 4% of $2.5m, so you’ll need at least $2.5m invested to retire safely.

PRO TIP: You can easily calculate how much you need invested to retire if you simply take your desired yearly spend and multiply it by 25. For example, $40k * 25 = $1m, $100k * 25 = $2.5m, etc.

By only withdrawing 4% of your total portfolio per year, it’s historically likely that you’ll never run out of money over 30 years. Need a longer retirement? You may want to aim for a 3.5% withdrawal rate (or lower).

Should I Invest in Index Funds?

I’m a big fan of index fund investing, which is why my wife and I put 50% of our money into index funds.

Index funds are simple to purchase and sell in seconds (you can do it instantly using an investment broker like Vanguard)
Index funds have an excellent historical track record (10% average yearly returns is fantastic!)
Index funds are often tax-advantaged (they are easy to purchase through a company 401k plan, IRA, or other tax-sheltered accounts)

Why Invest in Real Estate?

Now that we’ve discussed index funds, how they work, what returns you can expect if you invest in index funds, and how much money you need to invest to retire using index funds, we can finally talk about real estate.

What Qualifies as a Real Estate Investment?

Like stocks and other types of securities, there are multiple ways to invest in real estate. I’m going to cover the most basic form of real estate investing here, but know that there are many other ways to invest in real estate that I won’t cover today due to how complex it can become.

At a basic level, investing in real estate means you’re purchasing a property: a house, condo, apartment building, piece of land, commercial building, etc.

How Do Real Estate Investors Make Money?

There are many ways to make money through investing in real estate. Again, I’m only going to cover the most straightforward ways here due to the topic’s complexities.

Let’s say you own an investment property. The typical ways you might make money from this investment are:

Renting the property out for a profit
Owning the property as its value rises over time. For example, if you purchased a house ten years ago for $100k that is worth $200k today, you've essentially "earned" $100k in profit, even if you haven't yet sold the property. This is called appreciation.

Simple, right?

What’s One Major Difference Between Index Funds and Real Estate?

One of the most significant differences between real estate investing and index fund investing is leverage.

When you invest in an index fund like VTSAX, you’re buying a little bit of the index using your own money directly. This means if you purchase $100k of index funds and earn 10% on your money, you’ll have $110k of investments.

On the other hand, real estate is often purchased using leverage (aka: bank loans). It’s common to buy an investment property and only put 20-25% of your own money into the investment while seeking a mortgage from a bank to cover the remaining 75-80%.

The benefit of using leverage is that you can stretch your money further. For example, let’s say you have $100k to invest. You could put this $100k into VTSAX or purchase one property worth $500k (20% down on a $500k property means you only need $100k as a down payment).

Imagine these two scenarios:

Scenario 1: You invest $100k in VTSAX and earn precisely 10% per year.
Scenario 2: You put a $100k down payment on a $500k property that you rent out for a profit of $500 per month after expenses (we call this cash flow), and this property appreciates at a rate of 6% per year. Also, assume that you can secure a 30-year fixed-rate loan for the remaining $400k at a 4.5% interest rate.

After ten years, in Scenario 1, you’ll have $259,374.25. Not bad! That’s a total profit of $159,374.25.

But what will you have after ten years in Scenario 2?

In Scenario 2, you’ll have:

A property whose value has increased from $500k to $895,423.85 (an increase of $395,423.85)
Cash flow of $60k
A total remaining mortgage balance of $320,357.74 (a decrease of $79,642.26)

If you add these benefits up, in Scenario 2, you’ve essentially ballooned your original $100k investment into a total gain of $535,066.11. That’s three times the gain you would have gotten had you simply invested your $100k into VTSAX!
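For anyone who wants to check the arithmetic, here is a small sketch reproducing the two scenarios above. All inputs come from the post; small rounding differences from the quoted figures are expected:

// Scenario 1: $100k in VTSAX at 10% per year for 10 years
double indexValue = 100_000 * Math.Pow(1.10, 10);                 // ~$259,374
double scenario1Gain = indexValue - 100_000;                      // ~$159,374

// Scenario 2: $500k property, $100k (20%) down, 6% appreciation,
// $500/month cash flow, $400k 30-year mortgage at 4.5%
double propertyValue = 500_000 * Math.Pow(1.06, 10);              // ~$895,424
double cashFlow = 500 * 12 * 10;                                  // $60,000

double r = 0.045 / 12;                                            // monthly mortgage rate
double payment = 400_000 * r / (1 - Math.Pow(1 + r, -360));       // ~$2,027 per month
double balance = 400_000 * Math.Pow(1 + r, 120)
               - payment * (Math.Pow(1 + r, 120) - 1) / r;        // ~$320,358 after 10 years

double scenario2Gain = (propertyValue - 500_000)                  // appreciation
                     + cashFlow                                   // rental profit
                     + (400_000 - balance);                       // principal paid down

Console.WriteLine($"Scenario 1 gain: {scenario1Gain:C0}");        // ~$159,374
Console.WriteLine($"Scenario 2 gain: {scenario2Gain:C0}");        // ~$535,066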

There are a lot of variables at play here, but you get the general idea. While investing in index funds is profitable and straightforward, if you’re willing to learn the business and put in the work, you can often make higher returns through real estate investing over the long haul.

How Difficult is Real Estate Investing?

Real estate investing is complicated. It requires a lot of knowledge, effort, and ongoing work to run a successful real estate investing operation. Among other things, you need to know:

How much a potential investment property will rent for
How much a potential investment property will appreciate
What sort of mortgage rates you can secure
What your expenses will be each month
How much property taxes will cost
How much insurance will cost
Etc.

All of the items above are variables that can dramatically impact whether or not a particular property is a good or bad investment. And this doesn’t even begin to account for the other things you need to do on an ongoing basis: manage the property, manage your accounts/taxes, follow all relevant laws, etc.

In short: investing in real estate is not simple and requires a lot of knowledge to do successfully. But, if you’re interested in running a real estate business, it can be a fun and profitable venture.

How We Invest in Real Estate

As I mentioned earlier, my wife and I split our investable assets 50⁄50 between index funds and real estate. The reason we do this is twofold:

It's easy (and safe) for us to invest money in index funds
It's hard for us to invest in real estate (it took a lot of time and research to get started), but we generally earn greater returns on our real estate investments than we do on our index investments

Our real-estate investing criteria are pretty simple.

We only purchase residential real estate that we rent out to long-term tenants. We do this because it's relatively low-risk, low-maintenance, and straightforward.
We only purchase rental properties that generate a cash-on-cash return of 8% or greater. For example, if we buy a $200k property with a $40k downpayment, we need to earn $3,200 per year in profit ($3,200 is 8% of $40k) for the deal to make sense.
We don't factor appreciation into our investment calculations as we plan to hold these rental properties long-term and never sell them. The rising value of the rental properties we acquire isn't as beneficial to us as is the cash flow. Over time, the properties pay themselves off, and once they're free and clear, we'll have a much larger monthly profit.

Why did we choose an 8% cash-on-cash return as our target metric for rental property purchases? In short, it’s because that 8% is roughly twice the safe withdrawal rate of our index funds.

I figured early on that if I was going to invest a ton of time and energy into learning about real estate investing, hunting down opportunities, etc., I’d have to make it worthwhile by at least doubling the safe withdrawal rate of our index funds. Otherwise, I could simply invest our money into VTSAX and never think about taking on extra work or risk.

Today, my wife and I own a small portfolio of single-family homes that we rent out to long-term tenants, each earning roughly 8% cash-on-cash return yearly.

Should I Invest in Stocks or Real Estate?

As you’ve seen by now, there isn’t a clear answer here. To sum it up:

If you're looking for the most straightforward path to retirement, invest your money in well-diversified index funds like VTSAX. Index funds will allow you to retire with a 4% safe withdrawal rate and slowly build your wealth over time.
If you're interested in real estate and are willing to put in the time and effort to learn about it, you can potentially make greater returns, but it's a lot of work.
Or, if you're like me, why not both? This way, you get the best of both worlds: a bit of simple, reliable index investments and a bit of riskier, more complex, and more rewarding real estate investments.

Does Music Help You Focus?

I’ve always been the sort of person who works with music in the background. Ever since I was a little kid writing code in my bedroom, I’d routinely listen to my favorite music while programming. Over the last 12 years, as my responsibilities have shifted from purely writing code to writing articles, recording videos, and participating in meetings, my habits have changed. Out of necessity, I

I’ve always been the sort of person who works with music in the background. Ever since I was a little kid writing code in my bedroom, I’d routinely listen to my favorite music while programming.

Over the last 12 years, as my responsibilities have shifted from purely writing code to writing articles, recording videos, and participating in meetings, my habits have changed. Out of necessity, I’m unable to work with music most of the time, but when I have an hour or so of uninterrupted time, I still prefer to put music on and use it to help me crank through whatever it is I’m focusing on.

However, I’ve been doing some experimentation over the last few months. My goal was to determine how much music helped me focus. I didn’t have a precise scientific way of measuring this except to track whether or not I felt my Pomodoro sessions were productive.

To keep score, I kept a simple Apple Notes file that contained a running tally of whether or not I felt my recently finished Pomodoro session was productive. And while this isn’t the most scientific way to measure, I figured it was good enough for my purposes.
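A minimal sketch of the same bookkeeping, assuming each session is logged as a (music, productive) pair rather than as a note (the log format and function name are illustrative, not part of the original experiment):

# Each entry: (listened_to_music, felt_productive)
sessions = [
    (True, False),
    (True, True),
    (False, True),
    # ... one entry per completed Pomodoro
]

def productive_rate(log, with_music):
    subset = [productive for music, productive in log if music == with_music]
    return sum(subset) / len(subset) if subset else 0.0

print(f"with music:    {productive_rate(sessions, True):.0%}")
print(f"without music: {productive_rate(sessions, False):.0%}")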

Over the last three months, I logged 120 completed Pomodoro sessions. Of those, roughly 50% (58 sessions) were completed while listening to music, and the other 50% (62 sessions) were completed without music.

To my surprise, when tallying up the results, it appears that listening to music is a distraction for me, causing me to feel like my sessions weren’t very productive. Out of the 58 Pomodoro sessions I completed while listening to music, I noted that ~20% were productive (12 sessions) vs. ~60% (37 sessions) without music.

60% vs. 20% is a significant difference, which is especially surprising since I genuinely enjoy working with music. When I started this experiment, I expected that music would make me more, not less productive.

So what’s the takeaway here? For me, it’s that despite how much I enjoy listening to music while working, it’s distracting.

Am I going to give up listening to music while trying to focus? Not necessarily. As I mentioned previously, I still love working with music. But, I’ll undoubtedly turn the music off if I’m trying to get something important done and need my time to be as productive as possible.

In the future, I’m also planning to run this experiment separately to compare the impact of instrumental vs. non-instrumental music on my productivity. I typically listen to music with lyrics (hip-hop, pop, etc.), which makes me wonder if the lyrics are distracting or just the music itself.

I’m also curious as to whether or not lyrics in a language I don’t understand would cause a similar level of distraction or not (for example, maybe I could listen to Spanish music without impacting my productivity since I don’t understand the language).

Regardless of my results, please experiment for yourself! If you’re trying to maximize productivity, you might be surprised what things are impacting your focus levels.

Saturday, 02. April 2022

Doc Searls Weblog

The Age of Optionality—and its costs

Throughout the entire history of what we call media, we have consumed its contents on producers’ schedules. When we wanted to know what was in newspapers and magazines, we waited until the latest issues showed up on newsstands, at our doors, and in our mailboxes. When we wanted to hear what was on the radio […]

Throughout the entire history of what we call media, we have consumed its contents on producers’ schedules. When we wanted to know what was in newspapers and magazines, we waited until the latest issues showed up on newsstands, at our doors, and in our mailboxes. When we wanted to hear what was on the radio or to watch what was on TV, we waited until it played on our stations’ schedules. “What’s on TV tonight?” is perhaps the all-time most-uttered question about a medium. Wanting the answers is what made TV Guide required reading in most American households.

But no more. Because we have entered the Age of Optionality. We read, listen to, and watch the media we choose, whenever we please. Podcasts, streams, and “over the top” (OTT) on-demand subscription services are replacing old-fashioned broadcasting. Online publishing is now more synchronous with readers’ preferences than with producers’ schedules.

The graph above illustrates what happened and when, though I’m sure the flat line at the right end is some kind of error on Google’s part. Still, the message is clear: what’s on and what’s in have become anachronisms.

The centers of our cultures have been held for centuries by our media. Those centers held in large part because they came on a rhythm, a beat, to which we all danced and on which we all depended. But now those centers are threatened or gone, as media have proliferated and morphed into forms that feed our attention through the flat rectangles we carry in our pockets and purses, or mount like large art pieces on walls or tabletops at home. All of these rectangles maximize optionality to degrees barely imaginable in prior ages and their media environments: vocal, scribal, printed, broadcast.

We are now digital beings. With new media overlords.

The Digital Markets Act in Europe calls these overlords “gatekeepers.” The gates they keep are at entrances to vast private walled gardens enclosing whole cultures and economies. Bruce Schneier calls these gardens feudal systems in which we are all serfs.

To each of these duchies, territories, fiefs, and countries, we are like cattle from which personal data is extracted and processed as commodities. Purposes differ: Amazon, Apple, Facebook, Google, Twitter, and our phone and cable companies each use our personal data in different ways. Some of those ways do benefit us. But our agency over how personal data is extracted and used is neither large nor independent of these gatekeepers. Nor do we have much if any control over what countless customers of gatekeepers do with personal data they are given or sold.

The cornucopia of options we have over the media goods we consume in these gardens somatizes us while also masking the extreme degree to which these private gatekeepers have enclosed the Internet’s public commons, and how algorithmic optimization of engagement at all costs has made us into enemy tribes. Ignorance of this change and its costs is the darkness in which democracy dies.

Shoshana Zuboff calls this development The Coup We Are Not Talking About. The subhead of that essay makes the choice clear: We can have democracy, or we can have a surveillance society, but we cannot have both. Her book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, gave us a name for what we’re up against. A bestseller, it is now published in twenty-six languages. But our collective oblivity is also massive.

We plan to relieve some of that oblivity by having Shoshana lead the final salon in our Beyond the Web series at Indiana University’s Ostrom Workshop. To prepare for that, Joyce and I spoke with Shoshana for more than an hour and a half last night, and are excited about her optimism toward restoring the public commons and invigorating democracy in our still-new digital age. This should be an extremely leveraged way to spend an hour or more on April 11, starting at 2PM Eastern time. And it’s free.

Use this link to add the salon to your calendar and join in when it starts.

Or, if you’re in Bloomington, come to the Workshop and attend in person. We’re at 513 North Park Avenue.


Friday, 01. April 2022

reb00ted

What can we do with a DAO that cannot be done with other organizational forms?

Decentralized Autonomous Organizations (DAOs) are something new enabled by crypto and blockchain technologies. We are only at the beginning of understanding what they can do and what not. So I asked my social network: “What can we do with a DAO that cannot be done with other organizational forms?” Here is a selected set of responses, mostly from this Twitter thread and this Facebook thread. Th

Decentralized Autonomous Organizations (DAOs) are something new enabled by crypto and blockchain technologies. We are only at the beginning of understanding what they can do and what not.

So I asked my social network: “What can we do with a DAO that cannot be done with other organizational forms?”

Here is a selected set of responses, mostly from this Twitter thread and this Facebook thread. They are both public, so I’m attributing:

Kurt Laitner: “They enable dynamic equity and dynamic governance”

Vittorio Bertocci: “Be robbed without any form of recourse, appeal or protection? 😛 I kid, I kid 🙂”

Dan Lyke: “…they create a gameable system that has less recourse to the law than a traditional system … [but] the immutable public ledger of all transactions may provide a better audit trail”

David Mason: “Lock yourself into a bad place without human sensibility to bail you out.”

Adam Lake: “We already have cooperatives, what is the value add?”

Phill Hallam-Baker: “Rob people who don’t understand that the person who creates them controls them absolutely.”

Jean Russell: “Act like you have a bank account as a group regardless of the jurisdictions of the members.”

David Berlind: “For now (things are changing), a DAO can fill a gap in international business law.”

Follow the links above, there are more details in the discussions.

I conclude: there is no consensus whatsoever :-) That may be because there is such a large range of setups under that term today.

Wednesday, 30. March 2022

Doc Searls Weblog

Exitings

A hazard of aging well is outliving friends and other people you love. For example, two of the three in the photo above. It dates from early 1978, when Hodskins Simone & Searls, a new ad agency, was born in Durham, North Carolina. Specifically, at 602 West Chapel Hill Street. Click on that link and […]

A hazard of aging well is outliving friends and other people you love. For example, two of the three in the photo above. It dates from early 1978, when Hodskins Simone & Searls, a new ad agency, was born in Durham, North Carolina. Specifically, at 602 West Chapel Hill Street. Click on that link and you’ll see the outside of our building. Perhaps you can imagine the scene above behind the left front window, because that’s where we stood, in bright diffused southern light. Left to right are David Hodskins, Ray Simone, and me.

That scene, and the rest of my life, were bent toward all their possibilities by a phone call I made to Ray one day in 1976, when I was working as an occasionally employed journalist, advertising guy, comedy writer, radio voice, and laborer: anything that paid, plus plenty that didn’t. I didn’t yet know Ray personally, but I loved the comics he drew, and I wanted his art for an ad I had written for a local audio shop. So I called him at the “multiple media studio” where he was employed at the time. Before we got down to business, however, he also got into an off-phone conversation with another person in his office. After Ray told the other person he was on the phone with Doctor Dave (the comic radio persona by which I was known around those parts back then), the other person told Ray to book lunch with me at a restaurant downtown.

I got there first, so I was sitting down when Ray walked in with a guy who looked like an idealized version of me. Not just better looking, but radiating charisma and confidence. This was the other person who worked with Ray, and who told Ray to propose the lunch. That’s how I met David Hodskins, who used the lunch to recruit me as a copywriter for the multiple media studio. I said yes, and after a few months of that, David decided the three of us should start Hodskins Simone & Searls. Four years and as many locations later, we occupied a whole building in Raleigh, had dozens of people working for us, and were the top ad agency in the state specializing in tech and broadcasting.

A couple years after that we seemed to be hitting a ceiling as the alpha tech agency in a region still decades away from becoming the “other Silicon Valley” it wanted to be. So, after one of our clients said “Y’know, guys, there’s more action on one street in Sunnyvale than there is in all of North Carolina,” David flew out to scout Silicon Valley itself. That resulted in a tiny satellite office in Palo Alto, where David prospected for business while running the Raleigh headquarters by phone and fax. After a year of doing that, David returned, convened a dinner with all the agency managers, and said we’d have to close Palo Alto if he didn’t get some help out there. This was in August 1985.

To my surprise, I heard myself volunteering duty out there, even though a year earlier when David asked me to join him there I had said no. I’m not even sure why I volunteered this time. I loved North Carolina, had many friends there, and was well established as a figure in the community, mostly thanks to my Doctor Dave stuff. I said I just needed to make sure my kids, then 15 and 12, wanted to go. (I was essentially a single dad at the time.) After they said yes, we flew out and spent a week checking out what was for me an extremely exotic place. But the kids fell instantly in love with it. So I rented a house near downtown Palo Alto, registered the kids in Palo Alto junior and high schools, left them there with David, flew back to North Carolina, gave away everything that wouldn’t fit in a small U-Haul trailer, and towed my life west in my new 145-horse ’85 Camry sedan with a stick shift. With my Mom along for company, we crossed the country in just four days.

The business situation wasn’t ideal. Silicon Valley was in a slump at that time. “For Lease” banners hung over the windows of new buildings all over the place. Commodore, Atari, and other temporary giants in the new PC industry were going down. Apple, despite the novelty of its new Macintosh computer, was in trouble. And ad agencies—more than 200 of them—were fighting for every possible account, new and old. Worse, except for David, me, and one assistant, our whole staff was three time zones east of there, and the Internet that we know today was decades off in the future. But we bluffed our way into the running for two of the biggest accounts in review.

As we kept advancing in playoffs for those two accounts, the North Carolina office was treading water and funds were running thin. In our final pitches, we were also up against the same incumbent agency: one that, at that time, was by far the biggest and best in the valley. It was also discouraging that this agency did enviably good work. So we were not the way to bet. The evening before our last pitch, David told Ray and me that we needed to win both accounts or retreat back to North Carolina. I told him that I was staying, regardless, because I belonged there, and so did my kids, one of whom was suddenly an academic achiever and the other a surfer who totally looked the part. We had gone native. David reached across the table to shake my hand. That was his way of saying both “Thanks” and “I respect that.”

Then we won both accounts, got a mountain of publicity for having come out of nowhere and kicked ass, and our Palo Alto office quickly outgrew our Raleigh headquarters. Within a year we had closed Raleigh and were on our way to becoming one of the top tech agencies in Silicon Valley. None of this was easy, and all of it required maximum tenacity, coordination, and smarts, all of which were embodied in, and exemplified by, David Hodskins. He was wickedly smart, tough, creative, and entrepreneurial. He also had a Steve Jobs-like sense of taste and drive for perfection: perfect for leading a small and rapidly growing company. While, like Jobs, he was hard-driving and often overbearing (sometimes driving Ray and me nuts) he was also great fun to work and hang out with, and one of the best friends I’ve ever had.

One of our bondings was around basketball. David was a severely loyal Duke alumnus, and (as an Iron Duke) grandfathered with two season tickets every year to games at Duke’s famous Cameron Indoor Stadium. I became a Duke fan as his date for dozens of games there. When we moved to Palo Alto, he and I got our basketball fix through season tickets to the Golden State Warriors. (In the late ’80s, this was still affordable for normal people.) At one point, we even came close to winning the Warriors’ advertising business.

In the early 90s, I forked my own marketing consulting business out of HS&S, while remaining a partner with the firm until it was acquired by Publicis in 1998. By then I had also shifted back into journalism as an editor for Linux Journal, while also starting to blog. (Which I’m still doing right here.) David, Ray, and I remained good friends, however, while all three of us got married (Ray), remarried (David and I), and had California kids. In fact, I met my wife with Ray’s help in 1990.

Alas, Ray died of lung cancer in 2011, at just 63. I remember him in this post here, and every day of my life.

On November 13 of last year, my wife and I attended the first game of the season for the Indiana University men’s basketball team: the Hoosiers. David and I had rooted against the same Hoosiers countless times when they played Duke and other North Carolina teams. While at the game, I took a photo of the scene with my phone and sent it in an email to David, saying “Guess where I am?” He wrote back, “Looks suspiciously like Assembly Hall in Bloomington, Indiana, where liberals go to die. WTF are you doing there?”

I explained that Joyce and I were now visiting scholars at IU. He wrote back,

Mr. visiting scholar,

Recuperating from a one-week visit by (a friend) and his missus, before heading to Maui for T’giving week.

The unwelcome news is that I’m battling health issues on several fronts: GERD, Sleep Apnea, Chronic Fatigue, and severe abdominal pain. Getting my stomach scoped when I’m back from Maui, and hoping it isn’t stomach cancer.

Actual retirement is in sight… at the end of 2022. (Wife) hangs it up in February, 2024, so we’ll kick our travel plans into higher gear, assuming I’m still alive.

Already sick of hearing that coach K has “5 national titles, blah, blah, blah” but excited to see Paulo Banchero this year, and to see Jon Scheyer take the reins next year. Check out the drone work in this promotional video: https://youtu.be/Dp1dEadccGQ

Thanks for checking in, and glad to hear you’re keeping your brain(s) active. Please don’t become a Hoosier fan.

d

David’s ailment turned out to be ALS. After a rapid decline too awful to describe, he died last week, on March 22nd. Two days earlier I sent him a video telling him that, among other things, he was the brother I never had and a massive influence on many of the lives that spun through his orbits. Unable to speak, eat or breathe on his own, he was at least able to smile at some of what I told him, and mouth “Wow” at the end.

And now there is just one left: the oldest and least athletic of us three. (Ray was a natural at every sport he picked up and won medals in fencing. David played varsity basketball in high school. Best I ever got at that game was not being chosen last for my college dorm’s second floor south intramural team.)

I have much more to think, say, and write about David, especially since he was a source of wisdom on many subjects. But it’s hard because his being gone is so out of character.

But not completely, I suppose. Hemingway:

The world breaks everyone and afterward many are strong at the broken places. But those that will not break it kills. It kills the very good and the very gentle and the very brave impartially. If you are none of these you can be sure it will kill you too but there will be no special hurry.

My joke about aging is that I know I’m in the exit line, but I let others cut in. I just wish this time it hadn’t been David.

But the line does keep moving, while the world holds the door.

Tuesday, 29. March 2022

Phil Windley's Technometria

The Ukrainian War, PKI, and Censorship

Summary: PKI has created a global trust framework for the web. But the war in Ukraine has shone a light on its weaknesses. Hierarchies are not good architectures for building robust, trustworthy, and stable digital systems. Each semester I have students in my distributed systems class read Rainbow's End, a science fiction book by Vernor Vinge set in the near future. I think it help

Summary: PKI has created a global trust framework for the web. But the war in Ukraine has shone a light on its weaknesses. Hierarchies are not good architectures for building robust, trustworthy, and stable digital systems.

Each semester I have students in my distributed systems class read Rainbow's End, a science fiction book by Vernor Vinge set in the near future. I think it helps them imagine a world with vastly distributed computing infrastructure that is not as decentralized as it could be and think about the problems that can cause. One of the plot points involves using certificate authorities (CA) for censorship.

To review briefly, certificate authorities are key players in public key infrastructure (PKI) and are an example of a core internet service that is distributed and hierarchical. Whether your browser trusts the certificate my web server returns depends on whether it trusts the certificate used to sign it, and so on up the certificate chain to the root certificate. Root certificates are held in browsers or operating systems. If the root certificate isn't known to the system, then it's not trusted. Each certificate might be controlled by a different organization (i.e. they hold the private key used to sign it), but they all depend on confidence in the root. Take out the root and the entire chain collapses.
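The trust decision is easy to watch from a client's point of view. A minimal sketch in Python, using windley.com as the example host: the TLS handshake below only succeeds if every certificate in the returned chain links back to a root already present in the local trust store; an expired certificate or an unknown root surfaces as SSLCertVerificationError.

import socket
import ssl

host = "windley.com"
ctx = ssl.create_default_context()  # loads the locally trusted root certificates

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # only available because chain validation succeeded
            print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
            print("expires:", cert["notAfter"])
except ssl.SSLCertVerificationError as err:
    # An expired leaf certificate or an untrusted root both land here.
    print("chain not trusted:", err)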

Certificate validation path for windley.com

The war in Ukraine has made hypothetical worries about the robustness of the PKI all too real. Because of the sanctions imposed on Russia, web sites inside Russia can't pay foreign CAs to renew their certificates. Modern browsers don't just shrug this off, but issue warnings and sometimes even block access to sites with expired certificates. So, the sanctions threaten to cripple the Russian web.

In response, Russia has established its own root certificate authority (see also this from KeyFactor). This is not merely a homegrown CA, located in Russia, but a state-operated CA, subject to the whims and will of the Russian government (specifically the Ministry of Digital Development).

This is interesting from several perspectives. First, from a censorship perspective, it means that Russia can effectively turn off web sites by revoking their certificates, allowing the state to censor web sites for any reason they see fit. Hierarchical networks are especially vulnerable to censorship. And while we might view state-controlled CAs as a specific problem, any CA could be a point of censorship. Recall that while SWIFT is a private company, it is located in Belgium and subject to Belgian and European law. Once Belgium decided to sanction Russia, SWIFT had to go along. Similarly, a government could pass a law mandating the revocation of any certificate for a Russian company and CAs subject to their legal jurisdiction would go along.

From the perspective of users, it's also a problem. Only two browsers support the root certificate of the new Russian CA: the Russian-based Yandex and open-source Atom. I don't think it's likely that Chrome, Safari, Firefox, Brave, Edge, and others will be adding the new Russian root CA anytime soon. And while you can add certificates manually, most people will find that difficult.

Lastly, it's a problem for the Russian economy. The new Russian CA is a massive single point of failure, even if the Russian government doesn't use it to censor. Anonymous, state actors, and other groups can target the new CA and bring large swaths of the Russian internet down. So, state-controlled and -mandated CAs are a danger to the economy they serve. Russia's actions in response to the exigency of the war are understandable, but I suspect it won't go back even after the war ends. Dependence on a single state-run CA is a problem for Russia and its citizens.

State-controlled CAs further balkanize the internet. They put web sites at risk of censorship. They make life difficult for users. They create centralized services that threaten economic stability and vibrancy. In general, hierarchies are not good architectures for building robust, trustworthy, and stable digital systems. PKI has allowed us to create a global trust framework for the web. But the war in Ukraine has shone a light on its weaknesses. We should heed this warning to engineer more decentralized infrastructures that give us confidence in our digital communications.

Photo Credits:

- Coat of Arms of Russia from Motorolla (Pixabay)
- HTTPS Icon from Sean MacEntee (CC BY 2.0)

Tags: pki identity censorship web decentralization web3

Monday, 28. March 2022

Damien Bod

Implementing OAuth2 APP to APP security using Azure AD from a Web APP

This article shows how to implement an API service and client in separate ASP.NET Core applications which are secured using Azure application permissions implemented in an Azure App registration. The OAuth client credentials flow is used to get an access token to access the API. Microsoft.Identity.Web is used to implement the client credentials (CC) flow. […]

This article shows how to implement an API service and client in separate ASP.NET Core applications which are secured using Azure application permissions implemented in an Azure App registration. The OAuth client credentials flow is used to get an access token to access the API. Microsoft.Identity.Web is used to implement the client credentials (CC) flow. Microsoft.Identity.Client can also be used to implement this flow, or any OAuth client implementation.

Code: BlazorWithApis

The OAuth client credentials flow can be used to access services where no user is involved and the client is trusted. This flow is used in many shapes and forms in Azure. The client application requires some type of secret to get an access token for the secured API. In Azure, there are different ways of implementing this, and they vary in both name and implementation. The main difference between most of them is how the secret is acquired and where it is stored. The secret can be a certificate or an Azure App registration secret, and it can be stored in Azure Key Vault. Managed identities provide another way of implementing app-to-app secured access between Azure services and can be used to acquire access tokens using the Azure SDK. Certificate authentication can also be used to secure this flow. This can be a little confusing, but as a solution architect, you need to know when and where each of these should be used, and when not.
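Underneath whichever client library is used, the client credentials flow is a single HTTP POST to the tenant's token endpoint. The article's samples use Microsoft.Identity.Web in C#; the sketch below only shows the underlying exchange, and the tenant ID, client ID, secret, and application ID URI are placeholders you would take from your own App registrations.

import requests

tenant_id = "<your-tenant-id>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

resp = requests.post(token_url, data={
    "grant_type": "client_credentials",
    "client_id": "<client-app-registration-id>",
    "client_secret": "<client-secret-from-key-vault>",
    # /.default requests the application permissions consented on the API registration
    "scope": "api://<api-application-id-uri>/.default",
})
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Send the token as a bearer token to the protected API.
# verify=False only because the sample API runs on a local dev certificate.
api_resp = requests.get("https://localhost:44324/ApiForServiceData",
                        headers={"Authorization": f"Bearer {access_token}"},
                        verify=False)
print(api_resp.status_code)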

Delegated user access tokens or application client credential tokens

As a general rule, always use delegated user access tokens rather than application access tokens if possible, because permissions can then be reduced to what each user actually needs. To acquire a delegated user access token, an identity must log in somewhere using a UI; a user interaction flow is required for this. The delegated user access token is requested using a scope for that identity. In Azure AD, the On-Behalf-Of (OBO) flow can also be used to acquire further delegated user access tokens for downstream APIs. This is not possible in Azure AD B2C.

Scopes or Roles Permissions

In Azure, scope permissions are used for delegated user access, not for application permissions. App roles can be used for application and/or delegated access, but roles can only be defined in Azure AD App registrations, not in Azure AD B2C App registrations. So to define application app roles, you need an Azure AD App registration. This is very Azure specific and has nothing to do with the security standards. Note that you still request a scope value in both the delegated and the application flows, but with the client credentials flow the token carries app role permissions rather than scope permissions. More information about this can be found in the Microsoft docs:

Protected web API: Verify scopes and app roles

By using application security permissions, you give the client application permissions for whatever is allowed in the service. No user is involved. This cannot be reduced for different users, only for different client applications.

Azure App Registration setup

The hardest part of implementing an API protected using application permissions is knowing how and where to set up the Azure App registration. The App registration needs to be created as an Azure AD App registration and not an Azure AD B2C App registration, even if you are using an Azure AD B2C tenant. The App registration also needs an application ID URI; make sure this is created.

An Azure App Role can be created and can be validated in the access token.

The app role is defined as an application type. I named the role access_as_application.

The role can be added as a permission and admin consent can be given. This will be included in tokens issued for this Azure app registration.

API setup

The API is set up using Microsoft.Identity.Web. The AddMicrosoftIdentityWebApiAuthentication method adds the OAuth token validation using the configuration from the app settings. I created an authorization policy to implement the authorization, which is applied to the controller or as a global filter. I think this is the best approach, as it is the standard way in ASP.NET Core, and you should avoid using the Azure claims directly in the business logic of the application. Microsoft.Identity.Web also provides some Azure-specific helper methods which check consent or validate the scope, etc. It is important that only access tokens intended for this API are accepted and all other access tokens are rejected.

services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();
services.AddMicrosoftIdentityWebApiAuthentication(Configuration);
services.AddControllers();

services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());

        // Validate id of the application for which the token was created,
        // in this case the UI application
        validateAccessTokenPolicy.RequireClaim("azp", "2b50a014-f353-4c10-aace-024f19a55569");

        // Only allow tokens from clients that authenticated with a client secret
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // "azpacr" indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});

Using Microsoft.Identity.Web

One way of implementing a client is to use Microsoft.Identity.Web. The user of the client application signs in using the OpenID Connect code flow with a secret (plus some Azure specifics), and once authenticated, the application can request an application token using the ITokenAcquisition interface and the GetAccessTokenForAppAsync method. The scope definition uses the /.default value together with the application ID URI from the Azure App registration. This uses the client credentials flow. If the correct parameters are used, an access token is returned and can be used to access the API.

public class ServiceApiClientService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;

    public ServiceApiClientService(
        ITokenAcquisition tokenAcquisition,
        IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
    }

    public async Task<IEnumerable<string>?> GetApiDataAsync()
    {
        var client = _clientFactory.CreateClient();

        // CC flow, "access_as_application" (App Role in the Azure AD App registration)
        var scope = "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default";
        var accessToken = await _tokenAcquisition.GetAccessTokenForAppAsync(scope);

        client.BaseAddress = new Uri("https://localhost:44324");
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("ApiForServiceData");
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var payload = await JsonSerializer.DeserializeAsync<List<string>>(stream);
            return payload;
        }

        throw new ApplicationException("oh no...");
    }
}

Notes

- When using client applications and the client credentials flow, it is important to only share the secret or certificate with a trusted client. The client should store it in a safe way so that it does not get stolen. If I own the client, I deploy it to Azure (if possible) and use a Key Vault to persist the certificates or secrets, with a managed identity used to access the Key Vault. This way, no secret is stored in an insecure way.
- Try to use delegated access tokens rather than application tokens. A delegated access token is issued to the user and the application, so the authorization can be reduced. This is only possible when a user interaction flow is used to authenticate.
- Azure App registrations use scopes for delegated access tokens, and roles can be used for application permissions.
- With app-to-app flows, other ways also exist for securing this access.

Links

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles?tabs=aspnetcore

Friday, 25. March 2022

Doc Searls Weblog

An arsonoma

While walking past this scene on my way to the subway in New York last week, I saw that a woman was emptying out what hadn’t burned from this former car. Being a curious extrovert, I paused to ask her about it. The conversation, best I recall: “This your car?” “Yeah.” “I’m sorry. What happened?” […]

While walking past this scene on my way to the subway in New York last week, I saw that a woman was emptying out what hadn’t burned from this former car. Being a curious extrovert, I paused to ask her about it. The conversation, best I recall:

“This your car?”

“Yeah.”

“I’m sorry. What happened?”

“Somebody around here sets fire to bags of garbage*. One spread to the car.”

“Any suspects?”

“There are surveillance cameras on the building.” She gestured upward toward two of them.

“Did they see anything?”

“They never do.”

So there you have it. In medicine they call this kind of thing a fascinoma. Perhaps in civic life we should call this an arsonoma. Or, in law enforcement, a felonoma.

*In New York City, we now put out garbage and recycling in curbside bags.

Wednesday, 23. March 2022

MyDigitalFootprint

Will decision making improve if we understand the bias in the decision making unit?

As a human I know we all have biases, and we all have different biases. We expose certain biases based on context, time, and people. We know that bias forms because of experience, and we are sure that social context reinforces perceived inconstancy.  Bias is like a mirror and can show our good and bad sides. As a director, you have to have experience before taking on the role, even as a fou

As a human I know we all have biases, and we all have different biases. We expose certain biases based on context, time, and people. We know that bias forms because of experience, and we are sure that social context reinforces perceived inconstancy.  Bias is like a mirror and can show our good and bad sides.

As a director, you have to have experience before taking on the role, even as a founder director. This thought-piece asks whether we know where our business biases start from and what direction of travel they create. Business bias is the bias you have right now that affects your choices, judgment and decision making. Business bias is something that our data cannot tell us. Data can tell me whether your incentive removes choice or aligns with an outcome.

At the most superficial level, we know that the expectations of board members drive decisions.  The decisions we take link to incentives, rewards and motivations and our shared values. 

If we unpack this simple model, we can follow the blue arrows in the diagram below, which say that your expectations build shared values, which focus/highlight the rewards and motivations we want as a group. These, in turn, drive new expectations.

However, we could equally follow the orange arrows and observe that expectations search for and align with the rewards and motivations we are given; these expose our shared values, which create new expectations for us.



Whilst individual bias is complex, board or group bias adds an element of continuous dynamic change. We have observed and been taught this through the “forming, storming, norming, performing” model of group development first proposed by Bruce Tuckman in 1965, who said that these phases are all necessary and inevitable for a team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results.


The observation here is that whilst we might all follow the Tuckman ideal of “time” in terms of the process of getting to performing, for which there is lots of supporting data, his model ignores the process of self-discovery we pass through during each phase. It assumes that we align during the storming (conflicts and tensions) phase but ignores that we fundamentally have different approaches. Do you follow the blue or the orange route, and where did you start from?

This is never more evident than when you get a “board with mixed experience”: in this case, the diversity of experience spans a founder, a family business, and a promoted leader. The reason is that if you add their starting positions to the map, we tend to find they start from different biased positions and may be travelling in different directions. Thank you to Claudia Heimer for stimulating this thought. The storming phase may align the majority around the team but will not change the underlying ideals and biases in the individuals, which means we don’t expose the paradoxes in decision making.


What does this all mean? As CDOs, we are tasked with finding data to support decisions. Often leadership will not follow the data, and we are left with questions. Equally, some leaders blindly follow the data without questioning it. Maybe it is time to collect smaller data at the board level to uncover how we work and expose the bias in our decision making.





Monday, 21. March 2022

Heather Vescent

Beyond the Metaverse Hype

Seven Reflections Photo by Harry Quan on Unsplash On March 11, 2022, I was a panelist on The Metaverse: The Emperor’s New Clothes panel at the Vancouver International Privacy & Security Summit’s panel. Nik Badminton set the scene and led a discussion with myself, James Hursthouse and Kharis O’Connell. Here are seven reflections. Games are a playful way to explore who we are, to process
Seven Reflections. (Photo by Harry Quan on Unsplash)

On March 11, 2022, I was a panelist on The Metaverse: The Emperor’s New Clothes panel at the Vancouver International Privacy & Security Summit’s panel. Nik Badminton set the scene and led a discussion with myself, James Hursthouse and Kharis O’Connell. Here are seven reflections.

1. Games are a playful way to explore who we are, to process and interact with people in a way we can’t do IRL. Games are a way to try on other identities, to create or adjust our mental map of the world.
2. Companies won’t protect me. I’m concerned we are not fully aware of the data that can be tracked with VR hardware. From a quantified self perspective, I would love to know more information about myself to be a better human; but I don’t trust companies. Companies will weaponize any scrap of data to manipulate you and I into buying something (advertising), and even believing something that isn’t true (disinformation).
3. Privacy for all. We need to shift thinking around privacy and security. It’s not something we each should individually have to fight for — for one of us to have privacy, all of us must have privacy. I wrote some longer thoughts in this article.
4. Capitalism needs Commons. Capitalism can’t exist without a commons to exploit. And commons will dry up if they are not replenished or created anew. So we need to support the continuity and creation of commons. Governments traditionally are in the role of protecting commons. But people can come together to create common technological languages, like technology standards to enable interoperable technology “rails” that pave the way for an open marketplace.
5. We need new business models. The point of a business model is profit first. This bias has created the current set of problems. In order to solve the world’s problems, we must wean ourselves off profit as the primary objective. I’m not saying that making money isn’t important, it is. But profit at all costs is what has got us into the current set of world problems.
6. Appreciate the past. I’m worried too much knowledge about how we’ve done things in the past is being lost. But not everything needs to go into the future. Identify what has worked and keep doing it. Identify what hasn’t worked and iterate to improve on it. This is how you help build on the past and contribute to the future.
7. Things will fail. There is a lot of energy (and money) in the Metaverse, and I don’t see it going away. That said, there will be failures. If the experimentation fails, is that so bad? In order to understand what is possible, we have to venture a bit into the realm of what’s impossible.

Watch the whole video for the thought-provoking conversation.

Thank you to Nik, Kharis, James and everyone at the Vancouver International Privacy & Security Summit!

Friday, 11. March 2022

@_Nat Zone

東日本大震災の記憶ー2011年3月11日のツイート (Memories of the Great East Japan Earthquake: My Tweets of March 11, 2011)

Today marks 11 years since the Great East Japan Earthquake. So that the memory does not fade… The post 東日本大震災の記憶ー2011年3月11日のツイート first appeared on @_Nat Zone.

Today marks 11 years since the Great East Japan Earthquake. So that the memory does not fade, I am pasting my tweets from that day below, in their original chronological order. The memories come back in real time. Both the Japanese and the English tweets are included.

日本語のツイート 英語のツイート 日本語のツイート @kthrtty STAATSKAPELLE BERLIN もなかなか。 posted at 00:35:28 無常社会の例。現代のジャンバルジャン RT @masanork: 副業で続けてたんだったらともかく、辞めさせる必要ないと思うんだけど / 科学の先生は「大ポルノ女優」! 生徒がビデオ見つけて大騒動に(夕刊フジ) htn.to/t8n8AJ posted at 09:50:16 えーと。 RT @47news: ウイルス作成に罰則 関連法案を閣議決定 bit.ly/dFndIC posted at 09:51:53 地震だ! posted at 14:48:22 緊急地震速報。30秒以内に大きな揺れが来ます。 posted at 14:48:56 ビルがバキバキ言っている。出口は確保されている。 posted at 14:51:13 エレベーターは非常停止中。 posted at 14:52:59 余震なう。縦揺れ。 posted at 15:17:58 Tsunami 10m H. posted at 15:26:33 九段会館ホール天井崩落600人巻き込み。けが人多数。 posted at 15:49:02 横浜駅前ボーリング場天井崩落。10人が下敷き。神奈川県庁外壁剥がれ落ち。 posted at 15:50:25 RT @motoyaKITO: これやばいぞ RT @OsakaUp: どなたか、助けてあげて下さい!東京都台東区花川戸1-11-7 ギークハウス浅草 301号RT @itkz 地震が起きた時、社内サーバールームにいたのだが、ラックが倒壊した。 … posted at 16:25:52 汐留シティーセンター、津波対策のために地下2階出入口封鎖。出入りには1F、地下1Fを利用のこと。 posted at 16:50:24 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:27 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:32:27 「訂正します。店舗の営業を終了しました。」そりゃそうだよねw RT @_nat: 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:42:42 another shake coming in a minute. posted at 17:44:03 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:53 本当に?津波は川も上って来るはずだけと大丈夫?安全な場所で待機が基本のはずだけど。 RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:07:47 政府の17:40の指示は、待機。RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:09:21 Finally could get in touch with my daughter. posted at 18:32:37 RT @hitoshi: 【帰宅困難の方】毛布まではさすがに用意できませんし、ゆっくり寝るようなスペースは取れないかもしれませんが、店長が泊まることになっていますので、避難してきた方は明日の朝まで滞在可能です。豚組しゃぶ庵 港区六本木7-5-11-2F posted at 18:54:42 RT @UstreamTech_JP: このたびの地震災害報道に関して、NHK様より、放送をUSTREAM上で再配信をすることについて許諾いただきました。 posted at 18:57:00 RT @oohamazaki: 【東京23区内にいる帰宅難民へ】避難場所を公開しているところを可能なかぎりGoogle Maps でまとめました。リアルタイム更新します!bit.ly/tokyohinan posted at 19:51:50 食料班が帰還 posted at 19:53:49 @night_in_tunisi ありがとうございます! posted at 19:54:45 なんと。 RT @hiroyoshi: 霞ヶ関の各庁舎には講堂がある。なぜ帰宅難民に開放しない? posted at 20:29:22 こりゃぁ、都と国とで、大分対応が分かれるなぁ。 posted at 20:50:31 RT @fu4: RT @kazu_fujisawa: 4時ぐらいの携帯メールが今ごろたくさん届いた。TwitterとGmailだけ安定稼働したな。クラウドの信頼性は専用回線より劣るというのは、嘘だという事が判明した。 posted at 20:59:35 チリの友人からその旨連絡ありました。 RT @Y_Kaneko: チリは既に警戒していると聞きました。 RT @marinepolaris: ハワイや米国西海岸、南米チリペルーの在留邦人に津波の情報をお願いします。到達確実なので。 posted at 21:50:11 @iglazer Thanks. Yes, they are fine. posted at 22:10:17 チリ政府も支援体制を整えたそうです。 @trinitynyc posted at 22:12:35 NHK 被災人の知恵 www.nhk.or.jp/hisaito2/chie/… posted at 22:16:27 市原、五井のプラント火災、陸上からは近づけず。塩釜市石油コンビナートで大規模な火災。爆発も。 posted at 22:19:21 東京都、都立高校、すべて開放へ。受け入れ準備中。 posted at 22:20:55 RT @inosenaoki: 都営新宿線は21時45分に全線再開。他の都営地下鉄はすでに再開。ただし本数はまだ少ない。 posted at 22:22:36 RT @fujita_nzm: 【お台場最新情報】台場駅すぐの「ホテル日航東京」さんでは温かいコーンスープと、冷たいウーロン茶の無料サービスが始まり、喝采を浴びています。みなさん落ち着きを取り戻し、疲れて寝る方も増えてきました。 posted at 22:24:07 For earthquake info in English/Chinese etc., tune to 963 for NHK Radio. posted at 22:25:20 RT @genwat: 福島原発は報道されているとおりです。 電源車がつけばよし、つかなければ予想は難しいです。一気にメルトダウンというものではありません。デマにまぎらわされず、推移を見守りましょう。BWR=沸騰水型軽水炉なので、汚染黒鉛を吹いたりするタイプではありません posted at 22:26:25 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqaj posted at 22:30:20 [22:39:52] =nat: 宮城県警察本部:仙台市若林区荒浜で200人~300人の遺体が見つかった。 22:40 posted at 22:41:42 RT @tokyomx: 鉄道情報。本日の運転を終日見合わせを決めたのは次のとおり。JR東日本、ゆりかもめ、東武伊勢崎線、東武東上線、京王電鉄、京成電鉄。(現在情報です) posted at 22:44:05 @mayumine よかった! posted at 22:50:58 I’m at 都営浅草線 新橋駅 (新橋2-21-1, 港区) 4sq.com/hvEZ7Z posted at 23:39:48 @ash7 汐留はだめ。浅草線はOk posted at 23:44:31 英語のツイート Big Earthquake in Japan right now. 
posted at 14:54:37 Earthquake Intensity in Japan. ow.ly/i/921g posted at 14:59:14 All the trains in Tokyo are stopped. posted at 15:08:32 Still Shaking. posted at 15:08:48 It is one of the biggest shake that Japan had. @shita posted at 15:13:41 Tsunami 10m H. posted at 15:26:33 90min past the shake and it is still shaking in Tokyo. posted at 16:25:50 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:28 another shake coming in a minute. posted at 17:44:03 Well, it is 8.8. RT @judico: OMG, 7.9 in Japan. Be safe @_nat_en! #earthquakes posted at 17:48:43 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:54 Now it is corrected to be 8.8. RT @domcat: Earthquake in Japan. Richter Scale 8.4. (via @_nat_en) posted at 17:54:26 Finally could get in touch with my daughter. posted at 18:32:38 @rachelmarbus Thanks! posted at 18:36:40 Fukushima Nuclear Power Plant – If we can re-install power for the cooling system within 8 hours, it will be ok. #earthquakes posted at 18:39:30 @helena_arellano We still have 7 hours to install power for the cooling system. posted at 19:32:26 Tsunami is approaching Hawaii now. posted at 22:21:37 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqam posted at 22:30:20 According to the Miyagi Policy, 200-300 bodies found in the Arahama beach. posted at 22:43:00 The post 東日本大震災の記憶ー2011年3月11日のツイート first appeared on @_Nat Zone.

Wednesday, 09. March 2022

Here's Tom with the Weather

C. Wright Mills and the Battalion

On Monday, there were a few people in my Twitter feed sharing Texas A&M’s Battalion article about The Rudder Association. While Texas A&M has improved so much over the years, this stealthy group called the Rudder Association is now embarrassing the school. I was glad to read the article and reassured that the kids are alright. I couldn’t help but be reminded of the letters written t

On Monday, there were a few people in my Twitter feed sharing Texas A&M’s Battalion article about The Rudder Association. While Texas A&M has improved so much over the years, this stealthy group called the Rudder Association is now embarrassing the school. I was glad to read the article and reassured that the kids are alright. I couldn’t help but be reminded of the letters written to the Battalion in 1935 by a freshman named C. Wright Mills.

College students are supposed to become leaders of thought and action in later life. It is expected they will profit from a college education by developing an open and alert mind to be able to cope boldly with everyday problems in economics and politics. They cannot do this unless they learn to think independently for themselves and to stand fast for their convictions. Is the student at A and M encouraged to do this? Is he permitted to do it? The answer is sadly in the negative.

Little did he know that current students would be dealing with this shit 85 years later with a group of former students with nothing better to do than infiltrate student-run organizations from freshman orientation to the newspaper. But shocking no one, they were too incompetent to maintain the privacy of the school regents who met with them.

According to meeting minutes from Dec. 1, 2020, the Rudder Association secured the attendance of four members of the A&M System Board of Regents. The meeting minutes obtained by The Battalion were censored by TRA to remove the names of the regents in the meeting as well as other “highly sensitive information.”

“DO NOT USE THEIR NAMES BEYOND THE RUDDER BOARD. They do not wish to be outed,” the minutes read on the regents in attendance.

Further examination by The Battalion revealed, however, that the censored text could be copied and pasted into a text document to be viewed in its entirety due to TRA using a digital black highlighter to censor.

Well done, Battalion.

(photo is from C. Wright Mills: Letters and autobiographical writings)

Monday, 07. March 2022

reb00ted

Finding and sharing truth and lies: which of these 4 types are you?

Consider this diagram: Trying to find: truth Crook Scientist lies Demagogue Debunker Sharing: lies truth If I try to find the truth, but lie about what I’m telling others, I’m a crook. If I try to find lies that “work” and tell them to others, I’m a demagogue. If I try to find lies to expose them and share the truth, I’m a debunker of lies. And if

Consider this diagram:

                          Sharing: lies        Sharing: truth
Trying to find: truth     Crook                Scientist
Trying to find: lies      Demagogue            Debunker

If I try to find the truth, but lie about what I’m telling others, I’m a crook.

If I try to find lies that “work” and tell them to others, I’m a demagogue.

If I try to find lies to expose them and share the truth, I’m a debunker of lies.

And if I try to find the truth, and share it, that makes me a scientist.

If so, we can now describe each one of those categories in more detail, and understand the specific behaviors they necessarily need to engage in.

For example, the scientist will welcome and produce copious objective evidence. The demagogue, likely, will provide far less evidence, and if so, point to other people and their statements as their evidence. Those other people are likely either other demagogues or just crooks.

If we could annotate people on-line with these four categories, we could even run a PageRank-style algorithm on it to figure out which is which. Why aren’t we doing this? Might this interfere with attention as the primary driver of revenue for “free” on-line services?
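As a back-of-the-envelope illustration of that idea only (the graph, the labels, and the scoring are entirely made up, not a real product or dataset), a PageRank-style score over a "who cites whom as evidence" graph could be computed like this:

# Toy graph: an edge A -> B means A points to B as evidence.
cites = {
    "scientist_1": ["scientist_2", "debunker_1"],
    "scientist_2": ["scientist_1"],
    "demagogue_1": ["demagogue_2", "crook_1"],
    "demagogue_2": ["demagogue_1"],
    "crook_1": [],
    "debunker_1": ["scientist_1"],
}

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outgoing in graph.items():
            targets = outgoing or nodes          # dangling nodes spread their weight evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

for name, score in sorted(pagerank(cites).items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {score:.3f}")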

P.S. Sorry for the click bait headline. It just lent itself so very well…

Sunday, 06. March 2022

Doc Searls Weblog

The frog of war

“Compared to war, all other forms of human endeavor shrink to insignificance. God help me, I do love it so.” — George S. Patton (in the above shot played by George C. Scott in his greatest role.) Is the world going to croak? Put in geological terms, will the Phanerozoic eon, which began with the […]

“Compared to war, all other forms of human endeavor shrink to insignificance. God help me, I do love it so.” — George S. Patton (in the above shot played by George C. Scott in his greatest role.)

Is the world going to croak?

Put in geological terms, will the Phanerozoic eon, which began with the Cambrian explosion a half billion years ago, end at the close of the Anthropocene epoch, when the human species, which has permanently put its mark on the Earth, commits suicide with nuclear weapons? This became a lot more plausible as soon as Putin rattled his nuclear saber.

Well, life will survive, even if humans do not. And that will happen whether or not the globe warms as much as the IPCC assures us it will. If temperatures in the climate of our current interglacial interval peak with both poles free of ice, the Mississippi river will meet the Atlantic at what used to be St. Louis. Yet life will abound, as life does, at least until the Sun gets so large and hot that photosynthesis stops and the phanerozoic finally ends. That time is about a half-billion years away. That might seem like a long time, but given the age of the Earth itself—about 4.5 billion years—life here is much closer to the end than the beginning.

Now let’s go back to human time.

I’ve been on the planet for almost 75 years, which in the grand scheme is a short ride. But it’s enough to have experienced history being bent some number of times. So far I count six.

First was on November 22, 1963, when John F. Kennedy was assassinated. This was when The Fifties actually ended and The Sixties began. (My great aunt Eva Quakenbush, née Searls or Searles—it was spelled both ways—told us what it was like when Lincoln was shot and she was 12 years old. “It changed everything,” she said. So did the JFK assassination.)

The second was the one-two punch of the Martin Luther King and Bobby Kennedy assassinations, on April 4 and June 6, 1968. The former was a massive setback for both the civil rights movement and nonviolence. And neither has fully recovered. The latter assured the election of Richard Nixon and another six years of the Vietnam war.

The third was the Internet, which began to take off in the mid-1990s. I date the steep start of hockey stick curve to April 30, 1995, when the last backbone within the Internet that had forbidden commercial traffic (NSFnet) shut down, uncorking a tide of e-commerce that is still rising.

The fourth was 9/11, in 2001. That suckered the U.S. into wars in Afghanistan and Iraq, and repositioned the country from the world’s leading peacekeeper to the world’s leading war-maker—at least until Russia stepped up.

The fifth was the Covid pandemic, which hit the world in early 2020 and is still with us, causing all sorts of changes, from crashes in supply chains to inflation to complete new ways for people to work, travel, vote, and think.

Sixth is the 2022 Russian invasion of Ukraine, which began on February 24, 2022, just eleven days ago as I write this.

Big a thing as this last bend is—and it’s huge—there are too many ways to make sense of it all:

- The global struggle between democracy and autocracy
- The real End of History
- At last, EU gets it together
- Putin the warlord
- The man is nuts
- Zelensky the hero
- Russia about to collapse, maybe WWIII
- Ukraine will win
- Hard to beat propaganda
- Putin’s turning Russia into 
- It’ll get worse before it ends badly while we all do more than nothing but not enough
- Whatever it is, social media is reporting it all
- World War Wired
- Russia does not have an out
- The dawn of uncivilization

I didn’t list the threat of thermonuclear annihilation among the six big changes in history I’ve experienced because I was raised with it. Several times a year we would “duck and cover” under our desks when the school would set off air raid sirens. Less frequent than fire drills, these were far more scary, because we all knew we were toast, being just five miles by air from Manhattan, which was surely in the programmed crosshairs on one or more Soviet nukes.

Back then I put so little faith in adult wisdom, and its collective expression in government choices, that I had a bucket list of places I’d like to see before nuclear blasts or fallout doomed us all. My top two destinations were the Grand Canyon and California: exotic places for a kid whose farthest family venturings from New Jersey were to see relatives in North Carolina and North Dakota. (Of no importance but of possible interest is that I’ve now been a citizen of California for 37 years, married to an Angelino for 32 of those, and it still seems exotic to me. Mountains next to cities and beaches? A tradition of wildfires and earthquakes? Whoa.)

What’s around the corner we turned two Thursdays ago? Hard to tell, in spite of all that’s being said by Wise Ones in the links above. One thing I do know for sure: people have changed, because more and more of them are digital now, connected to anybody and anything at any distance, and able to talk, produce “content” and do business—and to look and think past national and territorial boundaries. We make our tools and then our tools make us, McLuhan taught. Also, all media work us over completely. We have been remade into digital beings by our wires, waves, and phones. This raises optionalities in too many ways to list.

I’m an optimist by nature, and since the ’90s have been correctly labeled a cyber-utopian. (Is there anything more utopian than The Cluetrain Manifesto?) To me, the tiny light at the end of Ukraine’s tunnel is a provisional belief that bad states—especially ones led by lying bastards who think nothing of wasting thousands or millions of innocent lives just to build an empire—can’t win World War Wired. Unless, that is, the worst of those bastards launches the first nuke and we all go “gribbit.”

Our challenge as a species, after we stop Russia’s land grab from becoming a true world war, is to understand fully how we can live and work in the Wired World as digital as well as physical beings.


Mike Jones: self-issued

Two new COSE- and JOSE-related Internet Drafts with Tobias Looker


This week, Tobias Looker and I submitted two individual Internet Drafts for consideration by the COSE working group.

The first is “Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE“, the abstract of which is:


This specification defines how to represent cryptographic keys for the pairing-friendly elliptic curves known as Barreto-Lynn-Scott (BLS), for use with the key representation formats of JSON Web Key (JWK) and COSE (COSE_Key).

These curves are used in Zero-Knowledge Proof (ZKP) representations for JOSE and COSE, where the ZKPs use the CFRG drafts “Pairing-Friendly Curves” and “BLS Signatures“.
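To make the key-representation idea a little more concrete, here is a minimal sketch of what a JWK for a BLS12-381 public key might look like. The kty and crv values below are illustrative assumptions only; the normative names and members are the ones defined in the draft itself.

```python
# Hypothetical sketch of a BLS public key as a JWK. The "kty", "crv",
# and coordinate members shown are assumptions for illustration; the
# normative registrations are defined by the draft.
import json

bls_public_jwk = {
    "kty": "OKP",            # assumed key type family
    "crv": "Bls12381G2",     # assumed curve name for the G2 group
    "x": "<base64url-encoded compressed public key goes here>",
}

print(json.dumps(bls_public_jwk, indent=2))
```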

The second is “CBOR Web Token (CWT) Claims in COSE Headers“, the abstract of which is:


This document describes how to include CBOR Web Token (CWT) claims in the header parameters of any COSE structure. This functionality helps to facilitate applications that wish to make use of CBOR Web Token (CWT) claims in encrypted COSE structures and/or COSE structures featuring detached signatures, while having some of those claims be available before decryption and/or without inspecting the detached payload.

JWTs define a mechanism for replicating claims as header parameter values, but CWTs have been missing the equivalent capability to date. The use case is the same as that which motivated Section 5.3 of JWT “Replicating Claims as Header Parameters” – encrypted CWTs for which you’d like to have unencrypted instances of particular claims to determine how to process the CWT prior to decrypting it.
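To illustrate the idea (this is a rough sketch, not the draft’s normative encoding), the fragment below builds a COSE_Sign1-shaped structure whose protected header replicates two CWT claims, so they remain readable even when the payload is detached or encrypted. The header label used for the claims map is a placeholder assumption; the real label is assigned by the draft and IANA, and the signature bytes are fake.

```python
# Rough sketch only: replicate CWT claims in a COSE protected header so
# they are readable without the (detached or encrypted) payload.
# HDR_CWT_CLAIMS is a placeholder label, not the registered one.
import cbor2

CWT_ISS, CWT_EXP = 1, 4          # standard CWT claim keys (RFC 8392)
HDR_ALG = 1                      # standard COSE header label for "alg"
HDR_CWT_CLAIMS = 99              # placeholder label for the CWT claims map

replicated_claims = {CWT_ISS: "https://issuer.example", CWT_EXP: 1700000000}
protected = cbor2.dumps({HDR_ALG: -7, HDR_CWT_CLAIMS: replicated_claims})  # -7 = ES256

cose_sign1 = [
    protected,        # protected header bytes (integrity-covered)
    {},               # unprotected header
    None,             # detached payload: the claims above are still available
    b"\x00" * 64,     # dummy signature bytes, for shape only
]

print(cbor2.dumps(cose_sign1).hex())
```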

We plan to discuss both with the COSE working group at IETF 113 in Vienna.


Kyle Den Hartog

Convergent Wisdom

Convergent Wisdom is utilizing the knowledge gained from studying multiple solutions that approach a similar outcome in different ways in order to choose the appropriate solution for the problem at hand.

I was recently watching an MIT OpenCourseWare video on YouTube titled “Introduction to ‘The Society of Mind’”, which is a series of lectures (or, as the author refers to them, “seminars”) by Marvin Minsky. While watching the first episode of this course, the professor puts forth an interesting theory about what grants humans the capability to handle a variety of problems while machines remain limited in their capacity to generically compute solutions to problems. In this theory he alludes to the concept that humans’ “resourcefulness” is what grants us this capability, which, to paraphrase, is the ability for humans to leverage a variety of different paths to identify a variety of solutions to the same problem. All of these can be used in a variety of different situations in order to develop a solution to the generic problem at hand. While he was describing this theory he made an offhand comment about the choice of the word “resourcefulness”, positing whether there was a shorter word to describe the concept.

This got me thinking about the linguistic precision needed to describe the concept, and I came across a very fulfilling suggestion on Stack Exchange to do just that. They suggested the word “equifinality”, which is incredibly precise, but also a bit of a pompous choice for a general audience (albeit great for the audience he was addressing). The second suggestion sent me down a tangent of thought that I find very enticing, though. “Convergent” is a word that’s commonly used to describe this in common tongue today and, more importantly, can be paired with wisdom to describe a new concept. I’m choosing to define the concept of “convergent wisdom” as utilizing the knowledge gained from studying multiple solutions that approach the same outcome in different ways in order to choose the appropriate solution for the problem at hand.

What’s interesting about the concept of convergent wisdom is that it suitably describes the feedback loop that humans exploit in order to gain the capability of generalizable problem solving. For example, in chemical synthesis the ability to understand the pathway for creating an exotic compound is nearly as important as the compound itself, because it can affect the feasibility of mass production of the compound. Similarly, in manufacturing there are numerous instances of giant discoveries occurring (battery technology is the one that comes to mind first) which then fall short when it comes time to manufacture the product. In both of these instances the ability to understand the chosen path is nearly as important as the solution itself.

So why does this matter, and why define the concept? This concept seems incredibly important to the ability to build generically intelligent machines. Today, much of the focus of the artificial intelligence field seems to be primarily on the outcome, while treating the process as a hidden and unimportant afterthought, at least up until the point at which the algorithm starts to produce ethically dubious outcomes as well.

Through the study of not only the inputs and outputs, but also the pathway by which the outcome is achieved, I believe the same feedback loop may be able to be formed to produce generalizable computing in machines. Unfortunately, I’m no expert in this space and have tons of reading to do on the topic. So now that I’ve been able to describe and define the topic, can anyone point me to the area of study or academic literature which focuses on this aspect of AI?

Saturday, 05. March 2022

Just a Theory

How Goodreads Deleted My Account

Someone stole my Goodreads account; the company failed to recover it, then deleted it. It was all too preventable.

At 12:31pm on February 2, I got an email from Goodreads:

Hi David,

This is a notice to let you know that the password for your account has been changed.

If you did not recently reset or change your password, it is possible that your account has been compromised. If you have any questions about this, please reach out to us using our Contact Us form. Alternatively, visit Goodreads Help.

Since I had not changed my password, I immediately hit the “Goodreads Help” link (not the one in the email, mind you) and reported the issue. At 2:40pm I wrote:

I got an email saying my password had been changed. I did not change my password. I went to the site and tried to log in, but the login failed. I tried to reset my password, but got an email saying my email is not in the system.

So someone has compromised the account. Please help me recover it.

I also tried to log in, but failed. I tried the app on my phone, and had been logged out there, too.

The following day at 11:53am, Goodreads replied asking me for a link to my account. I had no idea what the link to my account was, and since I assumed that all my information had been changed by the attackers, I didn’t think to search for it.

Three minutes later, at 11:56, I replied:

No, I always just used the domain and logged in, or the iOS app. I’ve attached the last update email I got around 12:30 EST yesterday, in case that helps. I’ve also attached the email telling me my password had been changed around 2:30 yesterday. That was when I became aware of the fact that the account was taken over.

A day and a half later, at 5:46pm on the 4th, Goodreads support replied to say that they needed the URL in order to find it and investigate, and asked if I remembered the name on the account. This seemed odd to me, since until at least February 2nd it was associated with my name and email address.

I replied 3 minutes later at 5:49:

The name is mine. The username maybe? I’m usually “theory”, “itheory”, or “justatheory”, though if I set up a username for Goodreads it was ages ago and never really came up. Where could I find an account link?

Over the weekend I can log into Amazon and Facebook and see if I see any old integration messages.

The following day was Saturday the fifth. I logged into Facebook to see what I could find. I had deleted the link to Goodreads in 2018 (when I also ceased to use Facebook), but there was still a record of it, so I sent the link ID Facebook had. I also pointed out that my email address had been associated with the account for many years until it was changed on Feb 2. Couldn’t they find it in the history for the account?

I still didn’t know the link to my account, but forwarded the marketing redirect links that had been in the password change email, as well as an earlier email with a status on my reading activity.

After I sent the email, I realized I could ask some friends who I knew followed me on Goodreads to see if they could dig up the link. Within a few minutes my pal Travis had sent it to me, https://www.goodreads.com/user/show/7346356-david-wheeler. I was surprised, when I opened it, to see all my information there as I’d left it, no changes. I still could not log in, however. I immediately sent the link to Goodreads support (at 12:41pm).

That was the fifth. I did not hear back again until February 9th, when I was asked if I could provide some information about the account so they could confirm it was me. The message asked for:

Any connected apps or devices
Pending friend requests to your account
Any accounts linked to your Goodreads account (Goodreads accounts can be linked to Amazon, Apple, Google, and/or Facebook accounts)
The name of any private/secret groups of which you are a part
Any other account-specific information you can recall

Since I of course had no access to the account, I replied 30 minutes later with what information I could recall from memory: my devices, Amazon Kindle connection (Kindle would sometimes update my reading progress, though not always), membership in some groups that may or may not have been public, and the last couple books I’d updated.

Presumably, most of that information was public, and the devices may have been changed by the hackers. I heard nothing back. I sent followup inquiries on February 12th and 16th but got no replies.

On February 23rd I complained on Twitter. Four minutes later @goodreads replied, and I started to hope there might be some progress again. They asked me to get in touch with Support again, which I did at 10:59am, sending all the previous information and context I could.

Then, at 12:38am, this bombshell arrived in my inbox from Goodreads support:

Thanks for your your patience while we looked into this. I have found that your account was deleted due to suspected suspicious activity. Unfortunately, once an account has been deleted, all of the account data is permanently removed from our database to comply with the data regulations which means that we are unable to retrieve your account or the related data. I know that’s not the news you wanted and I am sincerely sorry for the inconvenience. Please let me know if there’s anything else I can assist you with.

I was stunned. I mean of course there was suspicious activity, the account was taken over 19 days previously! As of the 5th when I found the link it still existed, and I had been in touch a number of times previously. Goodreads knew that the account had been reported stolen and still deleted it?

And no chance of recovery due to compliance rules? I don’t live in the EU, and even if I was subject to the GDPR or CCPA, there is no provision to delete my data unless I request it.

WTAF.

So to summarize:

Someone took control of my account on February 2
I reported it within hours
On February 5 my account was still on Goodreads
We exchanged a number of messages
By February 23 the account was deleted with no chance of recovery due to suspicious activity

Because of course there was suspicious activity. I told them there was an issue!

How did this happen? What was the security configuration for my account?

I created an entry for Goodreads in 1Password on January 5, 2012. The account may have been older than that, but for at least 10 years I’ve had it, and used it semi-regularly.
The password was 16 random ASCII characters generated by 1Password on October 27, 2018 (see the sketch after this list).
I create unique random passwords for all of my accounts, so it would not be found in a breached database (and I have updated all breached accounts 1Password has identified).
The account had no additional factors of authentication or fallbacks to something like SMS, because Goodreads does not offer them. There was only my email address and password.
On February 2nd someone changed my password. I had clicked no links in emails, so phishing is unlikely. Was Goodreads support social-engineered to let someone else change the password? How did this happen?
I exchanged multiple messages with Goodreads support between February 2 and 23rd, to no avail.
By February 23rd, my account was gone with all my reviews and reading lists.
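As promised in the list above, here is a quick sketch of the kind of password involved. This is not what 1Password does internally, just the same idea in a few lines, and it shows why such a password would never appear in a breached-password list.

```python
# Sketch of generating a unique 16-character random ASCII password,
# the same idea a password manager automates (not 1Password's actual code).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # different on every run
```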

Unlike Nelson, whose account was also recently deleted without chance of recovery, I had not been making any backups of my data. It never occurred to me, perhaps because I never put a ton of effort into my Goodreads account; I mostly just tracked reading and wrote a few brief reviews. I’ll miss my reading list the most. I’ll have to start a new one on my own machines.

Through all this, Goodreads support was polite but not particularly responsive. Days and then weeks went by without response. The company deleted the account for suspicious activity and claims there is no path to recovery for the original owner. Clearly the company doesn’t give its support people the tools they need to adequately support cases such as this.

I can think of a number of ways in which these situations can be better handled and even avoided. In fact, given my current job designing identity systems I’m going to put a lot of thought into it.

But sadly I’ll be trusting third parties less with my data in the future. Redundancy and backups are key, but so is adequate account protection. Letterboxd, for example, has no multifactor authentication features, making it vulnerable should someone decide it’s worthwhile to steal accounts to spam reviews or try to artificially pump up the scores for certain titles. I just made a backup.

You should, too: back up your Goodreads account regularly. Meanwhile, I’m on the lookout for a new social reading site that supports multifactor authentication. But even with that, in the future I’ll post reviews here on Just a Theory and just reference them, at best, from social sites.

Update April 3, 2022: This past week, I finally got some positive news from Goodreads, two months after this saga began:

The Goodreads team would like to apologize for your recent poor experience with your account. We sincerely value your contribution to the Goodreads community and understand how important your data is to you. We have investigated this issue and attached is a complete file of your reviews, ratings, and shelvings.

And that’s it, along with some instructions for creating a new account and loading the data. Still no account recovery, so my old URL is dead and there is no information about my Goodreads friends. Still, I’m happy to at least have my lists and reviews recovered. I imported them into a new Goodreads account, then exported them again and imported them into my new StoryGraph profile.


Thursday, 03. March 2022

Mike Jones: self-issued

Minor Updates to OAuth DPoP Prior to IETF 113 in Vienna


The editors have applied some minor updates to the OAuth DPoP specification in preparation for discussion at IETF 113 in Vienna. Updates made were:

Renamed the always_uses_dpop client registration metadata parameter to dpop_bound_access_tokens.
Clarified the relationships between server-provided nonce values, authorization servers, resource servers, and clients.
Improved other descriptive wording.

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-06.html

Wednesday, 02. March 2022

Here's Tom with the Weather

Good Paper on Brid.gy


I read Bridging the Open Web and APIs: Alternative Social Media Alongside the Corporate Web because it was a good opportunity to fill some holes in my knowledge about the Indieweb and Facebook.

Brid.gy enables people to syndicate their posts from their own site to large proprietary social media sites.

Although I don’t use it myself, I’m often impressed when I see all the Twitter “likes” and responses that are backfed by brid.gy to the canonical post on a personal website.
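For anyone curious about the plumbing: that backfeed arrives as Webmentions, which are just an HTTP POST of source and target form parameters to the personal site’s Webmention endpoint. Here is a minimal sending sketch, assuming the endpoint URL is already known (real senders discover it from a Link header or link tag), with placeholder URLs throughout.

```python
# Minimal sketch of sending a Webmention. The endpoint and URLs are
# placeholders; real senders discover the endpoint via Link headers
# or a <link rel="webmention"> tag on the target page.
import requests

endpoint = "https://example.com/webmention"               # assumed endpoint
payload = {
    "source": "https://brid.gy/comment/twitter/example",  # page describing the response
    "target": "https://example.com/2022/canonical-post",  # the post being responded to
}

resp = requests.post(endpoint, data=payload)
print(resp.status_code)  # 201 or 202 typically means the mention was accepted
```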

The paper details the challenging history of providing the same for Facebook (in which even Cambridge Analytica plays a part) and helped me appreciate why I never see similar responses from Facebook on personal websites these days.

It ends on a positive note…

while Facebook’s API shutdown led to an overnight decrease in Bridgy accounts (Barrett, 2020), other platforms with which Bridgy supports POSSE remain functional and new platforms have been added, including Meetup, Reddit, and Mastodon.

Monday, 28. February 2022

Randall Degges

Journaling: The Best Habit I Picked Up in 2021


2021 was a challenging year in many ways. Other than the global pandemic, many things changed in my life (some good, some bad), and it was a somewhat stressful year.

In March of 2021, I almost died due to a gastrointestinal bleed (a freak accident caused by a routine procedure). Luckily, I survived the incident due to my amazing wife calling 911 at the right time and the fantastic paramedics and doctors at my local hospital, but it was a terrifying ordeal.

While I was in recovery, I spent a lot of time thinking about what I wanted to do when feeling better. How I wanted to spend the limited time I have left. There are lots of things I want to spend my time doing: working on meaningful projects, having fun experiences with family and friends, going on camping and hiking trips, writing, etc.

The process of thinking through everything I wanted to do was, in and of itself, incredibly cathartic. The more time I spent reflecting on my thoughts and life, the better I felt. There’s something magical about taking dedicated time out of your day to write about your thoughts and consider the big questions seriously.

Without thinking much about it, I found myself journaling every day.

It’s been just about a year since I first started journaling, and since then, I’ve written almost every day with few exceptions. In this time, journaling has made a tremendous impact on my life, mood, and relationships. Journaling has quickly become the most impactful of all the habits I’ve developed over the years.

Benefits of Journaling

There are numerous reasons to journal, but these are the primary benefits I’ve personally noticed after a year of journaling.

Journaling helps clear your mind.

I have a noisy inner monologue, and throughout the day, I’m constantly being interrupted by ideas, questions, and concerns. When I take a few minutes each day to write these thoughts down and think through them, it puts my brain at ease and allows me to relax and get them off my mind.

Journaling helps put things in perspective.

I’ve often found myself upset or frustrated about something, only to realize later in the day, while writing, how insignificant the problem is. The practice of writing things down brings a certain level of rationality to your thoughts that isn’t always immediately apparent.

I often discover that even the “big” problems in my life have obvious solutions I would never have noticed had I not journaled about them.

Journaling preserves memories.

My memory is terrible. If you asked me what I did last month, I’d have absolutely no idea.

Before starting a journal, the only way I could reflect on memories was to look through photos. The only problem with this is that often, while I can remember bits and pieces of what was going on at the time, I can’t remember everything.

As I’m writing my daily journal entry, I’ll include any relevant photos and jot down some context around them – I’ve found that by looking back through these entries with both pictures and stories, it allows me to recall everything.

And… As vain as it is, I hope that someday I’ll be able to pass these journals along to family members so that, if they’re interested, they can get an idea of what sort of person I was, what I did, and the types of things I thought about.

Journaling helps keep your goals on track.

It’s really easy to set a personal goal and forget about it – I’ve done it hundreds of times. But, by writing every day, I’ve found myself sticking to my goals more than ever.

I think this boils down to focus. It would be hard for me to journal every day without writing about my goals and how I’m doing, and that little bit of extra focus and attention goes a long way towards helping me keep myself honest.

It’s fun!

When I started journaling last year, I didn’t intend to do it every day. It just sort of happened.

Each day I found myself wanting to write down some thought or idea, and the more I did it, the more I enjoyed it. Over time, I noticed that I found myself missing it on the few occasions I didn’t journal.

Now, a year in, I look forward to writing a small journal entry every day. It’s part of my wind-down routine at night, and I love it.

Keeping a Digital and Physical Journal

Initially, when I started keeping a journal, I had a few simple goals:

I wanted to be able to quickly write (and ideally include photos) in my journal
I wanted it to be easy to write on any device (phone, laptop, iPad, etc.)
I wanted some way to physically print my journal each year so that I could have a physical book to look back at any time I want – as well as to preserve the memories as digital stuff tends to disappear eventually

With these requirements in mind, I did a lot of research, looking for a suitable solution. I looked at various journaling services and simple alternatives (physical journals, Google Docs, Apple Notes, etc.).

In the end, I decided to start using the Day One Mac app (works on all Apple devices). I cannot recommend it highly enough if you’re an Apple user.

NOTE: I have no affiliation whatsoever with the Day One app. But it’s incredible.

The Day One app looks beautiful, syncs your journals privately using iCloud, lets you embed photos (and metadata) into entries in a stylish and simple way, makes it incredibly easy to have multiple journals (by topic), track down any entries you’ve previously created, and a whole lot more.

For me, the ultimate feature is the ability to easily create a beautiful looking physical journal whenever I want. Here’s a picture of my journal from 2021.

It’s a bound book with high-quality photos, layouts, etc. It looks astounding. You can customize the book’s cover, include select entries, and make a ton of other customizations I won’t expand on here.

So, my recommendation is that if you’re going to start a journal and want to print it out eventually, use the Day One app – it’s been absolutely 10⁄10 incredible.

Wednesday, 23. February 2022

MyDigitalFootprint

Ethics, maturity and incentives: plotting on Peak Paradox.

Ethics, maturity and incentives may not appear obvious or natural bedfellows.  However, if someone else’s incentives drive you, you are likely on a journey from immaturity to Peak Paradox.  A road from Peak Paradox towards a purpose looks like maturity as your own incentives drive you. Of note, ethics change depending on the direction of travel.  

----

In psychology, maturity can be operationally defined as the level of psychological functioning one can attain, after which the level of psychological functioning no longer increases with age.  Maturity is the state, fact, or period of being mature.

Whilst “immature” means not fully developed, or having an emotional or intellectual development appropriate to someone younger, I want to use immaturity simply as the state of not yet being fully mature.

Incentives are a thing that motivates or encourages someone to do something.

Peak Paradox is where you try to optimise for everything but cannot achieve anything as you do not know what drives you and are constantly conflicted. 

Ethics is a branch of philosophy that "involves systematising, defending, and recommending concepts of right and wrong behaviour".  Ethical and moral principles govern a person's behaviour.


The Peak Paradox framework is below.

 


When we start our life journey, we travel from being immature to mature.  Depending on your context, e.g. location, economics, social, political and legal, you will naturally be associated with one of the four Peak Purposes. It might not be extreme, but you will be framed towards one of them (bias).  This is the context you are in before determining your own purpose, mission or vision.  Being born in a place with little food and water, there is a natural affinity to survival.  Being born in a community that values everyone and everything, you will naturally align to a big society.  Born to the family of a senior leader in a global industry, you will be framed to a particular model.  Being a child of Murdoch, Musk, Zuckerberg, Trump, Putin, Rothschild, Gates,  etc. - requires assimilation to a set of beliefs. 

Children of celebrities break from what their parents think, as have we and as do our children. Politics and religious chats with teenagers are always enlightening. As children, we travel from the contextual purpose we are born into and typically head towards reaching Peak Paradox - on this journey, we are immature. (Note: it is possible to go the other way and become more extreme in purpose than your parents.) Later in life, and with the benefits of maturity, we observe this journey from the simplicity of binary choices (black and white ethics) towards a more nuanced mess at Peak Paradox. At Peak Paradox, we sense a struggle to make sense of the different optimisations, drivers, purposes, incentives and rewards that others have. This creates anxiety, tension and conflict within us. During this journey from a given purpose to Peak Paradox, the incentives given to you are designed to maintain the original purpose, to keep you following that ideal. Incentives frame us and keep us in a model which is sometimes hard to break.

It is only when we live with complexity and are able to appreciate others' purposes, optimisations, and drivers that we will also gain clarity on our own passion, purpose or mission. By living with Peak Paradox, we change from being driven by others' incentives to uncovering our own affinity; this is where we start to align to what we naturally believe in and find what fits our skin. 

I have written before that we have to move from Peak Paradox towards a purpose if we want to have clarity of purpose and achieve something.

Enter Ethics

Suppose maturity is the transition from our actions being determined by others and following their ethical or moral code to determining what type of world or society we want to be part of. In that case, we need to think about the two journeys.  

On the route from birth to Peak Paradox, I have framed this as a route from immaturity to living with complexity. On the route in, we live by others' moral and ethical codes and are driven by their incentives. As we leave Peak Paradox and head to a place where we find less tension, conflict and anxiety, one with a natural affinity to what we believe, we create our own moral and ethical compass/code and become driven by our own motivations.

We should take a fresh perspective on ethics and first determine which direction someone is heading before we make a judgement.  This is increasingly important in #cyberethics and #digitalethics as we only see the point and have no ability to create a bearing or direction.

The purpose of Peak Paradox

As we head towards being mature, we move in and out of living at Peak Paradox from different “Purposes”; I am sure this is an iterative process. The purpose of Peak Paradox is to be in a place where you are comfortable with complexity, but it is not a place to stay. It is like a holiday home: good to go there now and again, but it does not represent life. The question we have is how do we know when we are at Peak Paradox, probably because our north star has become a black hole! The key message here is that some never escape the Peak Paradox black hole, finding they live in turmoil, driven by others' incentives (that are designed to keep you there) and never finding their own passion, incentive or motivation. The complexity death vortex is where you endlessly search for a never-reachable explanation of everything, as it is all too interconnected and interdependent to unravel. Leaders come out from Peak Paradox knowing why they have a purpose and a direction.

The Journey

Imagine you are born into a celebrity household. Over time you see the world is more complicated; you work for a company believing that money, bonuses and incentives matter. Over time you come to understand the tensions such a narrow view brings. You search for something better, committing to living a simpler life, changing your ethics and moral code. This still creates tension, and you search for peace and harmony, where you find less tension and more alignment; when you arrive there, you have a unique code because you live with complexity and understand different optimisations. It does not scale, and few can grasp your message.


Where are you on the journey?  




Monday, 21. February 2022

Mike Jones: self-issued

Four Months of Refinements to OAuth DPoP


A new draft of the OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP) specification has been published that addresses four months’ worth of great review comments from the working group. Refinements made were:

Added Authorization Code binding via the dpop_jkt parameter.
Described the authorization code reuse attack and how dpop_jkt mitigates it.
Enhanced description of DPoP proof expiration checking.
Described nonce storage requirements and how nonce mismatches and missing nonces are self-correcting.
Specified the use of the use_dpop_nonce error for missing and mismatched nonce values.
Specified that authorization servers use 400 (Bad Request) errors to supply nonces and resource servers use 401 (Unauthorized) errors to do so.
Added a bit more about ath and pre-generated proofs to the security considerations.
Mentioned confirming the DPoP binding of the access token in the list in (#checking).
Added the always_uses_dpop client registration metadata parameter.
Described the relationship between DPoP and Pushed Authorization Requests (PAR).
Updated references for drafts that are now RFCs.

I believe this brings us much closer to a final version.

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-05.html
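For readers who haven’t followed the draft, here is a rough sketch of what a client-side DPoP proof and the dpop_jkt value mentioned above might look like. It reflects my reading of the draft (a JWT with typ "dpop+jwt" and jti/htm/htu/iat claims, with dpop_jkt being the base64url SHA-256 JWK thumbprint of the proof key); treat the details as assumptions rather than a faithful rendering of the -05 text, and note that the token endpoint URL is a placeholder.

```python
# Hedged sketch of a DPoP proof JWT plus the dpop_jkt value, based on my
# reading of the draft; claim and parameter details may differ from -05.
import base64, hashlib, json, secrets, time

import jwt  # PyJWT (with the 'cryptography' package installed)
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Per-client key pair used only for signing DPoP proofs.
key = ec.generate_private_key(ec.SECP256R1())
pub = key.public_key().public_numbers()
public_jwk = {
    "kty": "EC",
    "crv": "P-256",
    "x": b64url(pub.x.to_bytes(32, "big")),
    "y": b64url(pub.y.to_bytes(32, "big")),
}

# dpop_jkt: RFC 7638 SHA-256 thumbprint of the public key, sent with the
# authorization request to bind the authorization code to this key.
canonical = json.dumps(
    {k: public_jwk[k] for k in ("crv", "kty", "x", "y")},
    separators=(",", ":"), sort_keys=True,
).encode()
dpop_jkt = b64url(hashlib.sha256(canonical).digest())

# The DPoP proof itself, sent in the DPoP header of the token request.
proof = jwt.encode(
    {
        "jti": secrets.token_urlsafe(16),              # unique proof identifier
        "htm": "POST",                                 # HTTP method of the request
        "htu": "https://server.example.com/token",     # placeholder target URI
        "iat": int(time.time()),
    },
    key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ),
    algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk},
)

print("dpop_jkt:", dpop_jkt)
print("DPoP proof:", proof)
```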

Sunday, 20. February 2022

Here's Tom with the Weather

Friday, 18. February 2022

Identity Woman

Event Series: Making the Augmented Social Network Vision a Reality


This series began in November with Logging Off Facebook: What Comes Next? The 2nd event will be March 4th online Both events are going to be Open Space Technology for three sessions. We will co-create the agenda the opening hour. The 3rd Event will be April 1 online. Building on the previous one we will […]

The post Event Series: Making the Augmented Social Network Vision a Reality appeared first on Identity Woman.

Wednesday, 16. February 2022

Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing Working Group Last Call Comments


Kristina Yasuda and I have published an updated JWK Thumbprint URI draft that addresses the OAuth Working Group Last Call (WGLC) comments received. Changes made were:

Added security considerations about multiple public keys corresponding to the same private key.
Added hash algorithm identifier after the JWK thumbprint URI prefix to make it explicit in a URI which hash algorithm is used.
Added reference to a registry for hash algorithm identifiers.
Added SHA-256 as a mandatory to implement hash algorithm to promote interoperability.
Acknowledged WGLC reviewers.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-01.html
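To show what the hash-algorithm-in-the-URI change looks like in practice, here is a small sketch that computes an RFC 7638 thumbprint for an EC JWK and wraps it in a thumbprint URI. The URN prefix is my reading of the draft and should be treated as an assumption; the key coordinates are placeholders, not a real key.

```python
# Sketch of an RFC 7638 JWK thumbprint wrapped in a thumbprint URI with an
# explicit hash algorithm segment. The URN prefix is an assumption based on
# the draft; the coordinates below are placeholders, not a real key.
import base64, hashlib, json

jwk = {"kty": "EC", "crv": "P-256", "x": "placeholder-x", "y": "placeholder-y"}

# RFC 7638: required members only, lexicographic order, no whitespace.
canonical = json.dumps(
    {k: jwk[k] for k in ("crv", "kty", "x", "y")},
    separators=(",", ":"), sort_keys=True,
).encode()
thumbprint = base64.urlsafe_b64encode(hashlib.sha256(canonical).digest()).rstrip(b"=").decode()

uri = f"urn:ietf:params:oauth:jwk-thumbprint:sha-256:{thumbprint}"
print(uri)
```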

Tuesday, 15. February 2022

MyDigitalFootprint

How do you recognise when your north star has become a black hole?

This post is about being lost — without realising it.

source: https://earthsky.org/space/x9-47-tucanae-closest-star-to-black-hole/

I have my NorthStar, and I am heading for it, but somehow the gravitational pull of a black hole we did not know existed got me without realising it! I am writing about becoming lost on a journey as I emerge from working from home, travel restrictions, lockdowns and masks; to find nothing has changed, but everything has changed.

The hope of a shake or wake-up call from something so dramatic as a global pandemic is immediately lost as we re-focus on how to pay for the next meal, drink, ticket, bill, rent, mortgage, school fee or luxury item. Have we become so wedded to an economic model that we cannot see we will not get to our imagined NorthStar?

I feel right now that I have gone into a cul-de-sac and cannot find the exit. The road I was following had a shortcut, but my journey planner assumed I was walking and could hop over the gate onto the public path, not the reality that I was in my car.

I wrote about “The New Fatigue — what is this all about?” back in Feb 2021. I could not pinpoint how I was productive, maintained fitness, and ate well, but something was missing — human contact and social and chemistry-based interactions. I posted a view about the 7 B’s and how we were responding to a global pandemic; we lost #belonging. I wrote more on this under a post about Isolation — the 8th deadly sin.

Where am I going with this? We want radical change masked as a “New Normal, something better”, but we are already finding that nothing has actually changed on the journey we have been on, and I am now questioning whether the bright north star I had has lost its sparkle!

I have used heuristics and rules to help me for the longest time; anyone on the neuro-diverse spectrum has to have them because without them surviving becomes exhausting. However, these shortcuts (when created and learnt) also mean I stopped questioning why. Now that the very fabric that set up my heuristics has changed, those rules don’t necessarily work or apply. We love a shortcut because it gets us out of trouble, we love the quick route because it works, we love an easy known trusted route because we don’t have to think. We use them all the time in business to prioritise. “What is the ROI on this?” In truth, we either don’t have the resources or cannot be bothered to spend the time to look in detail, so we use the blunt tool (ROI) to make a decision.

My tools don’t work (as well or at all)

I found my NorthStar with my tools. I was navigating to the north star with my tools. My tools did not tell me I was heading past a black hole that could suck me in. I am not sensing I am lost as my tools are not telling me; all the things we did pre-pandemic don’t work as well on the other side — but nothing other than feeling lost is telling me this. We have not gone back to everything working and still have not created enough solid ground to build new rules, so we are now lost, looking for a new NorthStar with tools that do not work.

Our shortcuts sucked us in and took away the concept that we need to dwell, be together, work it out, and take time. Our tools and shortcuts reduced our time frames and tricked us into thinking they would work forever. The great quote from Anthony Zhou below assumes you know where you are going. That is not true.

How do I recognise that my north star has become a black hole because my shortcuts and rules no longer work, creating fatigue I cannot describe, and I feel lost? There is a concept of anchor points in philosophy, and it is a cognitive bias. When you lose your anchor in a harbour, you drift (ignoring sea anchors for those who sail). The same can be said when you lose your own personal anchor points that have provided the grounding for your decision making. Routines and experience are not anchor points. But the pandemic looks to have cut the ties we had to anchor points, so we feel all somewhat lost and drifting. The harder we try to re-apply the old rules, the more frustrated we become that nothing works. Perhaps it is time to make some new art such that we can discover the new rules and find some new anchor points. Then, maybe I will feel less lost?


Tuesday, 08. February 2022

MyDigitalFootprint

Hostile environments going in the right direction; might be the best place to work?


Whilst our universe is full of hostile places, and they are engaging in their own right, I want to unpack the thinking and use naturally occurring hostile environments as an analogy to help unpack complex decision making in hostile to non-hostile work environments.

----

I enjoyed reading Anti-Fragile in 2013; it is the book about things that gain from disorder by Nassim Nicholas Taleb. "Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." When writing this, I have the same problem looking for a direct opposite of a Hostile Environment, as in an extreme ecosystem (ecology),  and whilst I have opted for a non-hostile environment, anti-hostile would be better.

In nature, a hostile or extreme environment is a habitat characterised by harsh environmental conditions, beyond the optimal range for the development of humans: for example, pH 2 or 11, −20°C or 113°C, saturating salt concentrations, high radiation, or 200 bars of pressure, among others. These hostile places are super interesting because life is possible there, and it is from such places that life emerged. I will extend the thinking and use this as an analogy to help unpack complex decision making by comparing hostile to non-hostile, extreme to amicable.

 

Time apparently moves more slowly in a hostile environment than in a non-hostile one. Time, in this case, is about the period between changes. Change in a hostile environment is challenging and has to be done in tiny steps over a long period of time. Rapid change increases the risk of death and non-survival. To survive in a hostile environment, the living thing needs stability and resilience. The processes (chemistry) and methods (biology) must be finely adjusted to become incredibly efficient and effective - efficacy matters. Change is incremental and very slow. To survive, sharing is a better option; having facts (data) matters, you survive together, and there are few paradoxes. Of note is that hostile environments tend to become less hostile: the earth, as it moved from an acidic creation to our current diversity, is a case in point.

In a non-hostile environment, time can move fast between iteration/ adaptation.  The risk of a change leading to death is lower as the environment is tolerant of change.   The jumps can be more significant with fewer threats.  Because the environment has a wide tolerance, it is less sensitive to risk (5 sigma deviation is acceptable); therefore, you can have large scale automation, programmable algorithms and less finely tuned processes, methods and rules.  Innovation and change will come thick and fast as the environment is amiable and amicable.  The time between this innovation and adaptation is fast.   The environment creates more volatility, uncertainty, complexity and ambiguity.   Politics and power focus on control over survival.  The world is full of paradoxes.  Non-hostile environments will become more hostile. 

----

Yes, there are problems with all analogies, and this one breaks down; however, there is a principle here worth thinking about. In hostile environments, there are fewer paradoxes. I would argue that this is because survival is the one driving purpose. Survival is one of the four opposing purposes in the Peak Paradox model. In non-hostile environments, you can optimise for more than one thing. Indeed, you can have many purposes all competing. This leads to many paradoxes. In work environments where senior leadership is unable to comprehend paradoxes, I also observe hostile environments (different to natural ones but just as toxic). Where I find teams that embrace VUCA, innovation and change, and can see the paradoxes in their data, facts and knowledge, I observe congenial, amenable, non-hostile and anti-hostile environments.

The key here is not this observation itself but the direction of travel. Knowing which camp you are in is essential, but so too is knowing the direction. How to measure Peak-Hostility is going to be another post. Non-hostile places tend to become more hostile because of politics and power dynamics; dealing with paradoxes is hard work. Because they demand working together and clarity of purpose, hostile environments can become less hostile.

If we plot this thinking on the Peak Paradox framework, I believe it will be difficult to escape the dynamics of Peak Human Purpose (survival) until scarcity is resolved. At Peak Individual Purpose, the few will control, but this creates a hostile environment for the majority. At Peak Work Purpose, we observe fierce competition between the two camps, where hostile can win by focussing on costs, but non-hostile wins through innovation. At Peak Society Purpose, there is something unique, as non-hostile could lead to anti-hostile.

As to decision-making, what becomes critical is whether your decision-making processes match your (hostile/non-hostile) environment and direction, as they demand very different approaches. Hostile, in many ways, is more straightforward, as there is a much more defined purpose to which decisions can be aligned. Non-hostile environments introduce paradoxes, optimisation and complexity into the processes, with many interested stakeholders. If there is a mismatch in methods, this can be destructive. Much more to think about.

 

 

 


Monday, 07. February 2022

Werdmüller on Medium

The web is a miracle


Not everything has to be a business.

Continue reading on Medium »

Thursday, 03. February 2022

Altmode

Chasing Power Anomalies


Recently, we and a number of our neighbors have been noticing our lights flickering in the evening and early morning. While we have considered it to be mostly an annoyance, this has bothered some of our neighbors enough that they have opened cases with the utility and begun raising the issue on our street mailing list.

Pacific Gas and Electric (PG&E) responded to these customers with a visit, and in some cases replaced the service entrance cable to the home. In at least one case PG&E also said they might need to replace the pole transformer, which would take a few months to complete. I have heard no reports that these efforts have made any difference.

This isn’t our first recent challenge with voltage regulation in our neighborhood. The previous issue was a longer-term voltage regulation problem that occurred on hot days, apparently due to load from air conditioners and the fact that our neighborhood is fed by older 4-kilovolt service from the substation. This is different, and it raised several questions:

How local are the anomalies? Are neighbors on different parts of the street seeing the same anomalies, or are they localized to particular pole transformers or individual homes?
What is the duration and nature of the anomalies? Are they only happening in the evening and early morning, or do we just notice them at these times?

To try to answer these questions, I found a test rig that I built several years ago when we were noticing some dimming of our lights, apparently due to neighbors’ air conditioners starting on summer evenings. The test rig consists of a pair of filament transformers: 110 volt to 6 volt transformers that were used in equipment with electronic tubes, which typically used 6 volts to heat the tube’s filament. The transformers are connected in cascade to reduce the line voltage to a suitable level for the line-in audio input on a computer. An open-source audio editing program, Audacity, is used to record the line voltage. I often joke that this is a very boring recording: mostly just a continuous 60 hertz tone.

At the same time, I started recording the times our lights flickered (or my uninterruptible power supply clicked, another symptom). I asked my neighbors to record when they see their lights flicker and report that back to me.

I created a collection of 24-hour recordings of the power line, and went looking for the reported power anomalies. It was a bit of a tedious process, because not everyone’s clocks are exactly synchronized. But I was successful in identifying several power anomalies that were observed by neighbors on opposite ends of the street (about three blocks). Here’s a typical example:

Typical power anomaly

As you can see, the problem is very short in duration, about 60 milliseconds or so.

I was getting a lot of flicker reports, and as I mentioned, searching for these anomalies was tedious. So I began looking at the analysis capabilities of Audacity. I noticed a Silence Finder plug-in and attempted to search for the anomalies using that tool. But Silence Finder is designed to find the kind of silence that one might find between tracks on an LP: very quiet for a second or so. Not surprisingly, Silence Finder didn’t find anything for me.

I noticed that Silence Finder is written in a specialized Lisp-like signal processing language known as Nyquist. So I had a look at the source code, which is included with Audacity, and was able to understand quite a bit of what was going on. For efficiency reasons, Silence Finder down-samples the input data so it doesn’t have to deal with as much data. In order to search for shorter anomalies, I needed to change that, as well as the user interface limits on minimum silence duration. Also, the amplitude of the silence was expressed in dB, which makes sense for audio but I needed more sensitivity to subtle changes in amplitude. So I changed the silence amplitude from dB to a linear voltage value.
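I won’t reproduce the Nyquist code here, but the logic translates directly into other tools: slide a short window over the recording, compute its RMS level, and flag stretches that drop below a linear threshold for only a few tens of milliseconds. A rough Python/numpy equivalent (not the actual plug-in), assuming a mono WAV export of the recording, with the file name, window size, and threshold as illustrative values:

```python
# Rough numpy sketch of the glitch-finding idea (not the Nyquist plug-in):
# flag short stretches where the short-window RMS of the 60 Hz recording
# drops below a linear threshold. File name and tuning values are examples.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("powerline-recording.wav")  # mono line recording
samples = samples.astype(np.float64)
peak = np.max(np.abs(samples))
samples /= peak if peak else 1.0                         # normalize to +/- 1.0

win = int(rate * 0.010)                                  # 10 ms analysis windows
n = len(samples) // win
rms = np.sqrt(np.mean(samples[: n * win].reshape(n, win) ** 2, axis=1))

threshold = 0.8 * np.median(rms)                         # linear amplitude, not dB
low = rms < threshold

# Print the start time and duration of each run of low-level windows.
edges = np.flatnonzero(np.diff(np.concatenate(([0], low.astype(int), [0]))))
for start, stop in zip(edges[::2], edges[1::2]):
    t = start * win / rate
    dur_ms = (stop - start) * win / rate * 1000
    print(f"possible glitch at {t:8.2f} s, about {dur_ms:.0f} ms long")
```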

The result was quite helpful. The modified plug-in, which I called “Glitch Finder”, was able to quite reliably find voltage anomalies. For example:

Power recording 1/29/2022-1/30/2022

The label track generated by Glitch Finder points out the location of the anomalies (at 17:05:12, 23:00:12, and 7:17:56 the next morning), although they’re not visible at this scale. Zoom in a few times and they become quite obvious:

Power anomaly at 1/30/2022 7:17:56

Thus far I have reached these tentative conclusions:

The power problems are primarily common to the neighborhood, and unlikely to be caused by a local load transient such as plugging an electric car in.
They seem to be concentrated mainly in the evening (4-11 pm) and morning (6-10 am). These seem to be times when power load is changing, due to heating, cooking, lighting, and home solar power systems going off and on at sunset and sunrise.
The longer term voltage goes up or down a bit at the time of a power anomaly. This requires further investigation, but may be due to switching activity by the utility.

Further work

As usual, a study like this often raises new questions about as quickly as it answers questions. Here are a few that I’m still curious about.

What is the actual effect on lights that causes people to notice these anomalies so easily? I currently have an oscilloscope connected to a photoelectric cell, set to trigger when the lights flash. It will be interesting to see how that compares with the magnitude of the anomaly.
Do LED lights manifest this more than incandescent bulbs? It seems unlikely that such a short variation would affect the filament temperature of an incandescent bulb significantly.
Do the anomalies correlate with any longer-term voltage changes? My test rig measures long-term voltage in an uncalibrated way, but the processing I’m currently doing doesn’t make it easy to look at longer-term voltage changes as well.

Wednesday, 02. February 2022

Moxy Tongue

Bureaucratic Supremacy

The fight against "Bureaucratic Supremacy" affects us all. Time for unity beyond the dysfunctional cult politics driving people apart from their independence.
Words are thinking tools; used wrongly, contrived inappropriately, disseminated poorly, words can cause great harm to people and society. This is being demonstrated for all people to witness in 2020++
History is long; let the dead past bury its dead. You are here, now, and the structure of your participation in this life, the administration of your Rights in civil society, matters a great deal for the results both are capable of rendering.
"We the people" is an example of how words crafted by intent, can be manipulated over time to render outcomes out-of-step with their original intent. Once upon a time.. "people" was an easy word to define. An experiment in self-governance, unique in all the world and history, arrived because people made it so. Fast forward to 2022, and people no longer function as "people" under the law; in lieu of basic observable fact, a bureaucratic interpretation and abstraction of intent has been allowed to take root among people.
Oft confused in 2020++ with phrases like "white supremacy", or "Institutional racism", methods of administrative bureaucracy have taken a supreme role in defining and operationalizing Rights defined for "people". This "bureaucratic supremacy" has allowed abstraction of words like "people" to render a Government operated by bureaucrats, not under the authority of the people "of, by, for" whom it was originally conceived and instantiated, but instead under methods of processing bureaucratic intent. From the point-of-view of the oppressed historically, the bureaucracy has skin in the game, and domination is absolute. But, from the point-of-view of the unexperienced future derived "of, by, for" people, skin has nothing to do with it. History's leverage is one of administrative origin.
Pandemics will come and go in time; experiences of the administrative machinery that guarantees the integrity, security and continued self-governance of society by people should never be overlooked. Especially in the context of now "computational" Constitutional Rights for people, (not birth certificates - vaccination passport holders - or ID verification methods & artifacts poorly designed) where operational structure determines the integrity of operational run time results. Literature might say "Freedom of Speech" is a Right, but if the administrative system does not compute said Rights, then they cease to exist. 
"Bureaucratic Supremacy" has a predictable moat surrounding its practices; credentialed labor. Employees with labor certifications are only as useful as the validity of the credential under inspection and in practice. Administering the permission to be hired, work, contribute value and extend a meaningful voice into a civil system is easily sequestered if/when that credential is meaningless under inspection, and is only used as means of identifying bureaucratic compliance.
Bureaucratic supremacy is the direct result of bureaucratic compliance; people, functioning as "people" willing to cede their inalienable rights in exchange for a paycheck, yield a systemic approach to human management that often counters the initial intent and integrity of a system's existence. Often heard when something happens that lacks systemic integrity, "I was only doing my job" represents an output of bureaucratic fraud, whereby people claim plausible deniability of responsibility and accountability based on the structure of their working efforts. Corporate law is founded on the premise of "liability control", whereby a resulting bureaucracy allows real human choices to function as bureaucratic outcomes lacking any real direct human definition. People are no longer operating as "people" when abstracted by the law in such ways, and the world over, systems of bureaucracy with historic significance control and confuse the interpretation of results that such a system of labor induces.
Rooted at the origin of a self-governed civil society is an original act of human Sovereignty. In America, this act is writ large by John Hancock for a King's benefit, as well as every administrative bureaucracy the world will ever come to experience. People declare independence from bureaucracies by personal Sovereign authority. This is the root of Sovereign authority, and can never be provisioned by a bureaucracy. Bureaucratic supremacy is a perversion of this intent, and labor credentials make it so.
Where do "we" go from here?
People, Individuals all, is the only living reality of the human species that will ever actually exist. People, living among one another, never cease to function as Individuals, and any systemic process that uses a literary abstraction, or computational abstraction to induce "we" into a bureaucratic form is an aggressive act of fraud against Humanity, and the Sovereignty defined "of, by, for" such people.
In America, people are not the dog of their Government; self-governance is man's best friend. The order of operations in establishing such a "more perfect Union" is critical for its sustained existence. Be wary of listening to lifetime bureaucrats; they will speak with words that are no longer tools for human advancement, but instead, are designed to reinforce the "bureaucratic supremacy" of the authority derived by their labor credentials. Inspect those credentials directly to ensure they are legitimate, for false labor credentials are endemic.
Structure yields results; fraud by bureaucracy and Rights for people are juxtapositional and never exist in the same place at the same time. 
Think About It.. More Individuals in Civil Society Needed: https://youtu.be/KHbzSif78qQ

Tuesday, 01. February 2022

Here's Tom with the Weather

Although it makes a good point, the "False balance" article seems to accept the widely held assumption that Rogan is just "letting people voice their views" without interrupting them, but he did interrupt recently, with guest Josh Szeps, to wrongly argue against covid myocarditis evidence.

Monday, 31. January 2022

Identity Woman

Reality 2.0 Podcast: ID.me Vs. The Alternatives

I chatted with Katherine Druckman and Doc Searls of Reality 2.0 about the dangers of ID.me, a national identity system created by the IRS and contracted out to one private company, and the need for the alternatives, decentralized systems with open standards. 

The post Reality 2.0 Podcast: ID.me Vs. The Alternatives appeared first on Identity Woman.

Sunday, 30. January 2022

Jon Udell

Life in the neighborhood

I’ve worked from home since 1998. All along I’ve hoped many more people would enjoy the privilege and share in the benefits. Now that it’s finally happening, and seems likely to continue in some form, let’s take a moment to reflect on an underappreciated benefit: neighborhood revitalization.

I was a child of the 1960s, and spent my grade school years in a newly-built suburb of Philadelphia. Commuter culture was well established by then, so the dads in the neighborhood were gone during the day. So were some of the moms, mine included, but many were at home and were able to keep an eye on us kids as we played in back yards after school. And our yards were special. A group of parents had decided not to fence them, thus creating what was effectively a private park. The games we played varied from season to season but always involved a group of kids roaming along that grassy stretch. Nobody was watching us most of the time. Since the kitchens all looked out on the back yards, though, there was benign surveillance. Somebody’s mom might be looking out at any given moment, and if things got out of hand, somebody’s mom would hear that.

For most kids, a generation later, that freedom was gone. Not for ours, though! They were in grade school when BYTE Magazine ended and I began my remote career. Our house became an after-school gathering place for our kids and their friends. With me in my front office, and Luann in her studio in the back, those kids enjoyed a rare combination of freedom and safety. We were mostly working, but at any given moment we could engage with them in ways that most parents never could.

I realized that commuter culture had, for several generations, sucked the daytime life out of neighborhoods. What we initially called telecommuting wasn’t just a way to save time, reduce stress, and burn less fossil fuel. It held the promise of restoring that daytime life.

All this came back to me powerfully at the height of the pandemic lockdown. Walking around the neighborhood on a weekday afternoon I’d see families hanging out, kids playing, parents working on landscaping projects and tinkering in garages, neighbors talking to one another. This was even better than my experience in the 2000s because more people shared it.

Let’s hold that thought. Even if many return to offices on some days of the week, I believe and hope that we’ve normalized working from home on other days. By inhabiting our neighborhoods more fully on weekdays, we can perhaps begin to repair a social fabric frayed by generations of commuter culture.

Meanwhile here is a question to ponder. Why do we say that we are working from home and not working at home?


Randall Degges

How to Calculate the Energy Consumption of a Mac

I’m a bit of a sustainability nerd. I love the idea of living a life where your carbon footprint is neutral (or negative) and you leave the world a better place than it was before you got here.

While it’s clear that there’s only so much impact an individual can have on carbon emissions, I like the idea of working to minimize my personal carbon footprint. This is a big part of the reason why I live in a home with solar power, drive an electric vehicle, and try to avoid single-use plastics as much as possible.

During a recent impact-focused hackathon at work (come work with me!), I found myself working on an interesting sustainability project. Our team’s idea was simple: because almost all Snyk employees work remotely using a Mac laptop, could we measure the energy consumption of every employee’s Mac laptop to better understand how much energy it takes to power employee devices, as well as the amount of carbon work devices produce?

Because we know (on average) how much carbon it takes to produce a single kilowatt-hour (kWh) of electricity in the US (0.85 pounds of CO2 emissions per kWh), if we could figure out how many kWh of electricity were being used by employee devices, we’d be able to do some simple math and figure out two things:

- How much energy is required to power employee devices
- How much carbon is being put into the atmosphere by employee devices

Using this data, we could then donate money to a carbon offsetting service to “neutralize” the impact of our employee’s work devices.
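To make that math concrete, here is the whole conversion in one line (the kWh figure is a made-up example, not a real measurement):

kwh=12.5                      # hypothetical energy used by one device, in kWh
echo "$kwh * 0.85" | bc -l    # => 10.625 pounds of CO2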

PROBLEM: Now, would this be a perfectly accurate way of measuring the true carbon impact of employees? Absolutely not – there are obviously many things we cannot easily measure (such as the amount of energy of attached devices, work travel, food consumption, etc.), but the idea of being able to quantify the carbon impact of work laptops was still interesting enough that we decided to pursue it regardless.

Potential Energy Tracking Solutions

The first idea we had was to use smart energy monitoring plugs that employees could plug their work devices into while charging. These plugs could then store a tally of how much energy work devices consume, and we could aggregate that somewhere to get a total amount of energy usage.

I happen to have several of the Eve Energy smart plugs around my house (which I highly recommend if you use Apple’s HomeKit) that I’ve been using to track my personal energy usage for a while now.

While these devices are incredible (they work well, come with a beautiful app, etc.), unfortunately, they don’t have any sort of publicly accessible API you can use to extract energy consumption data.

We also looked into various other types of smart home energy monitoring plugs, including the Kasa Smart Plug Mini, which does happen to have an API.

Unfortunately, however, because Snyk is a global company with employees all over the world, hardware solutions were looking less and less appealing as to do what we wanted, we’d need to:

- Ship country-specific devices to each new and existing employee
- Include setup instructions for employees (how to configure the plugs, how to hook them up to a home network, etc.)
- Instruct employees to always plug their work devices into these smart plugs, which many people may forget to do

Is It Possible to Track Mac Energy Consumption Using Software?

When someone on the team proposed using software to track energy consumption, I thought it’d be a simple task. I assumed there were various existing tools we could easily leverage to grab energy consumption data. But boy, oh boy, I was wrong!

As it turns out, it’s quite complicated to figure out how many watt-hours of electricity your Mac laptop is using. To the best of my knowledge, there are no off-the-shelf applications that do this.

Through my research, however, I stumbled across a couple potential solutions.

Using Battery Metrics to Calculate Energy Consumption

The first idea I had was to figure out the size of the laptop’s battery (in milliamp-hours (mAh)), as well as how many complete discharge cycles the battery has been through (how many times has the battery been fully charged and discharged).

This information would theoretically allow us to determine how much energy a Mac laptop has ever consumed by multiplying the size of the battery in mAh by the number of battery cycles. We could then simply convert the number of mAh -> kWh using a simple formula.

After a lot of Google-fu and command-line scripting, I was able to get this information using the ioreg command-line tool, but in the process, I realized that there was a critical problem with this approach.
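As a rough sketch of that calculation (the exact ioreg key names can vary across Mac models, and the ~11.4 V nominal battery voltage is an assumption, not something ioreg reports):

capacity_mah=$(ioreg -rw0 -c AppleSmartBattery | grep -o '"DesignCapacity" = [0-9]*' | grep -o '[0-9]*$')
cycles=$(ioreg -rw0 -c AppleSmartBattery | grep -o '"CycleCount" = [0-9]*' | grep -o '[0-9]*$')
# mAh x cycles x nominal volts / 1,000,000 = approximate lifetime kWh drawn from the battery
echo "$capacity_mah * $cycles * 11.4 / 1000 / 1000" | bc -l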

The problem is that while the variables I mentioned above will allow you to calculate the energy consumption of your laptop over time, when your laptop is fully charged and plugged into a wall outlet it isn’t drawing down energy from the battery – it’s using the electricity directly from your wall.

This means that the measuring approach above will only work if you never use your laptop while it is plugged into wall chargers – you’d essentially need to keep your laptop shut down while charging and only have it turned on while on battery power. Obviously, this is not very realistic.

Using Wall Adapter Information to Calculate Energy Consumption

After the disappointing battery research, I decided to take a different approach. What if there was a way to extract how much energy your laptop was pulling from a wall adapter?

If we were able to figure out how many watts of electricity, for example, your laptop was currently drawing from a wall adapter, we could track this information over time to determine the amount of watt-hours of electricity being consumed. We could then easily convert this number to kWh or any other desired measure.
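Concretely, a reading of W watts sampled once per minute contributes W/60 watt-hours to a running total. Using the wattage from the first log entry shown further below:

wattage=8.37764739                # instantaneous draw in watts at the time of the sample
echo "$wattage / 60" | bc -l      # => .13962745650000000000 watt-hours for that minute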

And… After a lot of sifting through ioreg output and some help from my little brother (an engineer who helps build smart home electric panels), I was able to successfully extract the amount of watts being pulled from a plugged-in wall adapter! Woo!

The Final Solution: How to Calculate the Energy Consumption of Your Mac Using Software

After many hours of research and playing around, what I ended up building was a small shell script that parses through ioreg command-line output and extracts the amount of watts being pulled from a plugged-in wall adapter.

This shell script runs on a cron job once a minute, logging energy consumption information to a file. This file can then be analyzed to compute the amount of energy consumed by a Mac device over a given time period.
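The scheduling itself is an ordinary crontab entry along these lines (the script path and log file name here are placeholders, not the ones used in the repository):

# sample once a minute and append the result to a log file
* * * * * /usr/local/bin/mac-energy.sh >> "$HOME/energy.log" 2>&1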

I’ve packaged this solution up into a small GitHub project you can check out here.

The command I’m using to grab the wattage information is the following:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery | grep BatteryData | grep -o '"AdapterPower"=[0-9]*' | cut -c 16- | xargs -I % lldb --batch -o "print/f %" | grep -o '$0 = [0-9.]*' | cut -c 6-

Here it is broken down with a brief description of what these commands are doing:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery |   # retrieve power data
  grep BatteryData |                          # filter it down to battery stats
  grep -o '"AdapterPower"=[0-9]*' |           # extract adapter power info
  cut -c 16- |                                # extract power info number
  xargs -I % lldb --batch -o "print/f %" |    # convert power info into an IEEE 754 float
  grep -o '$0 = [0-9.]*' |                    # extract only the numbers
  cut -c 6-                                   # remove the formatting

The output of this command is a number which is the amount of watts currently being consumed by your laptop (I verified this by confirming it with hardware energy monitors). In order to turn this value into a usable energy consumption metric, you have to sample it over time. After thinking this through, here was the logging format I came up with to make tracking energy consumption simple:

timestamp=YYYY-MM-DDTHH:MM:SSZ wattage=<num> wattHours=<num> uuid=<string>

This format allows you to see:

- The timestamp of the log
- The amount of watts being drawn from the wall at the time of measurement (wattage)
- The number of watt hours consumed at the time of measurement (wattHours), assuming this measurement is taken once a minute, and
- The unique Mac UUID for this device. This is logged to help with deduplication and other statistics in my case.

Here’s an example of what some real-world log entries look like:

timestamp=2022-01-30T23:41:00Z wattage=8.37764739 wattHours=.13962745650000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:42:01Z wattage=8.7869072 wattHours=.14644845333333333333 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:43:00Z wattage=9.16559505 wattHours=.15275991750000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:44:00Z wattage=8.49206352 wattHours=.14153439200000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:45:00Z wattage=7.45262718 wattHours=.12421045300000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999

To sum up the amount of energy consumption over time, you can then parse this log file and sum up the wattHours column over a given time period. Also, please note that the script I wrote will NOT log energy consumption data to the file if there is no energy being consumed (aka, your laptop is not plugged into a wall adapter).
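For example, summing one day's consumption is a quick one-liner (the log file name is a placeholder):

grep 'timestamp=2022-01-30' "$HOME/energy.log" \
  | sed 's/.*wattHours=\([0-9.]*\).*/\1/' \
  | awk '{ total += $1 } END { printf "%.3f watt-hours\n", total }'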

PROBLEMS: While this is the final solution we ended up going with, it still has one fatal flaw: this approach only works if the script is run once a minute. This means that if your laptop is shut down or sleeping and this code is not running, there will be no way to log energy consumption data.

What I Learned About Tracking Energy Consumption on Macs

While building our short sustainability-focused hackathon project, I learned a lot about tracking energy consumption on Macs.

- Your laptop doesn’t always use its battery as a power source, so tracking battery metrics is not an ideal solution
- It’s possible to track energy consumption by measuring the draw from wall adapters, although this approach isn’t perfect as it requires your computer to be on and running code on a regular interval
- While using hardware energy trackers isn’t convenient in our case, this is certainly the simplest (and probably the best) option for personal energy tracking

If you’d like to see the software-based energy tracking solution I built, please check it out on GitHub.

I’m currently in the process of following up with Snyk’s IT department to see if this is something we could one day roll out automatically to employee devices. I still think it would be incredibly interesting to see a central dashboard of how much energy Snyk employees are using to “power” their work, and what that amount of carbon looks like.

PS: The creation of this blog post took precisely 19.972951810666647 watt-hours of electricity and generated .016977009039067 pounds of CO2.

Saturday, 29. January 2022

Mike Jones: self-issued

Working Group Adoption of the JWK Thumbprint URI Specification

The IETF OAuth working group has adopted the JWK Thumbprint URI specification. The abstract of the specification is:

This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts requiring URIs.

The need for this arose during specification work in the OpenID Connect working group. In particular, JWK Thumbprint URIs are used as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs in the Self-Issued OpenID Provider v2 specification.
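For a sense of what the identifier is built from: an RFC 7638 thumbprint is the base64url-encoded SHA-256 hash of the key's required members serialized in lexicographic order with no whitespace, and the specification then wraps that value in a URI. A minimal sketch, with placeholder EC coordinates rather than a real key:

printf '%s' '{"crv":"P-256","kty":"EC","x":"<base64url-x>","y":"<base64url-y>"}' \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '='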

Given that the specification does only one simple thing in a straightforward manner, we believe that it is ready for working group last call.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-00.html

Aaron Parecki

Stream a USB webcam to HDMI on a Raspberry Pi

This post exists to collect my notes on displaying a USB webcam on the Raspberry Pi HDMI outputs. This is not the same as streaming the webcam (easy), and this is not for use with the Raspberry Pi camera module. This is specifically for USB UVC webcams.

Note: Do not actually do this, it's terrible.

Install Raspberry Pi OS Lite, you don't want the full desktop environment.

Once you boot the Pi, install VLC and the X windows environment:

sudo apt install vlc xinit

Configure your Pi to boot to the command line already logged in, using the tool raspi-config.

Create the file ~/.bash_profile with the following contents which will start X on boot:

if [ -z "$DISPLAY" ] && [ "$(tty)" = "/dev/tty1" ]
then
startx
fi

Create the file ~/.xinitrc to launch VLC streaming the webcam when X launches:

#!/bin/bash
cvlc v4l2:// :v4l2-dev=/dev/video0

Now you can reboot the Pi with a webcam plugged in and you'll get a full screen view of the camera.

If your webcam isn't recognized when it first boots up, you'll need to quit VLC and start it again. You can quit by pressing ctrl-Q, then type startx to restart it after you plug the camera back in. If that doesn't work, you might have to SSH in and kill the process that way.
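For the SSH route, killing the player by name is usually enough (a guess at the most convenient command; cvlc is a wrapper that ends up running a plain vlc process):

pkill vlc    # stop the camera view; log back in on the console to start it again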

There are many problems with this approach:

- It seems VLC is not hardware accelerated so there is pretty bad tearing of the image
- Sometimes the webcam isn't recognized when the Pi boots up and I have to unplug it and plug it back in when it boots and restart the script
- The image tearing and stuttering is completely unusable for pretty much anything

Do you know of a better solution? Let me know!

So far I haven't found anything that actually works, and I've searched all the forums and tried all the solutions with guvcview and omxplayer with no luck so far.

For some other better solutions, check out my blog post and video How to Convert USB Webcams to HDMI.


Werdmüller on Medium

Surfing the stress curve

Using the Yerkes-Dodson Law to craft a better, calmer life.

Continue reading on Medium »


Hans Zandbelt

OpenID Connect for Oracle HTTP Server

Over the past years ZmartZone enabled a number of customers to migrate their Single Sign On (SSO) implementation from proprietary Oracle HTTP Server components to standards-based OpenID Connect SSO. Some observations about that:

- Oracle Webgate and mod_osso are SSO plugins (aka agents) for the Oracle HTTP Server (OHS) that implement a proprietary (Oracle) SSO/authentication protocol that provides authentication (only) against Oracle Access Manager
- the said components are closed-source implementations owned by Oracle
- these components leverage a single domain-wide SSO cookie which has known security drawbacks, especially in today's distributed and delegated (cloud and hybrid) application landscape, see here
- ZmartZone supports builds of mod_auth_openidc that can be used as plugins into Oracle HTTP Server (11 and 12), thus implementing standards-based OpenID Connect for OHS with an open source component
- those builds are a drop-in replacement into OHS that can even be used to set the same headers as mod_osso/Webgate does/did
- mod_auth_openidc can be used to authenticate to Oracle Access Manager but also to (both commercial and free) alternative Identity Providers such as PingFederate, Okta, Keycloak etc.
- when required, Oracle HTTP Server can be replaced with stock Apache HTTPd
- the Oracle HTTP Server builds of mod_auth_openidc come as part of a light-weight commercial support agreement on top of the open source community support channel

In summary: modern OpenID Connect-based SSO for Oracle HTTP Server can be implemented with open source mod_auth_openidc following a fast, easy and lightweight migration plan.
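For readers wondering what the end state looks like, a minimal mod_auth_openidc configuration is sketched below; the hostnames, client values and protected path are placeholders, and a real deployment will need more than this:

OIDCProviderMetadataURL https://idp.example.com/.well-known/openid-configuration
OIDCClientID my-client-id
OIDCClientSecret my-client-secret
OIDCRedirectURI https://www.example.com/protected/redirect_uri
OIDCCryptoPassphrase change-me-to-a-long-random-value

<Location /protected>
   AuthType openid-connect
   Require valid-user
</Location>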

See also:
https://hanszandbelt.wordpress.com/2021/10/28/mod_auth_openidc-vs-legacy-web-access-management
https://hanszandbelt.wordpress.com/2019/10/23/replacing-legacy-enterprise-sso-systems-with-modern-standards/

Friday, 28. January 2022

Identity Woman

Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee

We see democracy as ideally a process of co-creating the conditions of our shared lives, solving our collective problems, and learning about life from and with each other. Most of the social technologies for democracy we work with are grounded in conversation – discussion, dialogue, deliberation, choice-creating, negotiation, collective visioning, and various forms of council, […]

The post Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee appeared first on Identity Woman.

Monday, 24. January 2022

Jon Udell

Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone.

“Don’t worry,” I said confidently, “it’ll come back to one of us.”

Sure enough, a few days later, on a bike ride, the name popped into my head. I’m sure you’ve had the same experience. This time around it prompted me to think about how that happens.

To me it feels like starting up a background search process that runs for however long it takes, then notifies me when the answer is ready. I know the brain isn’t a computer, and I know this kind of model is suspect, so I wonder what’s really going on.

– Why was I so sure the name would surface?

– Does a retrieval effort kick off neurochemical change that elaborates over time?

– Before computers, what model did people use to explain this phenomenon?

So far I’ve only got one answer. That spunky little girl was Diana.


Hyperonomy Digital Identity Lab

Trusted Digital Web (TDW2022): Characteristic Information Scopes

Figure 1. Trusted Digital Web (TDW2022): Characteristic Information Scopes (based on the Social Evolution Model)

Sunday, 23. January 2022

Moxy Tongue

Rough Seas Ahead People

The past is dead. 
You are here now.
The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas. 
Whereas human; automated human process. Automate human data extraction. Automate human data use.
I am purposefully vague -> automate everything that can be automated .. this is here, now.
What is a Constitution protecting both "Human Rights" and "Civil Rights"? 
From the view of legal precedent and human intent actualized, it is a document, a work of literary construct, and its words are utilized to determine meaning in legal concerns where the various Rights of people are concerned. Imperfect words of literature, implemented in their time and place. And of those words, a Governing system of defense for the benefit "of, by, for" the people Instituting such Governance.
This is the simple model, unique in the world, unique in history as far as is known to storytellers the world over. A literary document arriving here and now as words being introduced to their data manifestations. Data loves words. Data loves numbers. Data loves people the most. Why?
Data is "literally" defined as "data" in relation to the existence of Humanity. That which has no meaning to Humanity is not considered "data" being utilized as such. Last time I checked, Humanity did not know everything, yet. Therefore much "data" has barely been considered as existing, let alone being understood in operational conditions called "real life", or "basic existence" by people. 
This is our administrative problem; words are not being operationalized accurately as data. The relationship between "words" and "data" as operational processes driving the relationship between "people" and "Government Administration" has not been accurately structured. In other words, words are not being interpreted as data accurately enough, if at all.
A governed system derived "of, by, for" the people creating and defending such governed process, has a basic starting point. It seems obvious, but many are eager to acquiesce to something else upon instantiation of a service relationship, when easy or convenient enough, so perhaps "obvious" is just a word. "Of, By, For" people means that "Rights" are for people, not birth certificates. 
Consider how you administer your own life. Think back to last time you went to the DMV. Think back to last time you filed taxes and something went wrong that you needed to fix. Think back to when you registered your child for kindergarten. Think back to the last time you created an online bank account. 
While you are considering these experiences, consider the simultaneous meaning created by the words "of, by, for" and whether any of those experiences existed outside of your Sovereign Rights as a person.
Humanity does not come into existence inside a database. The American Government does not come into authority "of, by, for" database entries. 
Instead, people at the edges of society, in the homes of our towns derive the meaning "of, by, for" their lawful participation. Rights are for people, not birth certificates. People prove birth certificates, birth certificates do not prove people. If an administrative process follows the wrong "administrative precedent" and logic structure, then "words" cease meaning what they were intended to mean.
This words-to-data sleight of hand is apparently easy to run on people. The internet, an investment itself of Government created via DARPA and made public via NSF, showcases daily the misconstrued meaning of "words" as "data". People are being surveilled, tracked and provisioned access to services based on having their personal "ID:DATA" leveraged. In some cases, such as the new ID.me services being used at Government databases, facial scans are being correlated to match people as "people" operating as "data". The methods used defy "words" once easily accessible, and have been replaced by TOSDR higher up the administrative supply chain as contracts of adhesion.
Your root human rights, the basic meaning of words with Constitutional authority to declare war upon the enemies of a specific people in time, have been usurped, and without much notice, most all people have acquiesced to the "out-of-order" administrative data flows capturing their participation. Freedom can not exist on such an administrative plantation, whereby people are captured as data for use by 2nd and 3rd parties without any root control provided to the people giving such data existence and integrity.
People-backwards-authority will destroy this world. America can not be provisioned from a database. People possess root authority in America. America is the leader of the world, and immigrants come to America because "people possess root authority" in America. "Of, By, For" People in America, this is the greatest invention of America. Owning your own authority, owning root authority as a person expressing the Sovereign structure of your Rights as a person IS the greatest super power on planet Earth.
The American consumer marketplace is born in love with the creative spirit of Freedom. The American Dream lures people from the world over to its shores. A chance to be free, to own your own life and express your freedom in a market of ideas, where Rights are seen, protected, and leveraged for the benefit of all people. A place where work is honored, and where ladders may be climbed by personal effort and dedication in pursuit of myriad dreams. A land honored by the people who sustain its promise, who guard its shores, and share understanding of how American best practices can influence and improve the entire world.
It all begins with you.
If I could teach you how to do it for yourself I would. I try. My words here are for you to use as you wish. I donate them with many of my efforts sustained over many years. This moment (2020-2022) has been prepared for by many for many many years. A populace ignorant of how data would alter the meaning of words in the wrong hands was very predictable. Knowing what words as data meant in 1992 was less common. In fact, getting people to open ears, or an email, was a very developmental process. Much hand-holding, much repetition. I have personally shared words the world over, and mentored 10's of thousands over the past 25 years. To what end?
I have made no play to benefit from the ignorance of people. I have sought to propel conversation, understanding, skill, and professional practices. By all accounts, I have failed at scale. The world is being over-run by ignorance, and this ignorance is being looted, and much worse, it is being leveraged against the best interest of people, Individuals all.
"We the people" is a literary turn-of-hand in data terms; People, Individuals All. The only reality of the human species that matters is the one that honors what people actually are. Together, each of us as Individual, living among one another.. is the only reality that will ever exist. "We" is a royal construct if used to instantiate an Institutional outcome not under the control of actual people as functioning Individuals, and instead abstracts this reality via language, form, contract or use of computer science to enable services to be rendered upon people rather than "of, by, for" people.
The backwards interpretation of words as data process is the enemy of Humanity. Simple as that.
You must own root authority; Americans, People. 

Read Next: Bureaucratic Supremacy




Werdmüller on Medium

The deep, dark wrongness

The internet, community, and finally being yourself

Continue reading on Medium »

Tuesday, 18. January 2022

Kerri Lemole

W3C Verifiable Credentials Education Task Force 2022 Planning

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots.

To get things in order, we compiled a list of topics and descriptions in this sheet and have set up a ranking system. This ranking system is open until January 19 at 11:59pm ET and anyone is invited to weigh in. The co-chairs will evaluate the results and we’ll discuss them at the January 24th VC-EDU Call (call connection info).

It’s a lengthy and thought-provoking list and I hope we have the opportunity to dig deep into each of these topics and maybe more. I reconsidered my choices quite a few times before I landed on these top 5:

1. Verifiable Presentations (VPs) vs (nested) Verifiable Credentials (VCs) in the education context — How to express complex nested credentials (think full transcript). The description references full transcript but this topic is also related to presentation of multiple single achievements by the learner. I ranked this first because presentations are a core concept of VCs and very different from how the education ecosystem is accustomed to sharing their credentials. VPs introduce an exchange of credentials in response to a verifiable request versus sharing a badge online or emailing a PDF. Also, there’s been quite a bit of discussion surrounding more complex credentials such as published transcripts that we can get into here.
2. Integration with Existing Systems — Digitizing existing systems, vs creating; existing LMSes; bridging; regulatory requirements — ex: licensing, PDFs needing to be visually inspected. To gain some traction with VCs, we need to understand how systems work now and what can be improved upon using VCs but also, how do we make VCs work with what is needed now?
3. Bridging Tech. This ties into integrating with existing systems above. We are accustomed to the tech we have now and it will be with us for some time. For instance, email will still be used for usernames and identity references even when Decentralized Identifiers start gaining traction. They will coexist and it can be argued that compromises will need to be made (some will argue against this).
4. Protocols — Much of the work in VC-EDU so far has been about the data model. But what about the protocols — what do we /do/ with the VCs once we settle on the format? (How to issue, verify, exchange, etc.) This made my top five because as the description notes, we’re pretty close to a data model but we need to understand more about the protocols that deliver, receive, and negotiate credential exchanges. Part of what we do in VC-EDU is learn more about what is being discussed and developed in the broader ecosystem and understanding protocols will help the community with implementation.
5. Context file for VC-EDU — Create a simple context file to describe an achievement claim (a rough sketch of such a claim follows this list). There are education standards organizations like IMS Global (Open Badges & CLR) that are working towards aligning with VC-EDU but having an open, community-created description of an achievement claim, even if it reuses elements from other vocabularies, will provide a simple and persistent reference. A context file in VC-EDU could also provide terms for uses in VCs that haven’t yet been explored in education standards organizations and could be models for future functionality considerations.
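To make that last item a little more concrete, the kind of payload such a context would need to describe looks roughly like the sketch below; the hasAchievement term and the DIDs are purely illustrative placeholders, not terms from any existing VC-EDU vocabulary:

{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential"],
  "issuer": "did:example:university",
  "issuanceDate": "2022-01-18T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:learner",
    "hasAchievement": { "name": "Example Course Completion" }
  }
}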

Monday, 17. January 2022

Here's Tom with the Weather

TX Pediatric Covid Hospitalizations

Using data from healthdata.gov, this is a graph of the “total_pediatric_patients_hospitalized_confirmed_covid” column over time for Texas. A similar graph for the U.S was shown on Twitter by Rob Swanda.


Markus Sabadello on Medium

Transatlantic SSI Interop

Today, there are more and more initiatives working on decentralized identity infrastructures, or Self-Sovereign Identity (SSI). However, there is a big paradox underlying all those initiatives: Even though they often use the same technical specifications, e.g. W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), they are in practice usually not compatible. There are just too many details where technological choices can diverge. Yes, we all use DIDs and VCs. But do we use Data Integrity Proofs (formerly called Linked Data Proofs) or JWT Proofs? JSON-LD contexts or JSON schemas, or both? Do we use DIDComm (which version?), or CHAPI, or one of the emerging new variants of OpenID Connect? Which one of the many revocation mechanisms? Which DID methods? How do we format our deep links and encode our QR codes?

We all want to build the missing identity layer for the Internet, where everything is interoperable just like on the web. But we all do it in slightly different ways. So how can we solve this paradox? Do we create yet another interoperability working group?

No! We try out simple steps and make them work. We conduct concrete experiments that bridge gaps and cross borders. In this case, we planned and executed an experiment that demonstrates interoperability between prominent decentralized identity initiatives in the EU and the US, funded by the NGIatlantic.eu program. Two companies collaborated on this project: Danube Tech (EU) and Digital Bazaar (US).

EU-US collaboration on decentralized identity

In the EU, the European Blockchain Service Infrastructure (EBSI) is building an ambitious network that could become the basis for a digital wallet for all EU citizens. In the US, the Department of Homeland Security‘s Silicon Valley Innovation Program (SVIP) is working with companies around the world on personal digital credentials as well as trade use cases. Both projects have developed easy-to-understand narratives (student “Eva” in EBSI, immigrant “Louis” in SVIP). Both narratives are described further in the W3C’s DID Use Cases document (here and here). So we thought, let’s conduct an experiment that combines narratives and technological elements from both the EU and US sides!

SVIP (left side) and EBSI (right side)

We built and demonstrated two combined stories:

- Eva studied in the EU and would then like to apply for a US visa. In this story, there is an EU-based Issuer of a VC, and a US-based Verifier.
- Louis is an immigrant in the US and would like to apply for PhD studies at an EU university. In this story, there is a US-based Issuer of a VC, and an EU-based Verifier.

For a walkthrough video, see: https://youtu.be/1t9m-U-3lMk

For a more detailed report, see: https://github.com/danubetech/transatlantic-ssi-interop/

In the broader decentralized identity community, both the EU- and US-based initiatives currently have strong influence. EBSI’s main strength is its ability to bring together dozens of universities and other organizations to build a vibrant community of VC Issuer and Verifiers. SVIP’s great value has been its continuous work on concrete test suites and interoperability events (“plugfests”) that involve multiple heterogeneous vendor solutions.

In this project, we used open-source libraries supported by ESSIF-Lab, as well as the Universal Resolver project from the Decentralized Identity Foundation (DIF). We also used various components supplied by Digital Bazaar, such as the Veres Wallet.
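As an aside for anyone who wants to poke at one of these building blocks: the Universal Resolver exposes a simple HTTP interface, and its hosted development instance can be queried with plain curl (endpoint as documented in the project's README; substitute a DID from any supported method):

curl -s https://dev.uniresolver.io/1.0/identifiers/<your-did-here>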

We hope that our “Transatlantic SSI Interop” experiment can serve as an inspiration and blueprint for further work on interoperability not only between different DID methods and VC types, but also between different vendors, ecosystems, and even continents.

Wallet containing an EU Diploma and US Permanent Resident Card

Aaron Parecki

How to Green Screen on the YoloBox Pro

This step-by-step guide will show you how to use the chroma key feature on the YoloBox Pro to green screen yourself onto picture backgrounds and videos, or even add external graphics from a computer.

There are a few different ways to use the green screening feature in the YoloBox. You can use it to add a flat virtual background to your video, or you could use it to put yourself over a moving background or other video sources like an overhead or document camera. You could even key yourself over your computer screen showing your slides from a presentation.

You can also switch things around and instead of removing the background from your main camera, instead you can generate graphics on a computer screen with a green background and add those on top of your video.

Setting up your green screen

Before jumping in to the YoloBox, you'll want to make sure your green screen is set up properly. A quick summary of what you'll need to do is:

- Light your green screen evenly
- Light your subject
- Don't wear anything green

Watch Kevin The Basic Filmmaker's excellent green screen tutorial for a complete guide to these steps!

Green screening on top of an image

We'll first look at how to green screen a camera on top of a static image. You can load images in to the YoloBox by putting them on the SD card. I recommend creating your background image at exactly the right size first, 1920x1080.

On the YoloBox, click the little person icon in the top right corner of the camera that you want to remove the green background from.

That will open up the Chroma Key Settings interface.

Turn on the "Keying Switch", and you should see a pretty good key if your green screen is lit well. If you have a blue screen instead of green, you can change that setting here. The "Similarity" and "Smoothness" sliders will affect how the YoloBox does the key. Adjust them until things look right and you don't have too much of your background showing and it isn't eating into your main subject.

Tap on the "Background Image" to choose which image from your SD card to use as the background. Only still graphics are supported.

Click "Done" and this will save your settings into that camera's source.

Now when you tap on that camera on the YoloBox, it will always include the background image in place of the green screen.

Green screening on top of other video sources

Green screening yourself on top of other video sources is similar but a slightly different process.

First, set up your HDMI source as described above, but instead of choosing a background image, leave it transparent.

Then click the "Add Video Source" button to create a new picture-in-picture layout.

Choose "PiP Video" from the options that appear. For the "Main Screen", choose the video angle that you want to use as the full screen background that you'll key yourself on top of. This could be a top down camera or could be your computer screen with alides for a presentation. It will then ask you to choose a "Sub Screen", and that is where you'll choose your camera angle that you've already set up for chroma keying.

This is where you can choose how big you want your picture to be, and you can drag it around with your finger to change the position.

Once you save this, your new PiP layout will appear as another camera angle you can switch to.

Cropping the green screened video

You may notice that if your green background doesn't cover the entire frame, you'll have black borders on the sides of your chroma keyed image. The YoloBox doesn't exactly have a cropping feature to fix this, but you can use the "Aspect Ratio" setting to crop the background.

You can edit your PiP video settings and choose "1:1" in the Aspect Ratio option to crop your video to a square, removing the black borders from the edges.

Adding computer graphics using the chroma key

Lastly, let's look at how to bring in graphics from an external computer source and key them out on the YoloBox.

When you plug in your computer's HDMI to the YoloBox, your computer will see it as an external monitor. Make sure your computer screen isn't mirrored so you can still use your main computer screen separately.

You can generate graphics in any program as long as you can have it use a green background. You can create animated graphics in Keynote for example, but for this tutorial we'll use the app H2R Graphics.

In H2R Graphics, you'll first want to make sure you set the background color to a bright green like #00FF00. Then you can open up the main output window and drag it over to your second screen (the YoloBox).

Choose the little person icon in the top right corner of the HDMI input of your computer screen to bring up the keying settings for it.

The defaults should look fine, but you can also make any adjustments here if you need. Click "Done" to save the settings.

Now you can create a new PiP layout with your main video as the background and your computer screen keyed out as the foreground or "Sub Screen".

For the Main Screen, choose the video angle you want to use as the background.

For the Sub Screen, choose your computer screen which should now have a transparent background.

Now the layout with your H2R Graphics output window is created as the PiP angle you can choose.

My YoloBox stand

If you haven't already seen it, be sure to check out my YoloBox stand I created! It tilts the YoloBox forward so it's easier to use on a desk, and you can also attach things to the cold shoe mounts on the back.

We have a version for both the YoloBox Pro and the original YoloBox, and it comes in red and black!

You can see the full video version of this blog post on my YouTube channel!

Thursday, 13. January 2022

Mike Jones: self-issued

Described more of the motivations for the JWK Thumbprint URI specification

As requested by the chairs during today’s OAuth Virtual Office Hours call, Kristina Yasuda and I have updated the JWK Thumbprint URI specification to enhance the description of the motivations for the specification. In particular, it now describes using JWK Thumbprint URIs as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs. It is used this way in the Self-Issued OpenID Provider v2 specification, for instance. No normative changes were made.

As discussed on the call, we are requesting that the chairs use this new draft as the basis for a call for working group adoption.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-01.html

Wednesday, 12. January 2022

ian glazer's tuesdaynight

Memories of Kim Cameron

Reification. I learned that word from Kim. In the immediate next breath he said from the stage that he was told not everyone knew what reify meant and that he would use a more approachable word: “thingify.” And therein I learned another lesson from Kim about how to present to an audience.

My memories of Kim come in three phases: Kim as Legend, Kim as Colleague, and Kim as Human, and with each phase came new things to learn.

My first memories of Kim were of Kim as Legend. I think the very first was from IIW 1 (or maybe 2 – the one in Berkeley) at which he presented InfoCard. He owned the stage; he owned the subject matter. He continued to own the stage and the subject matter for years…sometimes the subject matter was more concrete, like InfoCard, and sometimes it was more abstract, like the metaverse. But regardless, it was enthralling.

At some point something changed… Kim was no longer an unapproachable Legend. He was someone with whom I could talk, disagree, and more directly question. In this phase of Kim as Colleague, I was lucky enough to have the opportunity to ask him private follow-up questions to his presentation. Leaving aside my “OMG he’s talking to me” feelings, I was blown away by his willingness to go into depth of his thought process with someone who didn’t work with him. He was more than willing to be challenged and to discuss the thorny problems in our world.

Somewhere in the midst of the Kim as Colleague phase something changed yet again and it is in this third phase, Kim as Human, where I have my most precious memories of him. Through meeting some of his family, being welcomed into his home, and sharing meals, I got to know Kim as the warm, curious, eager-to-laugh person that he was. There was seemingly always a glint in his eye indicating his willingness to cause a little trouble. 

The last in-person memory I have of him was just before the pandemic lockdowns in 2020. I happened to be lucky enough to be invited to an OpenID Foundation event at which Kim was speaking. He talked about his vision for the future and identity’s role therein. At the end of his presentation, I and others helped him down the steep stairs off of the stage. I held onto one of his hands as we helped him down. His hand was warm.


Identity Woman

Why we need DIDComm

This is the text of an email I got today from a company that i had a contract with last year. It is really really really annoying the whole process of sending secure communications and documents. Once I finished reading it – I was reminded quite strongly why we need DIDComm as a protocol to […]

The post Why we need DIDComm appeared first on Identity Woman.

Tuesday, 11. January 2022

Vittorio Bertocci - CloudIdentity

Remembering Kim Cameron

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry already, by people who write far better than yours truly, hence I won’t attempt that here.

I owe a huge debt of gratitude to Kim: I don’t know where I’d be or what I’d be doing had it not been for his ideas and direct sponsorship. That’s something I have firsthand experience of, so I can honor his memory by writing about that.

Back in 2005, still in Italy, I was one of the few Microsoft employees with hands-on, customer deployment experience in WS-STAR, the suite of protocols behind the SOA revolution. That earned me a job offer in Redmond, to evangelize the .NET stack (WCF, workflow, CardSpace) to Fortune 500 companies. That CardSpace thing was puzzling. There was nothing like it, it was ultra hard to develop for, and few people appeared to understand what it was for. One day I had face time with Kim. He introduced me to his Laws of Identity, and that changed everything. Suddenly the technology I was working on had a higher purpose, something directly connected to the rights and wellbeing of everyone, and a mission: making user centric identity viable and adopted. I gave myself to the mission with abandon, and Kim helped every step of the way:

- He invested time in developing me professionally, sharing his master-negotiator and genuinely compassionate view of people to counter my abrasive personality back then.
- He looped me into important conversations, inside and outside the company, conversations way above my pay grade or actual experience at that point. He introduced me to all sorts of key people and helped me understand what was going on. Perhaps the most salient example is the initiative he led to bring together the different identity products Microsoft had in the late 2000s (culminating in a joint presentation we delivered at PDC2008). The company back then was a very different place, and his steely determination, coupled with incredible consensus-building skills, forever changed my perception of what’s possible and how to influence complex, sometimes adversarial organizations.
- He really taught me to believe in myself and in a mission. It’s thanks to his encouragement that I approached Joan Murray (then acquisition editor at Addison Wesley) on the expo floor of some event, pitching a book that the world absolutely needed about CardSpace and user-centric identity, and, once it was accepted, finding the energy to learn everything (putting together a ToC, recruiting coauthors, writing in English…) as an evenings-and-weekends project. Kim generously wrote the foreword for us and relentlessly promoted the book.
- His sponsorship continued even after the CardSpace project, promoting my other books and activities (like those U-Prove videos now lost in time).

Those are just the ones top of mind. I am sure that if I dug into his blog or mine, I’d find countless more. It’s been a huge privilege to work so closely with Kim, and especially to benefit from his mentorship and friendship. I never, ever took that privilege for granted. Although Kim always seemed to operate under the assumption that everyone had something of value to contribute, and talking with him made you feel heard, he wasn’t shy about calling out trolls or people who in his view would stifle community efforts.

When the user-centric identity effort substantially failed to gain traction in actual products, with the identity industry incorporating some important innovations (hello, claims) but generally rejecting many of the key tenets I held so dear, something broke inside me. I became disillusioned with purely principled views, and moved toward a stricter jobs-to-be-done, use-case-driven stance.

That, Kim’s temporary retirement from Microsoft, and eventually my move to Auth0 made my interactions with Kim less frequent. It was always nice to run into him at conferences; we kept backchanneling whenever industry news called for coordinated responses; and he reached out to me once to discuss SSI, but we never had a chance to do so. As cliché as it might be, I now deeply regret not having reached out more myself.
Last time I heard from him, it was during a reunion of the CardSpace team. It was a joyous occasion, seeing so many people that for a time all worked to realize his vision, and were touched in various degrees by his influence. His health didn’t allow him to attend in person, but he called in – we passed the phone around, exchanging pleasantries without knowing we were saying our goodbyes. I remember his “hello Vittorio” as I picked up the phone from Mike: his cordial, even sweet tone as he put his usual care into pronouncing my name just right, right there to show the kindness this giant extended to us all.


Aaron Parecki

How to convert USB webcams to HDMI

There are a handful of interesting USB webcams out there, which naturally work great with a computer. But what if you want to combine video from a USB webcam with your HDMI cameras in a video switcher like the ATEM Mini?

There are a handful of interesting USB webcams out there, which naturally work great with a computer. But what if you want to combine video from a USB webcam with your HDMI cameras in a video switcher like the ATEM Mini?

Most video switchers don't have a way to plug in USB webcams. That's because webcams are expected to plug in to a computer, and most video switchers aren't really computers. Thankfully, over the past few years UVC has become a standard for webcams, so there's no need to worry about installing manufacturer-specific drivers anymore. For the most part, you can take any USB webcam, plug it into a computer, and it will Just Work™.
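
Before trying any of these methods, it can be worth a quick sanity check that your webcam really is presenting itself as a standard UVC device and delivering frames. Here's a minimal Python sketch using OpenCV; the device index 0 is just a guess, and your camera may show up at a different index:

```python
# Minimal sketch: confirm a UVC webcam is detected and producing frames.
# Assumes opencv-python is installed; device index 0 is a guess and may differ.
import cv2

cap = cv2.VideoCapture(0)  # first detected camera
if not cap.isOpened():
    raise SystemExit("No camera found at index 0")

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    print(f"Camera is delivering {w}x{h} frames")
else:
    print("Camera opened but returned no frame")

cap.release()
```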

I'm going to show you three different ways you can convert a USB UVC webcam to HDMI so you can use it with hardware video switchers like the ATEM Mini.

You can see a video version of this blog post on my YouTube channel!

Method 1: QuickTime Player

The simplest option is to use QuickTime on a Mac computer. For this, you'll need a Mac of course, as well as an HDMI output from the computer.

First, plug in the HDMI from your computer into your video switcher. Your computer will see it as a second monitor. In your display settings, make sure your computer is not mirroring the display. You want the computer to see the ATEM Mini or other video switcher as a secondary external display.

If you're doing this with the ATEM Mini, it's helpful to have a monitor plugged in to the ATEM Mini's HDMI output port, and then you can show your computer screen full screen on the ATEM's output by selecting that input's button in the "output" selector on the right side of the controls. This is important since you'll want to be able to navigate around the second screen a bit in the next steps.

Next, open QuickTime Player. Plug your USB webcam into your computer. In the QuickTime "File" menu, choose "New Movie Recording". A window should appear with your default webcam. Click the little arrow next to the record button and you should see all your connected cameras as options. Choose the USB camera you want to use and you should see it in the main video window.

Now drag that QuickTime window onto your second monitor that is actually the ATEM Mini. Click the green button in the top left corner to make the window full screen. Now what you see on the ATEM Mini should be just the full screen video. Make sure you move your cursor back to your main monitor so that it doesn't show up on the screen.

You're all set! You can switch the ATEM back to the multiview and you should see your webcam feed as one of the video inputs you can switch to.

Method 2: OBS

OBS is a powerful tool for doing all sorts of interesting things with video on your computer. You can use it to run a livestream, switching between multiple cameras and adding graphics on top. What we're going to use it for now is a simple way to get your USB cameras to show up on a second monitor attached to your computer.

Another benefit of OBS is that it is cross platform, so this method will work on Mac, Windows or Linux!

The basic idea is to create a scene in OBS that is just a full screen video of the webcam you want to use. Then you'll tell OBS to output that video on your second monitor, but the second monitor will actually be your computer's HDMI output plugged in to the ATEM Mini.

First, create a new scene and call it whatever you want; I'll call mine "Webcam". Inside that scene, add a new source of type "Video Capture Device". I'll call mine "Webcam Source".

When you create the source, it will ask you which video capture device you want to use, so choose your desired webcam at this step.

At this point you should see the webcam feed in the OBS main window. If it's not full screen, that's probably because the webcam is not full 1920x1080 resolution. You can drag the handles on the video to resize the picture to take up the full 1920x1080 screen. 

Next, right click anywhere in the main video window and choose "Fullscreen Projector (Preview)". Or if you use OBS in "Studio Mode", right click on the right pane and choose "Fullscreen Projector (Program)". Choose your secondary monitor that's plugged in to the ATEM, and OBS should take over that monitor and show just the video feed.
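
As an aside, if you'd rather not run QuickTime or OBS at all and don't mind a tiny script, you can paint a single webcam feed full screen on the second monitor yourself. This isn't one of the three methods above, just a bare-bones Python/OpenCV sketch; the camera index and the 1920-pixel offset (which assumes the second display sits to the right of a 1080p main display) are assumptions you'd adjust for your setup:

```python
# Rough sketch: show one webcam full screen on a second monitor with OpenCV.
# Assumptions: the camera is at index 0, and the second display sits to the
# right of a 1920-pixel-wide main display (adjust the offset for your setup).
import cv2

cap = cv2.VideoCapture(0)

cv2.namedWindow("webcam", cv2.WND_PROP_FULLSCREEN)
cv2.moveWindow("webcam", 1920, 0)  # push the window onto the second display
cv2.setWindowProperty("webcam", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```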

Method 3: Hardware Encoder

If you don't want to tie up a computer with this task, or don't have the space for a computer, another option is to use a dedicated hardware encoder to convert the USB webcam to HDMI.

There aren't a lot of options on the market for this right now, likely because it's not a super common thing to need to do. Currently, any device that can convert a UVC webcam to HDMI is basically a tiny computer. One example is the YoloBox which can accept some USB webcams as a video source alongside HDMI cameras. You could use the YoloBox to convert the USB camera to HDMI using the HDMI output of the YoloBox. 

Another option is the TBS2603au encoder/decoder.

I originally was sent this device by TBS because I was interested in using it as an RTMP server. I wasn't able to figure that out, and have since switched to using the Magewell Pro Convert as an RTMP server which has been working great. But as I was poking around in the menus I realized that the TBS2603au has a USB port which can accept webcams!

So here are the step by step instructions for setting up the TBS2603au to output a USB webcam over its HDMI port.

The TBS2603au is controlled from its web interface. I'm going to assume you already know how to connect this to your network and configure the IP address and get to the device's web page. The default username and password are "admin" and "admin". Once you log in, you'll see a dashboard like this.

First, click on the "Encode" icon in the top bar. At the bottom, turn off the HDMI toggle and turn on the one next to USB.

Next click on the "Extend" tab in the top menu and choose "Video Mix".

Scroll down to the "Output Config" section and change "Mix Enable" to "Off", and choose "USBCam" from the "Video Source" option.

At this point you should see your webcam's picture out the device's HDMI port! And if that's plugged in to the ATEM Mini, your webcam will appear in your multiview!

I've tried this with a few different webcams and they all work great! 

The OBSBot Tiny is an auto-tracking PTZ camera that follows your face. The nice thing is that the camera itself is doing the face tracking, so no drivers are required!

The Elgato FaceCam is a high quality webcam for your PC, and it also works with this device. Although at that point you should probably just get a DSLR/mirrorless camera to use with the ATEM Mini.

This even works with the Insta360 One X2 in webcam mode. You won't get a full 360 picture, since in webcam mode the Insta360 One X2 uses only one of its two cameras. It does do some auto-tracking though.

The Mevo Start cameras are another interesting option, since you can crop in to specific parts of the video using a phone as a remote control.

There are a couple of problems with this method to be aware of. I wasn't able to find a way to output audio from the USB webcam, which means you will need to get your audio into the ATEM from another camera or an external microphone. Another problem was with certain cameras (mainly the OBSBot Tiny): I left the device running overnight and in the morning it had crashed. I suspect it's because the OBSBot requires more power than other cameras due to its PTZ motor.

The TBS encoder isn't cheap, so it's not something you'd buy to use a generic webcam with your ATEM. But for use with specialized USB webcams like document cameras or PTZ cameras it could be a good option to use those cameras with streaming encoders like the ATEM Mini!

Let me know what USB webcams you'd like to use with your ATEM Mini or other hardware streaming encoder!

Friday, 07. January 2022

Identity Praxis, Inc.

Identity management is key to increasing security, reducing fraud and developing a seamless customer experience

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021. MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and […] The post Identity management is key

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021.

MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and Phil Todd, Director of Stereoscope, who co-authored the report.

Andrew Parkin-White wrote a nice blog piece that summarised our discussion. Three learnings came from our dialog:

- Identity management is an iterative process with three core elements – initial identification, authentication (re-identifying the individual) and verification (ensuring the individual is who they claim to be)
- Enterprises employ a vast array of technologies to execute these processes which are growing in scope and complexity
- Understanding why identity management is necessary to enterprises and how this creates opportunities for vendors

You can watch the entire session on YouTube (60 min).

The post Identity management is key to increasing security, reducing fraud and developing a seamless customer experience appeared first on Identity Praxis, Inc..


Here's Tom with the Weather

The First Shots

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the p

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the patent from Epicentre to complement the work of Derrick Rossi.

In an interview, she also credits Paul Krieg and Douglas Melton for their contributions.

As a recipient of 3 doses of the Moderna vaccine, I’m thankful to these researchers and was glad to read this book.

Wednesday, 05. January 2022

Just a Theory

Every Day Is Jan 6 Now

The New York Times gets real about the January 6 coup attempt.

The New York Times Editorial Board in an unusually direct piece last week:

It is regular citizens [who threaten election officials] and other public servants, who ask, “When can we use the guns?” and who vow to murder politicians who dare to vote their conscience. It is Republican lawmakers scrambling to make it harder for people to vote and easier to subvert their will if they do. It is Donald Trump who continues to stoke the flames of conflict with his rampant lies and limitless resentments and whose twisted version of reality still dominates one of the nation’s two major political parties.

In short, the Republic faces an existential threat from a movement that is openly contemptuous of democracy and has shown that it is willing to use violence to achieve its ends. No self-governing society can survive such a threat by denying that it exists. Rather, survival depends on looking back and forward at the same time.

See also this Vox piece. Great to see these outlets sound the alarm about the dangers to American democracy. The threats are very real, and clear-eyed discussions should very much be dominating the public sphere.

More of this, please.

More about… New York Times January 6 Coup Democracy Vox

Moxy Tongue

Human Authority

Own Root, Dependencies:  

Own Root, Dependencies:

Tuesday, 04. January 2022

@_Nat Zone

Privacy Standards in 2022

Once again this year, I wrote for gihyo's New Year special feature… The post Privacy Standards in 2022 first appeared on @_Nat Zone.

Once again this year, I contributed to gihyo's New Year special feature. I hope you enjoy reading it.

Table of contents

- The year data ethics came into focus
- Making money from "the ultimate privacy I came up with"
- Defining "tracking": ISO/IEC 27551
- A user-centric privacy preference management framework: ISO/IEC DIS 27556
- A de-identification framework for enhancing privacy: ISO/IEC DIS 27559*
- Grant Management for OAuth 2.0
- Toward the first year of data ethics

The article can be found here → https://gihyo.jp/lifestyle/column/newyear/2022/privacy-standards?page=1

The post Privacy Standards in 2022 first appeared on @_Nat Zone.

Sunday, 02. January 2022

Jon Udell

The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime. Although the osteoarthritis … Continue reading The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime.

Although the osteoarthritis was more advanced on the right side, we opted for bilateral replacement because the left side wasn’t far behind. Things hadn’t felt symmetrical in the years leading up to the surgery, and that didn’t change. There’s always a sense that something’s different about the right side.

We’re pretty sure it’s not the hardware. X-rays show that the implants remain firmly seated, and there’s no measurable asymmetry. Something about the software has changed, but there’s been no way to pin down what’s different about the muscles, tendons, and ligaments on that side, whether there’s a correction to be made, and if so, how.

Last month, poking around on my iPhone, I noticed that I’d never opened the Health app. That’s because I’ve always been ambivalent about the quantified self movement. In college, when I left competitive gymnastics and took up running, I avoided tracking time and distance. Even then, before the advent of fancy tech, I knew I was capable of obsessive data-gathering and analysis, and didn’t want to go there. It was enough to just run, enjoy the scenery, and feel the afterglow.

When I launched the Health app, I was surprised to see that it had been counting my steps since I became an iPhone user 18 months ago. Really? I don’t recall opting into that feature.

Still, it was (of course!) fascinating to see the data and trends. And one metric in particular grabbed my attention: Walking Asymmetry.

Walking asymmetry is the percent of time that your steps with one foot are faster or slower than the other foot.

An even or symmetrical walk is often an important physical therapy goal when recovering from injury.

Here’s my chart for the past year.

I first saw this in mid-December when the trend was at its peak. What caused it? Well, it’s been rainy here (thankfully!), so I’ve been riding less, maybe that was a factor?

Since then I haven’t biked more, though, and I’ve walked the usual mile or two most days, with longer hikes on weekends. Yet the data suggest that I’ve reversed the trend.

What’s going on here?

Maybe this form of biofeedback worked. Once aware of the asymmetry I subconsciously corrected it. But that doesn’t explain the November/December trend.

Maybe the metric is bogus. A phone in your pocket doesn’t seem like a great way to measure walking asymmetry. I’ve also noticed that my step count and distances vary, on days when I’m riding, in ways that are hard to explain.

I’d like to try some real gait analysis using wearable tech. I suspect that data recorded from a couple of bike rides, mountain hikes, and neighborhood walks could help me understand the forces at play, and that realtime feedback could help me balance those forces.

I wouldn’t want to wear it all the time, though. It’d be a diagnostic and therapeutic tool, not a lifestyle.


Mike Jones: self-issued

Computing Archaeology Expedition: The First Smiley :-)

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere. […]

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere.

So in 2002, I led a computing archaeology expedition to restore his post. As described in my original post describing this accomplishment, after a significant effort to locate it, on September 10, 2002 the original post made by Scott Fahlman on CMU CS general bboard was retrieved by Jeff Baird from an October 1982 backup tape of the spice vax (cmu-750x). Here is Scott’s original post:

19-Sep-82 11:44    Scott E Fahlman             :-)
From: Scott E Fahlman <Fahlman at Cmu-20c>

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use

:-(

I’m reposting this here now both to commemorate the accomplishment nearly twenty years later, and because my page at Microsoft Research where it was originally posted is no longer available.

Wednesday, 29. December 2021

Just a Theory

Review: Project Hail Mary

A brief review of the new book by Andy Weir.

Project Hail Mary by Andy Weir
2021 Ballantine Books

Project Hail Mary follows the success of Andy Weir’s first novel, The Martian, and delivers the same kind of enjoyment. If a harrowing story of a solitary man in extreme environments using science and his wits to overcome one obstacle after another sounds appealing, then this is the book for you. No super powers, no villains, no other people, really — just a competent scientist overcoming the odds through experimentation, constant iteration, and sheer creativity. Personally I can’t get enough of it. Shoot it right into my veins.

Andy Weir seems to know his strengths and weaknesses, given these two books. If you want to read stories of a diverse array of people interacting and growing through compelling character arcs, well, look elsewhere. Project Hail Mary doesn’t feature characters, really, but archetypes. No one really grows in this story: Ryland Grace, our protagonist and narrator, displays a consistent personality from start to finish. The book attempts to show him overcoming a character flaw, but it comes so late and at such variance to how he behaves and speaks to us that it frankly makes no sense.

But never mind, I can read other books for character growth and interaction. I’m here for the compelling plot and the super interesting ideas and challenges (a whole new species that lives on the sun and migrates to Venus to breed? Lay it on me). It tickles my engineering and scientific inclinations, and we could use more of that sort of plotting in media.

So hoover it up. Project Hail Mary is a super fun adventure with compelling ideas, creative, competent people overcoming extreme circumstances without magic or hand-waving, and an unexpected friendship between two like-minded nerds in space.

I bet it’ll make a good movie, too.

More about… Books Andy Weir

Werdmüller on Medium

Hopes for 2022

Instead of a review of the year, let’s look ahead. Continue reading on Medium »

Instead of a review of the year, let’s look ahead.

Continue reading on Medium »

Thursday, 23. December 2021

Kyle Den Hartog

Financing Open Source Software Development with DAO Governance Tokens

Is it possible to fix the tragedy of the commons problem with a DAO Governance Token?

One of the biggest problems in open source software development today is that the majority of open source software is written by developers as side projects on their nights and weekends. Of the developers who do produce software on their nights and weekends, only a small sliver receive any funding for their work. Of the small portion of developers who do get sponsored, an even smaller percentage are actually able to make enough money to fully cover their expenses in life. So clearly we haven’t developed a sustainable solution to financing open source software development. So what are the main ways that open source software development gets funded? The two primary methods I see open source software being developed with are organizational sponsorship and altruistic funding. Let’s break these down a bit more to gain a better understanding of them.

The most common and well understood method by which open source projects are funded today is for-profit corporations sponsoring development by allowing their full-time staff to work on these large projects. Some great examples of this are projects like Kubernetes, the Linux kernel, the React framework, Hashicorp Vault, and the Rust programming language. In all of these examples, the projects are either directly managed by a team of developers at a large organization (think the React framework being maintained by Facebook), managed by a startup that open-sources its core product with additional sticky features (think Hashicorp Vault), managed by a foundation with a combination of many different developers from many different organizations (think Kubernetes and the Linux kernel these days, and now the Rust language), or, finally, hybrid projects which have transitioned from one category to another over time (think the Rust language being started at Mozilla and then transferred to a foundation). With all of these models one thing is clear: developers have a day job that pays them, and they’re essentially employed to produce open source software. The reasons why many companies fund developers to produce open source software are so scattered that I’m sure I couldn’t name them all. However, one thing in my experience is clear: most companies have some form of strategic decision at play that leads them down the path of making their source code open. Whether that strategy is as simple as wanting to let others solve a problem they’ve had to solve, wanting to leverage open source as a sales channel, or simply looking for free software development contributions from developers who like the project, whatever the reason the company has to justify its contributions, it’s pretty clear that this is a major avenue for contribution to the OSS community.

The second most common method of development, which has been around for a while but has only recently become a more legitimate model of funding, is altruistic funding. What I mean by this is that people, organizations, or other such entities will “sponsor” a developer who’s released an open source project that they believe should continue to be worked on. This was most commonly done via PayPal or Buy Me a Coffee in the past, with Patreon and GitHub Sponsors getting involved more recently as well. This model of funding is becoming a more common way to fund a small project which is used substantially by much larger projects or companies who want some certainty that the project will continue to be maintained in the future. It has shown some promise for becoming a sustainable source of funding for developers who are looking for a way to monetize their projects without the massive overhead that comes with starting a company. However, while this method does leave the original maintainers in control of their project to continue to bring their vision to reality, it oftentimes does not provide a sustainable and large enough income for most maintainers to pursue this avenue full time.

So what’s up with this DAO governance token idea then?

To put it simply, the concept of leveraging a DAO token to manage an open source project is still just that - an idea. So why do I consider it worth exploring? Today, in the DeFi space we see many different projects that are being built completely open source, often with very complex tokenomics schemes just to sustainably fund the development of the protocol. With each new project the developers need to find a new way to integrate a token into the system in order to fund the time and effort they’d like to put into growing the project. However, what if we could reshape the purpose of tokens so they do what they are actually about, funding the development of the project, rather than trying to create a new gift card scheme for each new project?

The way I imagine this would work is via a DAO governance token which effectively represents a share of the project. Each token that’s already been minted would allow for voting on proposals to accept or reject new changes to the project, in the same way that DAOs allow for decentralized governance of treasuries today. However, these proposals all come in the form of a pull request to modify the code, allowing developers to directly receive value for the proposed changes they’re making. Where things get interesting is that along with the new pull request comes a proposal set forth by the contributor, assigning the value they believe the work is worth, represented in new tokens which would be minted if the pull request is approved. This effectively dilutes the value of the current tokens in exchange for work done to improve the project, leading to an interesting trade of value: current majority stakeholders give up a small portion of their holdings in exchange for receiving new contributions, if and only if they believe the dilution is acceptable.
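
To make the dilution trade-off concrete, here’s a minimal sketch of the accounting involved. The class, names, and numbers are purely hypothetical illustrations of the idea, not a token specification:

```python
# Minimal sketch of the dilution trade-off described above.
# All names and numbers are hypothetical; this is not a token specification.

class ProjectDAO:
    def __init__(self):
        self.balances = {}  # holder -> governance tokens

    def mint(self, holder, amount):
        self.balances[holder] = self.balances.get(holder, 0) + amount

    def total_supply(self):
        return sum(self.balances.values())

    def share(self, holder):
        return self.balances.get(holder, 0) / self.total_supply()

    def accept_pull_request(self, contributor, requested_tokens):
        """Model an approved PR: existing holders accept dilution in exchange
        for the contributed work, and the contributor is paid in new tokens."""
        self.mint(contributor, requested_tokens)


dao = ProjectDAO()
dao.mint("founder", 1000)

print(f"Founder share before: {dao.share('founder'):.1%}")      # 100.0%
dao.accept_pull_request("contributor", 100)                     # PR valued at 100 new tokens
print(f"Founder share after:  {dao.share('founder'):.1%}")      # ~90.9%
print(f"Contributor share:    {dao.share('contributor'):.1%}")  # ~9.1%
```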

So how does this make developers money?

As a project grows and is utilized by more and more developers, it will create an economic incentive for people and companies who wish to steer the project to buy up the currently available tokens or contribute to the project in order to collect these tokens. These tokens would be tradeable in terms of real-world value, either for money to buy food or for additional utility in upstream or downstream projects. The tokens are only as valuable as the number of people who are utilizing the project and believe they need the ability to affect the direction of the project or make sure it remains maintained. This means that for projects like Kubernetes, where numerous companies have built their core infrastructure on top of the project, those companies want to make sure their features are getting added and supported, just like they do today in the Cloud Native Computing Foundation, which sees many people from many different organizations and backgrounds contributing to the project now.

Where this becomes interesting is in the economic decision making that happens as a market forms around maintainership of projects. Along with many of the good things that will be introduced, like being able to have more full-time freelance software developers available, I’m sure there will be interesting economic issues introduced as well. It’s my belief, though, that this controversy will be worked out in different ways through the different principles that projects choose. One of the most obvious problems in large projects today is contentious forks, such as when SushiSwap forked Uniswap and started taking SushiSwap in a different direction. However, the legitimacy of a fork will help to form interesting economic behaviors around whether the value of the fork goes up, as SushiSwap has shown by adding new and interesting contributions to their fork, or whether it goes down, like the many random clone projects that often lead to scams.

I believe that if the mechanics of the maintainership role are established correctly, then it may even be possible to create some interesting dynamics to reduce forking by leveraging market dynamics. As an example, if a DAO fork was required to mint the same number of tokens in the new project as in the original project and assign them to the same maintainers, then the original maintainers could leverage their newly minted tokens in the new project to outright reject all proposals in the fork and slow down its momentum. I tend to think this may be bad for innovation, but it’s an interesting example of how leveraging markets to make maintainership decisions could be used to build sustainability into open source development, given that maintainership status of projects has legitimate value that, if broken up and governed properly, could be leveraged to reshape how software development is funded.

Tuesday, 21. December 2021

Tim Bouma's Blog

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub Summary of Changes to Version 1.4: Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of d

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub

Summary of Changes to Version 1.4:

- Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as a result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of definitions and descriptions and continued improvement of editorial and style consistency.
- Numerous improvements have been made due to feedback incorporated from the application of the PSP PCTF to trusted digital identity assessment and acceptance processes.
- Other changes have resulted from reviewing and providing input into the National Standard of Canada, CAN/CIOSC 103–1, Digital trust and identity — Part 1: Fundamentals.
- The PSP PCTF Assessment Workbook has been updated to reflect the latest changes.

Mike Jones: self-issued

Identity, Unlocked Podcast: OpenID Connect with Mike Jones

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history. As always, people were a big part of the story. Two of my […]

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history.

As always, people were a big part of the story. Two of my favorite parts are talking about how Kim Cameron brought me into the digital identity world to build the Internet’s missing identity layer (2:00-2:37) and describing how we applied the “Nov Matake Test” when thinking about keeping OpenID Connect simple (35:16-35:50).

Kim, I dedicate this podcast episode to you!

Sunday, 19. December 2021

Mike Jones: self-issued

Stories of Kim Cameron

Since Kim’s passing, I’ve been reflecting on his impact on my life and remembering some of the things that made him special. Here’s a few stories I’d like to tell in his honor. Kim was more important to my career and life than most people know. Conversations with him in early 2005 led me to […]

Since Kim’s passing, I’ve been reflecting on his impact on my life and remembering some of the things that made him special. Here’s a few stories I’d like to tell in his honor.

Kim was more important to my career and life than most people know. Conversations with him in early 2005 led me to leave Microsoft Research and join his quest to “Build the Internet’s missing identity layer” – a passion that still motivates me to this day.

Within days of me joining the identity quest, Kim asked me to go with him to the first gathering of the Identity Gang at PC Forum in Scottsdale, Arizona. Many of the people that I met there remain important in my professional and personal life! The first Internet Identity Workshop soon followed.

Kim taught me a lot about building positive working relationships with others. Early on, he told me to always try to find something nice to say to others. Showing his devious sense of humor, he said “Even if you are sure that their efforts are doomed to fail because of fatal assumptions on their part, you can at least say to them ‘You’re working on solving a really important problem!’ :-)” He modelled by example that consensus is much easier to achieve when you make allies rather than enemies. And besides, it’s a lot more fun for everyone that way!

Kim was always generous with his time and hospitality and lots of fun to be around. I remember he and Adele inviting visitors from Deutsche Telekom to their home overlooking the water in Bellevue. He organized a night at the opera for identity friends in Munich. He took my wife Becky and me and Tony Nadalin out to dinner at his favorite restaurant in Paris, La Coupole. He and Adele were the instigators behind many a fun evening. He had a love of life beyond compare!

At one point in my career, I was hoping to switch to a manager more supportive of my passion for standards work, and asked Kim if I could work for him. I’ll always remember his response: “Having you work for me would be great, because I wouldn’t have to manage you. But the problem is that then they’d make me have others work for me too. Managing people would be the death of me!”

This blog exists because Kim encouraged me to blog.

I once asked Kim why there were so many Canadians working in digital identity. He replied: “Every day as a Canadian, you think ‘What is it that makes me uniquely Canadian, as opposed to being American? Whereas Americans never give it a thought. Canadians are always thinking about identity.'”

Kim was a visionary and a person of uncommon common sense. His Information Card paradigm was ahead of its time. For instance, the “selecting cards within a wallet” metaphor that Windows CardSpace introduced is now widespread – appearing in platform and Web account selectors, as well as emerging “self-sovereign identity” wallets, containing digital identities that you control. The demos people are giving now sure look a lot like InfoCard demos from back in the day!

Kim was a big believer in privacy and giving people control over their own data (see the Laws of Identity). He championed the effort for Microsoft to acquire and use the U-Prove selective disclosure technology, and to make it freely available for others to use.

Kim was hands-on. To get practical experience with OpenID Connect, he wrote a complete OpenID Provider in 2018 and even got it certified! You can see the certification entry at https://openid.net/certification/ for the “IEF Experimental Claimer V0.9” that he wrote.

Kim was highly valued by Microsoft’s leaders (and many others!). He briefly retired from Microsoft most of a decade ago, only to have the then-Executive Vice President of the Server and Tools division, Satya Nadella, immediately seek him out and ask him what it would take to convince him to return. Kim made his asks, the company agreed to them, and he was back within about a week. One of his asks resulted in the AAD business-to-customer (B2C) identity service in production use today. He also used to have regular one-on-ones with Bill Gates.

Kim wasn’t my mentor in any official capacity, but he was indeed my mentor in fact. I believe he saw potential in me and chose to take me under his wing and help me develop in oh so many ways. I’ll always be grateful for that, and most of all, for his friendship.

In September 2021 at the European Identity and Cloud (EIC) conference in Munich, Jackson Shaw and I remarked to each other that neither of us had heard from Kim in a while. I reached out to him, and he responded that his health was failing, without elaborating. Kim and I talked for a while on the phone after that. He encouraged me that the work we are doing now is really important, and to press forward quickly.

On October 25, 2021, Vittorio Bertocci organized an informal CardSpace team reunion in Redmond. Kim wished he could come but his health wasn’t up to travelling. Determined to include him in a meaningful way, I called him on my phone during the reunion and Kim spent about a half hour talking to most of the ~20 attendees in turn. They shared stories and laughed! As Vittorio said to me when we learned of his passing, we didn’t know then that we were saying goodbye.

P.S. Here’s a few of my favorite photos from the first event that Kim included me in:

All images are courtesy of Doc Searls. Each photo links to the original.

Thursday, 16. December 2021

Markus Sabadello on Medium

Report from EBSI4Austria

In 2018, all European member states, together with Norway and Liechtenstein, signed a declaration stating the joint ambition to take advantage of blockchain technology. These 29 countries founded the European Blockchain Partnership (EBP), and within this partnership, they decided to build the so-called European Blockchain Services Infrastructure (EBSI). EBSI was created aiming to, on the one hand,

In 2018, all European member states, together with Norway and Liechtenstein, signed a declaration stating the joint ambition to take advantage of blockchain technology. These 29 countries founded the European Blockchain Partnership (EBP), and within this partnership, they decided to build the so-called European Blockchain Services Infrastructure (EBSI).

EBSI was created aiming, on the one hand, to provide blockchain capabilities used by the partners of the EBP to implement and realize blockchain projects and use cases within these countries, and, on the other hand, to achieve certain use cases on a European level. To support the latter idea, so-called use case groups were defined, representing the working groups related to a specific use case. These use case groups consist of representatives of the EBP member countries, domain experts, as well as the European Commission.

Initially, four use case groups were founded, namely the European Self-Sovereign Identity Framework (ESSIF), the diploma use case, document traceability, and secure document transfer. ESSIF focuses on digital identities where the user is in control over her identity data. The diploma use case focuses on educational diplomas of students and related processes such as issuing, verifying, and revocation, and all of these processes in cross-border scenarios. Document traceability considers the anchoring of document-related identifiers, such as hashes, on the blockchain, while secure document transfer focuses on the sharing of tax-related information.

EBSI defined so-called use case groups whose use cases should be realized using the provided capabilities, to showcase their functionality and bring in expertise in the specific fields. Each use case group consists of representatives of the member states, domain experts, and the European Commission.

About EBSI4Austria

EBSI4Austria is a CEF-funded project with two main objectives. First, EBSI4Austria aims to set up, operate and maintain Austria's EBSI node. Second, we pilot the diploma use case at the Austrian level, supported by two universities acting as data providers as well as verifiers.

EBSI created a so-called early adopter program to speed up the use case integration of the participating countries. EBSI4Austria joined this ambitious program already in the first wave, reflecting our project’s motivation.

Partners

EBSI4Austria consists of three partners, namely two universities, Graz University of Technology (TU Graz) and the Vienna University of Economics and Business (WU Vienna), together with Danube Tech, a Vienna-based company that provides leading expertise in Self-Sovereign Identity (SSI) as well as distributed systems and is involved in related standardization bodies. The universities are responsible for issuing students’ diplomas and also verifying them. Austria’s EBSI node is set up and operated at the department eGovernment Innovation Center (EGIZ), which is part of Graz University of Technology.

User Story

Figure 1 illustrates the user story that is covered in our project. A student studying at the Graz University of Technology is finishing her bachelor’s program. TU Graz issues her diploma credential stating her bachelor’s degree, which the student stores in her wallet. Next, she wants to apply for a master’s program at the Vienna University of Economics and Business; thus, she presents her bachelor’s diploma credential. After successfully finishing her master’s program at WU Vienna, the university issues her master’s diploma credential to the student. The student is very ambitious; therefore, she applies for a Ph.D. position at the Berlin Institute of Technology by presenting her diplomas. All involved parties utilize the EBSI blockchain network to verify if the issuing universities are trusted issuers.

Figure 1: User story of the diploma use case

Technology

In order to implement our EBSI4Austria project, we used similar technologies as many other Self-Sovereign Identity (SSI) initiatives, i.e., based on building blocks such as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs).

We created two DIDs on the EBSI blockchain for the two universities, as follows:

- Test DID for TU Graz: did:ebsi:zuoS6VfnmNLduF2dynhsjBU
- Test DID for WU Vienna: did:ebsi:z23EQVGi5so9sBwytv6nMXMo

In addition, we registered them in EBSI’s Trusted Issuer Registry (TIR).

We also designed Verifiable Credentials to model digital versions of university diplomas. We implemented them using different credential and proof formats to accommodate changing requirements and guidelines in the EBSI specifications throughout the year. See here for some examples in different formats:

Example Diploma by TU Graz:

- JSON-LD+LD-Proofs
- JSON-LD+JWT (also see JWT payload only)
- JSON+JWT (also see JWT payload only)

Example Diploma by WU Wien:

- Paper version (in German)
- Paper version (in English)
- JSON-LD+LD-Proofs
- JSON-LD+JWT (also see JWT payload only)
- JSON+JWT (also see JWT payload only)

We also designed our own (experimental) JSON-LD context in order to be able to work with Linked Data Proofs (see essif-schemas-vc-2020-v1.jsonld). In our opinion, it would be preferable if JSON-LD contexts were provided by EBSI to all member states instead of having to do this separately for each EBSI pilot project.

We use the following technologies in our project:

- Universal Resolver → For resolving DIDs.
- Universal Registrar → For creating DIDs.
- Universal Issuer → For issuing VCs.
- Universal Verifier → For verifying VCs.
- SSI Java Libraries:
  - ld-signatures-java — For Linked Data Signatures.
  - verifiable-credentials-java — For Verifiable Credentials.
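
As a small illustration of the resolution step, the following sketch queries a Universal Resolver instance over HTTP for one of the test DIDs above. The endpoint URL is an assumption (any instance exposing the usual /1.0/identifiers/ path would do), and the test DIDs may or may not still resolve on a public instance:

```python
# Minimal sketch: resolve one of the test DIDs via a Universal Resolver instance.
# The resolver URL is an assumption; substitute whichever instance you run or trust.
import json
import requests

RESOLVER = "https://dev.uniresolver.io/1.0/identifiers/"  # assumed public instance
did = "did:ebsi:zuoS6VfnmNLduF2dynhsjBU"  # test DID for TU Graz

response = requests.get(RESOLVER + did, timeout=30)
response.raise_for_status()

result = response.json()
# A resolution result usually wraps the DID document plus resolution metadata.
print(json.dumps(result.get("didDocument", result), indent=2))
```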

We set up the following demonstration websites:

- https://tugraz.ebsi4austria.danubetech.com/ — Issuer demo website
- https://wuwien.ebsi4austria.danubetech.com/ — Verifier demo website

See this Github repository for additional technical details about EBSI4Austria.

Multi-University Pilot

Within EBSI’s early adopter program, EBSI4Austria also joined the multi-university pilot (MU pilot) in which the focus is on issuing and verifying student diplomas between universities but in this case, even in a cross-border scenario. This multi-university pilot should underpin the possibilities even across countries.

While working on the MU pilot, we participated in several EBSI Early Adopter program meetings to identify issuers, verifiers, and types of credentials. We were in contact with members of Spanish EBSI pilot projects (especially from the SSI company Gataca), to compare our approaches to EBSI DIDs and Verifiable Credentials. We had several technical discussions and email exchanges regarding details of those credentials, e.g. about the JSON-LD contexts and exact proof formats we were planning to use. During these exchanges, we were able to exchange initial examples of verifiable credentials and verify them.

Within one of the “clusters” of the EBSI MU pilot, we also collaborated closely with the “UniCert” aka “EBSI4Germany” project led by the Technical University of Berlin, a member of the EBSI early adopter program and the German IDunion consortium. This collaboration proved to be particularly interesting for the following reasons:

1. Since TU Berlin participates both in EBSI and IDunion, they have unique insights into the similarities and differences between these different SSI networks.

2. TU Berlin was also able to share some experiences regarding the use of existing standards such as Europass and ELMO/EMREX, which can help with semantic interoperability of Verifiable Credentials use in EBSI.

Figure 2: Multi-university pilot scenario.

Note: This blog post was co-authored by Andreas Abraham (eGovernment Innovation Center) and Markus Sabadello (Danube Tech). The EBSI4Austria project was funded under agreement No INEA/CEF/ICT/A2020/2271545.

Tuesday, 14. December 2021

@_Nat Zone

Interview published in the Nikkei: "Big Tech and International Standards Development"

In the morning edition of the Nikkei (Nihon Keizai Shimbun) on December 14, 2021 (page 16)… The post Interview published in the Nikkei: "Big Tech and International Standards Development" first appeared on @_Nat Zone.

A five-column interview article by reporter 大豆生田 was published on page 16 of the morning edition of the Nikkei (Nihon Keizai Shimbun) on December 14, 2021.

Big Tech and International Standards Development

Natsuhiko Sakimura, Chairman of the US-based OpenID Foundation. "The Age of Technologists," December 14, 2021, 2:00 [paid members only]

https://www.nikkei.com/article/DGKKZO78400100T11C21A2TEB000/

I wasn't expecting to have my photo taken, so the published photo makes me look like a haggard researcher with unkempt hair. The article also covers how I came to jump into the world of standardization.

I didn't know why Sakimura-san jumped into standardization @_nat / "Big Tech and International Standards Development" https://t.co/au3TDCZwPT

— Masanori Kusunoki / 楠 正憲 (@masanork) December 13, 2021

In connection with this article, I would like to organize a Q&A session using Twitter Spaces.

When would be a good time to hold the Q&A Space about the article?

— Natsuhiko Sakimura, "Digital Identity" on sale July 16 (@_nat) December 15, 2021

According to the poll, a weekday evening looks most likely. I will announce it on Twitter (and may post it here as well), so please follow @_nat and stay tuned.

The post Interview published in the Nikkei: "Big Tech and International Standards Development" first appeared on @_Nat Zone.

Monday, 13. December 2021

Mike Jones: self-issued

OpenID Presentations at December 2021 OpenID Virtual Workshop

I gave the following presentations at the Thursday, December 9, 2021 OpenID Virtual Workshop: OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

I gave the following presentations at the Thursday, December 9, 2021 OpenID Virtual Workshop:

- OpenID Connect Working Group (PowerPoint) (PDF)
- OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

Friday, 10. December 2021

MyDigitalFootprint

Why is being data Savvy not the right goal?

It is suggested that all which glitters is gold when it comes to data: the more data, the better. I have challenged this thinking that more data is better on numerous occasions, and essentially they all come to the same point. Data volume does not lead to better decisions.   A “simplistic” graph is doing the rounds (again) and is copied below. The two-axis links the quality of a decisio
It is suggested that all that glitters is gold when it comes to data: the more data, the better. I have challenged this thinking that more data is better on numerous occasions, and essentially all those challenges come to the same point: data volume does not lead to better decisions.

A “simplistic” graph is doing the rounds (again) and is copied below. Its two axes link the quality of a decision and a person's capability with data. It implies that boards, executives and senior leadership need to be “data-savvy” if they are to make better decisions. Data Savvy is a position between being “data-naive or data-devoid” and “drunk on data.” The former has no data or skills; the latter has too much data or cannot use the tools. Data Savvy means you are skilled with the correct data and the right tools.

This thinking is driven by those trying to sell data training by simplifying a concept to such a point that it becomes meaningless but is easy to sell or buy and looks great as a visual. When you don’t have enough time to reflect on the graph and the message, it looks logical, inspired and correct - it is none of these things. The basis of the idea is that a board or senior leadership team who are data-savvy will make better decisions, based on the framing that if you are naive or drunk on data, you will make poor decisions.

The first issue I have is that if the data does not have attestation, your capability (data-savviness) will make no difference to the quality of the decision.  One could argue that you will test the data if data-savvy, but this is also untrue as most boards cannot test the data, relying on the organisations' processes and procedures to ensure “quality” data. This is a wild assumption. 


It is worth searching for what “data-savvy” means and reading a few articles. You will find that many position becoming data-savvy as a step in the journey to being data-driven. This leads to a second point: data-driven means you will always be late. Waiting for enough data to reduce the risk to match your risk framework means that you will be late in the decision-making process. Data-driven does not make you fast, agile, ahead, innovative or adaptive. Data-driven makes you late, slow, behind and a follower.

Is the reality of wanting to be data-savvy or a desire to be data-driven that you look to use data to reduce risk and therefore become more risk-averse, which means you miss the signals that would make you genuinely innovative?

The question we, as CDOs (data or digital), should reflect on is: “how do we reconcile that we want to be first, innovative, creative or early, but our processes, methods, and tools depend on data, which means we will always be late?” The more innovative we want to be, the less data we will have and the more risk we need to take, which does not align with the leadership, culture or rewards/incentives that we have or operate under.


Identity Praxis, Inc.

The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts

Article published by the Mobile Ecosystem Forum, 12/10/2021. “We know now that technology and business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we […] The post The Identity Imperative: Risk Manageme

Article published by the Mobile Ecosystem Forum, 12/10/2021.

“We know now that technology and business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable, as more of our services, more of our citizen identity, move online.” – Surash Patel, VP EMEA, TeleSign Corporation 2021 (click here to listen).1

I recently sat down with Surash Patel, VP EMEA for TeleSign and Board Member of the Mobile Ecosystem Forum (MEF), to discuss the personal data & identity (PD&I) market, for a PD&I market assessment report I’m working on for the MEF (the report will be out in January 2022). Surash’s above quote stuck out to me because I think he is right. It also reminds me of another quote, one from WPP:

“By 2030 society will no longer tolerate a business model that relies on mass transactions of increasingly sensitive personal data: a quite different system will be in place.” – WPP2

I took away three key insights, although there are more, from my interview with Surash:

- Enterprises must immediately start learning how to master [mobile] identity verification; mobile identity verification can help reduce losses to fraud and self-inflicted losses of revenue.
- Enterprises that effectively use mobile identity verification can create value and generate trust and engagement at every stage of the customer journey.
- There is much we—people, private organizations, and public institutions—need to know and do to equip for the now and prepare for the future.

The following summarizes my conversation with Surash. To watch the complete interview with Surash Patel of TeleSign (39:11 min), click here.

Risk Mitigation, Value Creation, and the Customer Journey

When introducing himself and his background, Surash opened with a wonderfully self-reflective quote:

“I completely missed a trick on my career and where it was going, in that I thought about the value exchange between the consumer and the brand. From a marketing perspective, I never really considered it from the digital identity perspective before–seeing the numbers on digital fraud right now I think that case is becoming more and more clear to me.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here).

By reading between the lines of his statement, I gather that, as a marketer, he previously saw identity as a tool for audience targeting and promotion. But, once he went into the infrastructure side of the business, he realized identity plays an even bigger role throughout the industry. This is because identity has a critical role at every touchpoint along the customer journey–not just for marketing, but for fraud prevention, revenue protection, and trust.

Risk Mitigation and Managing Losses

Drawing from industry reports, Surash notes that businesses are losing upwards of $56 billion a year to fraud.4 Because of this, “knowing your customer,” i.e., knowing that there is a legitimate human on the other side of a digital transaction, is not just a nice-to-have but a business imperative. Surash points out that it’s not just fraud that brands must contend with when it comes to losses. They must also contend with self-inflicted wounds.

Surash referenced a report from Checkout.com which found that, in 2019, brands in the UK, US, France, and Germany lost $20.3 billion due to false declines at checkout, i.e., identity verification system failures. $12.7 billion of these losses went to competitors, while $7.6 billion simply evaporated.5 My takeaway is that brands need to treat identity verification as a strategic imperative, not just an IT function.

But, reducing fraud and managing revenue breakage is not all Surash brought up. He also noted that, based on findings in the Checkout.com report, consumers would pay an average of $4 to be sure their transactions are secure. So, not only can brands reduce fraud, but they can also retain sales by more effectively identifying their customers (listen to his comments here).

The Potential for Harm Is Real and Must Be Managed

Let’s briefly return to Surash’s quote above:

“We know now that technology and, you know, business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable as more of our services, more of a citizen identity, move online.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here to listen).6

I agree with him. People, not just businesses, are at risk of being even more vulnerable than they are now, but that does not mean the risks we face today are trivial. Harm from the misuse of personal data is all around us, and we primarily measure it in financial terms. For example, in 2020, U.S. consumers reported losses of $86M to fraud originating from text messaging scams.7

On Harm

There is more to privacy harm than financial loss, as Ignacio N. Cofone notes,8 and it is not a discussion to be swept under the rug. In fact, it is one of the fundamental drivers behind emerging people-centric regulations, industry best practices, and the reshaping of law.

This topic is too big to cover in this article, but I can point you to a good resource on privacy harm. One of my go-to resources when considering this issue is Daniel Solove, who, along with Danielle Keats Citron, recently updated their Typology of Privacy Harms. It is a must-read if you are a student of the topic of serving the connected individual.

The Customer Journey and Balance of Power Shift

To address these privacy harms, Surash specifically calls for the government to get involved. However, he thinks brands and individuals alike can do more as well. Surash makes it clear that individuals need to be more aware of, and accountable for, their own actions and interactions. He also thinks, however, that brands need to learn to engage people in an even value exchange (hear Surash’s comment). Furthermore, he recognizes that people are taking more control of their data, and as this continues, we may eventually see the evolution of consumer “curation services” (hear his remark), what some may call “infomediaries.” Again, I’m drawn to the WPP quote above. Brands need to prepare for fundamental shifts in people’s attitudes and expectations. The implications of these shifts will be profound, as they will force a change in competition, business models, product offerings, and business practices.

There is Much We All Need to Know and Do

So, after taking all this in, what’s next?

What I learned from my time with Surash is that an effective identity management implementation can collectively save brands billions, while building trust and improving their ability to serve their customers throughout the customer journey.

Surash emphasized that people must know that they are at risk and be aware of what is going on in the industry. With that knowledge, they can take steps to advocate for and protect themselves. He notes that individuals “absolutely need to know the value of their data” and “how they can challenge” brands’ use of their data. Surash suggested that individuals need to start shifting the balance of power by approaching brands and asking, “Do you really need that data to serve me? If not, don’t ask me for it.” He does recognize, however, that it will be hard for individuals to go up against large brands. As noted below, we believe both companies and governments can do more.

For brands, Surash wants them to:

1. Take cyberattacks seriously and prepare, as the attacks are expected to get worse.
2. Get the fraud and marketing teams working together, not at loggerheads.
3. Not just onboard individuals after the first transaction, but continually evaluate and authenticate customers as they move along the journey (see the sketch after this list). He suggests that brands must learn to evaluate the local context of each engagement, regularly verify and authenticate their customers, and show the people they serve some respect by making an effort to validate whether individuals’ circumstances (preferences, address, phone number, etc.) have changed over time. Surash implies that these actions will not only reduce the risk of fraud and cybercrime but also improve the relationship brands have with those they serve.
4. Ensure there is always an even value exchange, and if the brand wants more in return during a transaction, e.g., more data to support a future upsell, then it should consider paying the individual for it.
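To make the third point concrete, here is a minimal sketch, in Python, of what a risk-based re-verification decision at a journey touchpoint might look like. The signals, weights, thresholds, and the send_sms_otp() placeholder are illustrative assumptions on my part; they are not TeleSign’s product or API, nor Surash’s prescription.

```python
# A minimal, hypothetical sketch of risk-based re-verification along the
# customer journey. The signals, weights, and thresholds are illustrative
# assumptions, not any specific vendor's API or product.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"          # low risk: let the interaction proceed
    CHALLENGE = "challenge"  # medium risk: re-verify, e.g., with an SMS one-time passcode
    REVIEW = "review"        # high risk: route to manual review


@dataclass
class RiskSignals:
    phone_number_changed: bool  # has the customer's number changed since onboarding?
    address_changed: bool       # has their registered address changed?
    new_device: bool            # is this an unrecognized device?
    touchpoint_risk: float      # local context: 0.0 (browsing) .. 1.0 (high-value payment)


def score(signals: RiskSignals) -> float:
    """Combine simple, weighted signals into a 0..1 risk score."""
    s = 0.4 * signals.touchpoint_risk
    s += 0.25 if signals.phone_number_changed else 0.0
    s += 0.15 if signals.address_changed else 0.0
    s += 0.20 if signals.new_device else 0.0
    return min(s, 1.0)


def decide(signals: RiskSignals) -> Action:
    """Map the risk score to an action at this point in the journey."""
    risk = score(signals)
    if risk < 0.3:
        return Action.ALLOW
    if risk < 0.7:
        return Action.CHALLENGE
    return Action.REVIEW


def send_sms_otp(phone_number: str) -> None:
    """Placeholder for a call to an SMS verification provider."""
    print(f"Sending one-time passcode to {phone_number}")


if __name__ == "__main__":
    # Example: a checkout where the customer's phone number has changed.
    checkout = RiskSignals(phone_number_changed=True, address_changed=False,
                           new_device=False, touchpoint_risk=0.8)
    action = decide(checkout)
    if action is Action.CHALLENGE:
        send_sms_otp("+15555550100")
    print(f"Decision at checkout: {action.value}")
```

The design point is simply that verification is not a one-time onboarding event: each touchpoint re-scores the customer’s current circumstances and only escalates friction when the local context warrants it.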

As for public institutions, e.g., governments, Surash suggests that “there isn’t enough being done to protect the consumers.” Governments should work with industry to refine value propositions, institute consistent standards, and advocate for consumers.

Clearly, this is all just the tip of the iceberg. There is definitely more to come.

Watch the complete interview with Surash Patel of TeleSign (39:11 min, click here).

REFERENCES

Becker, Michael. “The Chain of Trust & Mobile Number Identity Scoring: An Interview with Virginie Debris of GSM.” Accessed October 28, 2021. https://www.youtube.com/watch?v=ftJ_4800W2Y.

Becker, Michael, and Surash Patel. “The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts.” Accessed October 30, 2021. https://www.youtube.com/watch?v=V5WlrHSohpM.

Buzzard, John, and Tracy Kitten. “2021 Identity Fraud Study: Shifting Angles.” Livonia, MI: Javelin, March 2021. https://www.javelinstrategy.com/content/2021-identity-fraud-report-shifting-angles-identity-fraud.

Citron, Danielle Keats, and Daniel Solove. “Privacy Harms.” Boston University Law Review 102, no. 2022 (February 2021). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3782222.

“Data 2030: What Does the Future of Data Look Like? | WPP.” London: WPP, November 2020. https://www.wpp.com/wpp-iq/2020/11/data-2030—what-does-the-future-of-data-look-like.

Scrase, Julie, Kasey Ly, Henry Worthington, and Ben Skeleton. “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.” Checkout.com, July 2021. https://www.checkout.com/connected-payments/black-boxes-and-paradoxes.

Skiba, Katherine. “Consumers Lost $86m to Fraud Originating in Scam Texts.” AARP, June 2021. https://www.aarp.org/money/scams-fraud/info-2021/texts-smartphone.html.

1. Becker and Patel, “The Identity Imperative.”
2. “Data 2030.”
3. Becker, “The Chain of Trust & Mobile Number Identity Scoring.”
4. Buzzard and Kitten, “2021 Identity Fraud Study.”
5. Scrase et al., “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.”
6. Becker and Patel, “The Identity Imperative.”
7. Skiba, “Consumers Lost $86m to Fraud Originating in Scam Texts.”

The post The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts appeared first on Identity Praxis, Inc..

Thursday, 09. December 2021

Identity Woman

Techsequences Podcast: Self-Sovereign Identity

I chatted with Alexa Raad and Leslie Daigle of Techsequences about self-sovereign identity: what identity is and how we’ve lost control of our own identity in today’s world. Click on the link below to listen. https://www.techsequences.org/podcasts/?powerpress_pinw=252-podcast “Who are you?”. Answering that may seem at once easy and yet incredibly complex.  In the real world, we […]

The post Techsequences Podcast: Self-Sovereign Identity appeared first on Identity Woman.

Sunday, 05. December 2021

Altmode

Sussex Day 11: Padding(ton) Home

Sunday, November 14, 2021

We got an early start, said good-bye to Celeste (who got to stay in the room a little longer), and headed for Paddington Station about 7 am to catch the Heathrow Express. We bought our tickets, got out on the platform, and were greeted with a message board saying that there were delays on the line and that some trains had been canceled. This made us a little nervous, but the Network Rail application on my phone reassured us that there would, in fact, be a train soon. Although we had a bit more than the usual wait for Heathrow Express, the train proceeded normally and was not excessively crowded.

After the usual long walk, we reached the ticket counter and checked in. They were thorough in checking our vaccination and COVID testing status, although not to the point of actually checking the QR codes associated with each. After checking bags, there was another long walk to the vicinity of the gate. United’s lounge in London is still closed, but in the meantime they have an arrangement with Singapore Airlines for the use of their lounge where we were able to get breakfast.

At the gate, Kenna was diverted for extra security screening because the “SSSS” designation was printed on her boarding pass. Following that inconvenience, our flight departed on time, which we appreciated, given that we had only a 2-hour layover in Chicago (including customs and immigration). However, our arrival gate was occupied by another plane, resulting in about a 30-minute delay that made us understandably nervous.

Greenland from the air

Having seen US Customs signs back in San Francisco promoting the Mobile Passport immigration application for our phones, we entered our passport information and customs declaration. But after racing to the immigration hall, we were told, “We don’t use that any more. Get in line.” More nervousness about the time. After getting through Customs (which left us outside security), we took the tram to Terminal 1 for our flight to San Francisco.

Here we noticed that Kenna didn’t have the TSA Precheck designation on her boarding card, probably as a result of the SSSS designation earlier. It may not have mattered; there were signs saying precheck was closed and the people checking boarding passes didn’t seem to know. So we both went through the “slow line”, and unfortunately Kenna set something off and had to go through some extra screening. Apparently they thought there was something about one of her shoes, which they ran through the X-ray machine again; more delay. It was interesting that there were a number of women having their shoes rechecked at the same time.

We raced to our gate, nearly the furthest from the security checkpoint, and made it in enough time, but with not much to spare. The ride to San Francisco was unremarkable, and we collected our bags and caught our ride home, according to plan.

Epilogue

Arriving home we were severely jet lagged as expected, but tried to stay up as late as we could manage. After a few hours of sleep, I awoke about 2 am. I could hear some water dripping, which I attributed to a downspout being clogged with leaves following some recent rainfall. So I got up to investigate, and instead discovered that there was a substantial amount of water dripping from the ceiling into our guest room. It turns out that a hot water pipe in the attic had developed a pinhole leak and over time had soaked one wall. So we now have a new project.

This article is the final installment in a series about our recent travels to southern England. To see the introductory article in the series, click here.

Saturday, 04. December 2021

Altmode

Sussex Day 10: London

Saturday, November 13, 2021

London isn’t in Sussex, that’s just the theme of the trip.

Celeste expressed a lot of interest in visiting the Imperial War Museum, which none of us had visited, so we decided to make that our first destination. After a quick Pret a Manger breakfast, we took the Tube to the south side of London. The first things you notice are the battleship guns at the front. My interest was also piqued by a short segment of the Berlin Wall near the front entrance.

The museum has a large collection on several floors, with areas emphasizing World War I, World War II, the Cold War, the Holocaust, etc. One could easily spend several days to see all of the exhibits. Toward the end of our visit, we went in to the World War II gallery (having already seen quite a number of exhibits dealing with WW II), and it went on…and on. The gallery was very large and went into great detail, including many stories about participants in the war, German as well as Allied. We hadn’t expecte