Last Update 6:38 PM September 17, 2025 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Wednesday, 17. September 2025

ResofWorld

When Africa’s internet breaks, this ship answers the call

Undersea internet cables are critical in today’s hyperconnected world. The crew of the Léon Thévenin maintains one stretch of this global network.
As the Léon Thévenin eased into Cape Town port last month, Shuru Arendse was ready to rush home to his family. He had a month off and a laundry list...

Tuesday, 16. September 2025

OpenID

OIDF supports Japanese regulator on phishing defence

The OpenID Foundation continues to support government partners, with the OpenID Foundation’s Chairman Nat Sakimura recently leading the organization’s expert guidance to Japan’s Financial Services Agency (FSA) on strengthening cybersecurity defences for securities and trading companies facing sophisticated phishing attacks.

Japanese financial firms have been experiencing increasingly sophisticated phishing and unauthorized access attacks, prompting the FSA to revise its supervisory guidelines with mandatory phishing resistant authentication requirements. Under Sakimura’s leadership, the OpenID Foundation provided expert technical input on these security measures.

Several key areas where OpenID standards can strengthen Japan's financial cybersecurity were highlighted. Specifically, the Foundation recommended using the Shared Signals Framework for information sharing between operators (standards already recommended by the US Cybersecurity and Infrastructure Security Agency and National Security Agency), API-first security approaches from the FAPI Working Group, and ongoing security monitoring rather than temporary measures.

Global standards, local impact

The OpenID Foundation's feedback demonstrates how pressing local security issues can be remediated through the ecosystem-wide application of existing, proven international standards. By providing guidance grounded in the latest global best practices, the OpenID Foundation is helping to ensure Japan's new guidelines can deliver on their critical path objectives.

As cyber attacks continue to grow in sophistication and in the harm they can cause, governments, standards bodies, and implementers need to remain not just vigilant but proactive.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post OIDF supports Japanese regulator on phishing defence first appeared on OpenID Foundation.


OpenID for Verifiable Credential Issuance 1.0 Final Specification Approved

The OpenID Foundation membership has approved the following as an OpenID Final Specification:

OpenID for Verifiable Credential Issuance 1.0: https://openid.net/specs/openid-4-verifiable-credential-issuance-1_0-final.html 

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID DCP Working Group.

The voting results were:

Approve – 102 votes

Object – 1 vote

Abstain – 12 votes

Total votes: 115 (out of 442 members = 26% > 20% quorum requirement)

Marie Jordan – OpenID Foundation Secretary


About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.



The post OpenID for Verifiable Credential Issuance 1.0 Final Specification Approved first appeared on OpenID Foundation.


EdgeSecure

Edge Enhances EdgeMarket Platform with Voyatek’s Application Fraud Firewall Solution

NEWARK, NEW JERSEY, September 16, 2025 – Edge, the nation’s leading member-owned nonprofit technology consortium, today announced the addition of Voyatek’s Application Fraud Firewall solution to its comprehensive EdgeMarket procurement platform. This agreement brings cutting-edge fraud prevention capabilities to Edge’s extensive membership network of colleges, universities, K-12 school districts, government entities, and nonprofit organizations nationwide.

Voyatek’s Application Fraud Firewall is an adaptive-AI, cloud-based solution that leverages sophisticated artificial intelligence and identity verification models to analyze admissions, financial aid, and enrollment data with precision and accuracy. The addition of the solution to EdgeMarket comes at a critical time when application fraud has reached epidemic proportions across higher education. Recent estimates suggest that up to 25% of applications at some institutions may be fraudulent, with scammers posing as “ghost students” using stolen or fabricated identities to enroll, disappear with financial aid funds, and create administrative burdens for resource-strapped schools.

“EdgeMarket has always been committed to providing our members with innovative solutions that address their most pressing challenges,” said Dan Miller, AVP of EdgeMarket and Solution Strategy at Edge. “The nationwide surge in application fraud represents a significant threat to the integrity and financial stability of our higher education members. By adding Voyatek’s Application Fraud Firewall to our platform, we’re empowering institutions across the country with the tools they need to protect their students, resources, and reputation while maintaining accessibility for genuine applicants.”

“We’re thrilled to partner with Edge and bring our fraud prevention capabilities to their extensive network of educational institutions and public sector organizations,” said John Van Weeren, VP of Higher Education at Voyatek. “Edge’s commitment to empowering digital transformation aligns perfectly with our mission to help institutions implement intelligent solutions that verify student identities without creating barriers for legitimate applicants. Through EdgeMarket, we can help more schools gain control over the management and monitoring of suspicious applications.”

About Edge

Edge serves as a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks, and nonprofit business entities as part of a membership-based consortium spanning across the nation. For more information, visit njedge.net.

The post Edge Enhances EdgeMarket Platform with Voyatek’s Application Fraud Firewall Solution appeared first on NJEdge Inc.


Hyperledger Foundation

LF Decentralized Trust at One: Accelerating Innovation, Collaboration, and Opportunity

Today marks one year since the launch of Linux Foundation Decentralized Trust (LFDT). Over the past 12 months, our community has grown, new projects have taken root, and existing projects have advanced. It’s a good moment to reflect on what we’ve built together and, more importantly, to look ahead at the opportunities still to come.


Technical Advisory Council Report: Year One of LF Decentralized Trust

Introduction

As we close out the first full year of the Linux Foundation Decentralized Trust (LFDT), it’s clear that our community has made meaningful strides in strengthening both our projects and our processes. Over the past twelve months, the Technical Advisory Council (TAC) has focused on refining our project lifecycle, expanding task forces to tackle critical areas like security and contributor engagement, and ensuring that our governance structures meet the needs of a growing and diverse ecosystem.


Hiero Turns One! Celebrating a Community-Driven Journey, Milestones, and What’s Ahead

September marks an exciting milestone for Linux Foundation Decentralized Trust’s newest ledger project: Hiero! Over the past year, the Hiero project has grown from an ambitious idea into a developed, community-driven initiative along with other projects under the LF Decentralized Trust (LFDT) umbrella.


ResofWorld

Syria’s quest to build its own Silicon Valley

Tech founders in Damascus are rebooting a war-torn country.
At the end of November, Hamza Hourani’s phone rang. It was a cloudy morning in Damascus, and some Syrians were quietly stocking their cupboards as rebel fighters marched south toward...

We Are Open co-op

The algorithms are not working for us

If you’ve been following us for a while, you might know that We Are Open Co-op is a collective that believes in bringing our whole selves to work. We get together and talk about our feelings and about our worker-owned cooperative. Over the years, we’ve had extremely productive co-op days that have resulted in things like:

A site to collect our AI focused think pieces, frameworks and strategies: https://ailiteracy.fyi

A portfolio page to round up some of our work in Digital Credentialing: https://digitalcredentials.fyi

A place to make our favourite community building tactics easily accessible: https://communitybuilding.fyi

We're makers, and while we've managed to share much of what we make, we spend less time promoting ourselves than we probably should. We aren't natural marketers, and platforms such as LinkedIn have soul-sucking algorithms that prioritise shininess over depth.

So, without further ado, this is a promotional post to say that we have upcoming capacity for interesting work. Hire us! 

At the moment, we’re helping Amnesty International UK build an online community for their activists and we’re collaborating with Skills Development Scotland on a potential national system for Verifiable Credentials. Both of these projects are complex, interesting and belong to organisations that are actively trying to help people. 

Other projects we’ve worked on this year have involved helping MIT’s Digital Credentials Consortium host a summit for their network in The Netherlands, carrying out research for the BBC on how they can approach AI Literacy education, and publishing a report on the environmental impact of AI for Friends of the Earth.  

While we were thinking about what to write in this post, we reflected on why these projects feel right to us. Beyond the specific themes involved - we usually work at the intersection of learning, technology, and community - we also talked about these cross-cutting attributes:

Strategy 

WAO loves helping organisations figure out the strategy stuff. We are great at helping organisations develop and refine big visions and then figuring out the practical and tactical ways people can achieve those visions. Through our theory of change workshops, strategy sessions and participatory methodologies, we’ve helped all kinds of organisations dream big, achievable dreams.

We believe in proactive planning, even in reactive situations! For example, when we helped Greenpeace International develop their crisis comms training programme, we were helping to establish a community of practice that could react to potential future critical incidents. When we worked with Friends of the Earth, we researched and wrote a nuanced article that helped them thread the needle between AI and environmental activism. With the Digital Credentials Consortium, we helped them establish a community-focused engagement plan that reflected a complex network of organisations. 

We are great at seeing the big picture and developing practical strategies that help bring those visions to life.

Advocacy

It shouldn’t be surprising to anyone who has read any of our other posts that we have opinions. We really do. We are internet people who have been working at the intersection of technology and society for our entire careers. We have a lot to say about privacy, decentralisation, open source, environmentalism, education and so much more. At this moment in history, we are still working to create technology that respects our rights. 

It's not always easy, but we firmly believe in modelling the behaviours that you want to see in the world. That means being open and honest, and helping others understand what's at stake with regard to our data, the environment and our futures. We believe in solidarity, not charity, and are organised as a cooperative aligned with the International Cooperative Alliance's identity, values & principles. The spirit of WAO page on our wiki further describes our philosophical bent ;)

Work that has real impact

We have been lucky to work with clients and on projects that are looking to positively impact people’s lives. This is the kind of work we love. Real impact for us means that at the end of the day, the work we’ve done has helped empower, educate or encourage people to live their best lives. As we said at the beginning of this post, we believe in bringing our full selves, and we believe in helping others do the same. Real impact comes from communities and connections with real people.  

Projects where we have a lot of agency

As our website states, we don't just think outside the box; we shred it (and then recycle it, obviously). Our services include consultancy, workshops and training, project and product management, research and development, community building, and everything in between. That means we have yet to meet a pre-established set of KPIs that can handle our general awesomeness. We prefer projects that allow us to stretch our wings past meaningless metrics and foregone conclusions. We work in partnership.

What we do at WAO:

Collaborate with organisations on sensemaking, digital transformation, strategy, product and generally making their work awesome

Work openly because that's how innovation happens and the world becomes a better place

Adapt our work to the realities of your organisation, because humans are messy

Share our networks, ideas and brains with your organisation. When we go, we like to go big.

Collaborate with empathy, understanding, and humanity because life is hard and no-one likes jerks.

Find other people if you want to…

Hand over a spreadsheet of unnegotiated deliverables and expect us to get them done like you're a teacher and this is our homework.

Keep us in the dark about what's going on in your organisation as though you're our parents.

Require sign-off for every small decision by someone outside the project team as if we're not grown up enough to decide what needs doing.

Argue about paying us what you agreed (or pay us late every time). We're arguably inexpensive. This isn't a flea market, and we're not haggling.

Treat us as taskrabbits, data entry clerks, or otherwise insist that we're merely "consultants". This ain't Deloitte.

Responsive clients

Finally, we’ve been quite fortunate to have found clients that “get” us. We love working with people who are also bringing their full selves to work and who aren’t afraid to change course when new information comes in. We know authenticity when we see it, and we appreciate honest, reflective responsiveness. We don’t know everything, and it’s ok if you don’t either. Together we can figure it out.

So, do you have an upcoming project that you’d like to talk to us about? Schedule a free 30 minute call and let’s see if we can work together. 

All images in this post are licensed cc-by-nd Bryan Mathers for WAO

Monday, 15. September 2025

Hyperledger Foundation

ToIP and DIF Announce Three New Working Groups for Trust in the Age of AI

Trust Over IP (ToIP), an LF Decentralized Trust (LFDT) project, and the Decentralized Identity Foundation (DIF) have launched three new Working Groups focused on digital trust for agentic AI:


DIDAS

The E-ID and You: DIDAS Launches the Youth Challenge

DIDAS Position on the E-ID

As an association, DIDAS advocates for the state-issued E-ID as well as the underlying trust infrastructure. Both are key building blocks for a secure, efficient, and trustworthy digital society in Switzerland.

E-ID

The E-ID enables reliable identification in the digital space – especially where secure identity verification is required:

Public services, e.g., ordering official documents such as register extracts

Opening a bank account

Online purchases, e.g., buying alcohol or concluding a mobile phone contract

Use cases can either be digital versions of existing processes (such as opening a bank account) or entirely new scenarios that exist only online.

Trust Infrastructure

The E-ID is based on a cryptographically secured trust infrastructure. This provides the foundation for a wide range of digital proofs (verifiable credentials), which – independently of the E-ID – can be managed in a personal wallet.

Put simply: everything that today exists as a plastic card – and much more – can be securely digitized and stored, for example:

Learner’s permit and driver’s license

Insurance card

Proof of residence (e.g., for subsidized family transport services)

Membership cards (loyalty programs, fitness clubs, libraries)

Educational certificates such as diplomas

Digital medical prescriptions

Personal health data (see award-winning prototype from the GovTech Hackathon 2024)

With the trust infrastructure, access to digital services becomes simpler, and legal certainty increases – thanks to the use of original data rather than insecure copies. This boosts efficiency, trust, and reliability in the digital space. Applications range from familiar processes to new digital services that do not exist in the analog world.

E-ID Challenge: The E-ID and You

To involve young people early in the digital discourse, DIDAS is launching the E-ID Challenge.

The competition is aimed at secondary school students, grammar school classes, and vocational school classes, encouraging them to engage creatively and thoughtfully with the E-ID and the trust infrastructure.

The Task:
In a short video, students explain what the E-ID and the trust infrastructure are and which applications they envision.

Objectives:

Promote creative and differentiated contributions

Strengthen understanding of this paradigm shift

Bring young voices into the democratic discourse

Prizes:
The three best contributions will be awarded prizes with a total value of CHF 3,500.–.

Further information and participation details can be found in the media release and will soon be available on this page.

E-ID Challenge

Press Release

ResofWorld

Countries are struggling to meet the rising energy demands of data centers

Mexico's lagging energy grid is forcing companies, including Microsoft, to use generators.
Behind a miles-long gray wall in the central Mexico town of Colón, a Microsoft data center portends the future for other data centers in the country. Unable to plug into...

Friday, 12. September 2025

ResofWorld

The myth of sovereign AI: Countries rely on U.S. and Chinese tech

As countries pursue self-sufficiency in AI, they risk depending on foreign companies, undermining their independence and their goals.
Every government aiming for sovereign artificial intelligence has roughly the same mantra: build their own ChatGPT, leapfrog into AI leadership, and be free of foreign dependence. It’s a compelling narrative,...

Thursday, 11. September 2025

The Engine Room

Community Call: Website Security for CSOs and Nonprofits

Join our next community call for a discussion about website security and how civil society organizations, human rights defenders and nonprofits can prepare, respond to and recover from website threats. 

The post Community Call: Website Security for CSOs and Nonprofits appeared first on The Engine Room.


Digital ID for Canadians

DIACC Welcomes New and Returning Directors to Board Following AGM

Toronto, ON – September 11, 2025 – The Digital ID & Authentication Council of Canada (DIACC) is pleased to announce the election and re-election of several distinguished leaders to its Board of Directors following the Annual General Meeting held in June 2025.

The DIACC Board plays a critical role in guiding the organization’s mission to unlock the full potential of digital identity and trust frameworks for the benefit of all Canadians. This year’s appointments bring together expertise from across government, technology, finance, telecommunications, and the innovation sector.

The following directors were elected and re-elected to the Board:

Jillian Carruthers – Assistant Deputy Minister & Chief Technology Officer, Government of British Columbia

Giselle D'Paiva – Partner, Deloitte Canada

Dalia Hussein – Vice President, Platform Engineering Excellence, TELUS

Jonathan Kelly – Assistant Deputy Minister, Partnerships and Government Digital Strategies, Ministère de la Cybersécurité et du Numérique, Québec

Patrick Mandic – CEO, Mavennet Systems Inc.

CJ Ritchie – Independent Executive and Advisor, Independent Consulting EY

These directors bring a breadth of experience spanning digital infrastructure, cybersecurity, trusted identity solutions, and technology governance, ensuring that DIACC continues to lead Canada’s digital trust ecosystem with strength and vision.

Joni Brennan, President, DIACC:
“We are thrilled to welcome these outstanding leaders to our Board. Their diverse expertise and commitment to advancing digital trust will help DIACC continue to deliver real impact for Canadians and our economy.”

Jillian Carruthers, Assistant Deputy Minister & Chief Technology Officer, Government of British Columbia:
“As a public servant, I’m honoured to join the DIACC Board to help advance a safer, more trusted digital ecosystem—where people can confidently engage, knowing security, privacy, and inclusion are built in from the start.”

Patrick Mandic, CEO, Mavennet Systems Inc.:
“Being elected as a DIACC director is a great honour. As the global socio‑political order shifts and digital fraud rises alongside rapid advances in AI, building secure rails of digital trust has never been more critical for Canada. I’m excited to help guide DIACC’s vital mission forward and contribute to its success.”

DIACC also extends sincere thanks to Andre Boysen for his long-standing contributions to the organization and the broader community. His leadership and insights have been instrumental in advancing DIACC’s mission over many years.

Dave Nikolejsin, DIACC Board Chair
“On behalf of the Board and membership, we welcome the new and returning Directors and wholeheartedly thank Andre for his dedication and service during his time on the Board. His efforts helped shape the foundation of DIACC’s achievements today.”

As Canada continues to face evolving challenges in digital trust, identity, cybersecurity, fraud prevention, and interoperability, the leadership of DIACC’s Board of Directors will be crucial in fostering collaboration between the public and private sectors to develop trusted solutions that work for all Canadians.

The following elected and re-elected Directors will continue their leadership alongside the full Board listed below.

DIACC Board Chair: Dave Nikolejsin, Strategic Advisor with McCarthy Tetrault

DIACC Board Vice-Chair: Jonathan Cipryk, Vice President of Canadian Technology Functions, Manulife

Manish Agarwal, Chief Information Officer (CIO), Government of Ontario

Mike Cook, CEO, Identos

Balraj Dhillon, General Manager of Product Platforms and Channels, Canada Post

Erin Hardy, General Counsel and Chief Privacy Officer, Service New Brunswick

Jonathan Kelly, Assistant Deputy Minister for Partnerships and Government Digital Strategies, Province of Quebec

Karan Puri, Associate Vice President, TD Bank

Pierre Roberge, Independent

About DIACC
The Digital ID and Authentication Council of Canada (DIACC) is a non-profit coalition of public and private sector leaders committed to advancing a robust, secure, and user-centric digital identity ecosystem. Through collaboration, innovation, and adoption of the Pan-Canadian Trust Framework (PCTF), DIACC works to unlock opportunities for individuals, businesses, and governments in Canada and beyond.

Media Contact:
Joni Brennan
President
Digital ID and Authentication Council of Canada (DIACC)
communications@diacc.ca


DIF Blog

Nominations now open for Steering Committee Election 2025

DIF was established to represent our fast-changing community and create a safe space for designing and prototyping ambitious new identity architectures. From the original handful of members 9 years ago, DIF grew into an organization with over 400 member companies contributing thousands of lines of code and documentation and changing the conversation in the tech industry. 

Since its inception, DIF has been governed by a Steering Committee, like most Linux Foundation projects. The Steering Committee is comprised of volunteers from core member organizations; its primary functions are to set strategy, deliberate on finer points of DIF’s direction and identity, maintain the processes of the organization, provide guidance to the executive director, and tend to the health of the community and its conversations. 

Periodically, DIF holds elections for roughly half of the Steering Committee’s seats to keep representation and community needs aligned. The process is specified here, but the following overview may be a faster read:

Dates: 2025 election

Announcement of election + nomination period opens – Today (Thu 11 Sep)

Last day to propose questions to the candidates (Thu 9 Oct)

Nomination period closes & platform statements due (Thu 16 Oct)

SC ballot opens (Thu 23 Oct)

SC ballot closes (Thu 30 Oct)

First meeting of newly-reconstituted SC (Thu 6 Nov)

Who votes: Associate Members (one ballot per organization)

How private are votes: Seen and tallied only by DIF staff, stored in case of complications

Who can stand on the steering committee: Any DIF member can nominate and any DIF member can be nominated. Reminder: DIF members include Associate members, Contributors, individuals who signed a Feedback Agreement, and DIF liaison organizations. Nominations should be sent to nominations@identity.foundation.

To be on the ballot, nominees must, via email by Thu 16 Oct:

accept the nomination before the nomination period closes

provide a short biography and statement describing the nominee's interest in and qualifications for serving (600 words max)

provide answers to the "platform"/philosophy questions sent out 17 Oct (600 words max); these will be published together on the DIF website and linked from the ballot

Early nominations are encouraged!

Questions to candidates: All members are encouraged to submit questions for all the SC candidates on issues regarding the management and direction of the Foundation. DIF staff will compile and synthesize the questions and provide an anonymized, representative sample to all SC candidates to prompt their statements.

Election Logistics: Each DIF Associate member submits one organizational ballot with votes for up to the number of seats then contested for the Steering Committee (6). The election lasts for one week, between 23 Oct and 30 Oct. Associate members will be contacted during the nomination period to confirm each organization's point of contact.

ResofWorld

The cold truth about EVs: Freezing weather slashes battery mileage

Innovation in China and Norway makes strides, but in most markets EVs can lose almost half their driving distance when temperatures drop.
Bashir Ahmad sold his wife’s gold jewelry to buy an electric three-wheeler that would revolutionize his apple business in Kashmir. Until winter arrived and killed it. On a freezing morning,...

Wednesday, 10. September 2025

ResofWorld

How Vietnam’s military built one of the hottest tech companies in Southeast Asia

Over two decades, Viettel grew from a state telco into a tech powerhouse that makes everything from 5G chips to robots to affordable mobile phones.
A Vietnamese military-owned company has quietly become one of the most ambitious state-backed technology firms in the world. Viettel, launched by Vietnam’s Ministry of National Defense in 1989 to build...

Next Level Supply Chain Podcast with GS1

What’s Really on Your Plate? Insights on Food Safety Risks

When food safety is on the line, every hour counts. The FDA’s new FSMA 204 rule is raising the standard for traceability, with stronger requirements designed to track products faster and manage recalls more effectively. At stake is not just compliance, but the ability to protect both consumers and businesses when outbreaks occur.

In this episode, Angela Fields from the FDA joins hosts Reid Jackson and Liz Sertl to explain what FSMA 204 means for supply chains. They explore why proactive traceability is replacing outdated reactive models and how better data is improving the speed and accuracy of investigations.

You’ll also hear real stories from outbreak response, how electronic records can cut weeks off investigations, and why collaboration across the food industry makes a difference for everyone.

In this episode, you’ll learn:

How FSMA 204 creates new opportunities for supply chain transparency

Why recalls work best when industry and regulators communicate clearly

What steps companies can take now to prepare for the 2028 compliance deadline

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:58) Angela Fields' background

(02:59) Food safety from a regulatory perspective

(04:13) How the environment affects supply chain risks

(06:59) What FSMA 204 means for industry

(08:40) Spinach outbreaks and the cost of recalls

(09:53) Why regulations also protect food companies

(14:23) How electronic records speed outbreak investigations

(17:17) Who triggers recalls and how they happen

(19:33) Best practices companies use to prevent recalls

(22:15) Where consumers can track recalls and outbreaks

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

Connect with the guests:

Angela Fields on LinkedIn

Check out the FDA


ResofWorld

Is Apple’s iPhone 17 launch a win for India? We asked experts

All U.S.-bound iPhones will reportedly be produced in India, showing how U.S.-China tensions are reshaping global supply chains.
Apple launched the much-awaited iPhone 17 on September 9, defying geopolitical headwinds and complications in its effort to decouple its supply chain from China. The company will produce all the...

Tuesday, 09. September 2025

OpenID

Registration Open for OpenID Foundation Hybrid Workshop at Cisco on Mon 20th October 2025

The OpenID Foundation will be holding a hybrid workshop on Mon 20th October 2025, just ahead of the Fall 2025 Internet Identity Workshop (IIW). (We are waiting on a discount code for OIDF members and will forward this ASAP!)

This hybrid event will take place both in person at Cisco's Santana Row offices in San Jose, California, USA, and online, making it accessible to participants worldwide.

Event Details:

Date: Monday 20th October 2025

Time: 12:30 – 15:45 PDT
Location: Cisco, Santana Row | SJC34, 3098 Olsen Drive, San Jose, California, 95128 United States

Room: SJC34-1-Training Room 

Virtual Option: Details on how to join virtually will be emailed to registrants nearer the time


This meeting is an excellent opportunity for the community to engage with fellow experts, share updates, and collaborate on the latest advancements across the OIDF specifications and Community Groups. With IIW just around the corner, it’s the perfect chance to align efforts and gain valuable insights before the main workshop begins.


Agenda Highlights:

What's New at OIDF & What's Happening for the rest of 2025?

Working Group Updates

Discussion on Emerging Digital ID trends

Deep dive into our latest white papers on AI Identity Management and Death and the Digital Estate

Why Attend?

Engage with leading experts in the industry

Shape the future and weigh in on the work of the Foundation

Prepare for IIW with discussions relevant to the upcoming workshop


Whether you plan to join us in San Jose, CA or virtually, we look forward to your participation in shaping the next phase of digital identity standards.

Please note:

All registered participants will receive a link to participate virtually prior to the workshop.

This is an after-lunch workshop with beverages and snacks provided to those attending in person.

In-person participants and members attending IIW will be invited to evening drinks in Mountain View – stay tuned for details!

The Foundation's Note Well Statement can be found here and is used to govern workshops.


We will publish the full agenda soon. In the meantime, you can get ahead and guarantee your place by registering today!

Please register via Eventbrite HERE.

We hope you can join us!


The post Registration Open for OpenID Foundation Hybrid Workshop at Cisco on Mon 20th October 2025 first appeared on OpenID Foundation.


How SSF/CAEP and STIX/TAXII Secure Different Fronts

By Shared Signals Framework WG Contributor, Apoorva Deshpande, Okta

In the realm of cybersecurity, there are two critical sets of frameworks that serve distinct yet vital roles in how organizations share and act upon security information – the Shared Signals Framework (SSF), with its Continuous Access Evaluation Protocol (CAEP), and the Trusted Automated eXchange of Indicator Information (TAXII) protocol built to transport Structured Threat Information eXpression (STIX). While both aim to bolster security postures, their fundamental designs dictate their suitability for different operational needs: SSF/CAEP excels in the fast-paced world of continuous authentication and real-time response, whereas STIX/TAXII is the standard for comprehensive threat intelligence sharing and in-depth investigations.

The core difference lies in their intended purpose and underlying architecture. The SSF, with its CAEP, is designed for the real-time communication of security events to enable continuous, dynamic access decisions. STIX, transported over the TAXII protocol, provides a rich, detailed language for describing the broad landscape of cyber threats for in-depth analysis and investigations.

Think of these standards in the emergency room analogy:

SSF/CAEP is like a real-time heart signal from the patient. It sends immediate, specific notifications like "heart rate is low", which require an immediate response: an alarm sounds, a code blue is called, and a defibrillator is used.

STIX/TAXII is like the patient's detailed medical chart combined with a research library. The chart provides a rich, historical, and predictive understanding of who the patient is, their labs, and their genetic history (indicators and vulnerabilities). The library (campaigns and TTPs) contains research on the disease and underlying medical conditions. Doctors use this information to diagnose the root cause and develop treatment plans.

SSF/CAEP: The Sentinel of the Active Session

At its heart, SSF/CAEP operates on a real-time, event-driven, publish-subscribe model using a generic webhook to transmit standardized security events. SSF defines how a transmitter and receiver can exchange data in the form of CAEP events, and it supports delivery to the receiver via both push and poll mechanisms. This means that when a significant event occurs, a transmitter (e.g., an identity provider or a mobile device management system) can immediately publish a signal to a receiver (e.g., an application or a VPN gateway) that has subscribed to receive such updates. This helps bridge the security silos between various systems and vendors in the customer environment, using open standards for true interoperability. Sharing security events in this way helps protect customers' identities from risks and threats detected by any one of those systems.

SSF and CAEP are separate specifications housed in the OpenID Foundation’s Shared Signals Working Group and are actively being developed. 

How it Fuels Continuous Authentication Decisions:

This event-driven nature makes SSF/CAEP exceptionally useful for continuous authentication and access control. Instead of a one-time authentication check at the beginning of a session, SSF/CAEP allows for ongoing, dynamic risk assessment. It enables the Zero Trust principle of "never trust, always verify" by allowing continuous evaluation of access, even after initial authentication. Here's how it works in practice:

Session Revocation: If, for example, a user's credentials are leaked and detected by a threat intelligence system, that system can immediately issue a session-revoked event. Any application the user is logged into will receive this signal and can terminate the session in near real-time, preventing further unauthorized access.

Credential Change: When a user changes their password or multi-factor authentication (MFA) method, a credential-change event can be transmitted. Sensitive applications can then prompt the user for re-authentication or reduce available functionality before allowing critical actions.

Device Compliance Change: If a user's device suddenly becomes non-compliant with security/compliance policies (e.g., malware is detected or security settings are disabled), a CAEP event can be triggered to limit or block access from that device until the issue is remediated.

Risk and Assurance Level Changes: A sudden change in a user's risk profile, such as logging in from a new and unusual location, can trigger an event that dynamically adjusts their access privileges. For instance, they might be moved to a lower-trust tier, restricting access to highly sensitive data.

Putting SSF/CAEP to Work: Powering Real-Time Enforcement

SSF/CAEP events are not meant for leisurely analysis in a log file. They are high-priority, perishable signals designed to trigger immediate, automated action. When ingested, these events become the real-time fuel for your identity and access infrastructure.
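
For a sense of what such a signal looks like on the wire, here is a minimal sketch of the claims a transmitter might place in a Security Event Token (SET, RFC 8417) carrying a CAEP session-revoked event. The issuer, audience, and subject values are hypothetical, and field placement is simplified for illustration.

```python
import json
import time
import uuid

# Claims for a Security Event Token (RFC 8417) carrying a CAEP
# session-revoked event. All identifier values are hypothetical.
set_claims = {
    "iss": "https://idp.example.com",   # transmitter
    "aud": "https://app.example.com",   # receiver
    "iat": int(time.time()),
    "jti": str(uuid.uuid4()),           # unique event ID
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "user@example.com"},
            "event_timestamp": int(time.time()),
        }
    },
}

# In practice these claims are signed as a JWT and delivered to the
# receiver via an SSF push or poll stream.
print(json.dumps(set_claims, indent=2))
```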

Directly into Access Control Engines: CAEP events can be streamed directly to the systems that grant or deny access, such as your Identity Providers (IdP), Zero Trust Network Access (ZTNA) solution, APIs, and business-critical applications. This allows a session-revoked event to instantly terminate a user's session network-wide.

Into Risk Calculation Infrastructure: These signals can be funneled into your risk engines to dynamically adjust a user's trust score. For example, a device-compliance-change event could instantly raise an identity's risk profile, automatically restricting their access to sensitive data until the issue is resolved.
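
On the receiving side, push-based SET delivery (RFC 8935) is an HTTP POST of the signed SET with the application/secevent+jwt content type. Below is a minimal Flask sketch of such a receiver; the endpoint path and the terminate_sessions hook are hypothetical, and a real receiver must verify the SET's signature.

```python
from flask import Flask, request
import jwt  # PyJWT; key retrieval and verification config omitted

app = Flask(__name__)

SESSION_REVOKED = (
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked"
)

@app.route("/ssf/events", methods=["POST"])
def receive_set():
    # RFC 8935 delivers the signed SET with this content type.
    if request.content_type != "application/secevent+jwt":
        return "unsupported media type", 415
    token = request.get_data(as_text=True)

    # Illustration only: a real receiver must verify the SET's signature
    # against the transmitter's published keys before acting on it.
    claims = jwt.decode(token, options={"verify_signature": False})

    if SESSION_REVOKED in claims.get("events", {}):
        subject = claims["events"][SESSION_REVOKED].get("subject")
        terminate_sessions(subject)  # hypothetical enforcement hook

    # A 202 acknowledges receipt so the transmitter does not retry.
    return "", 202

def terminate_sessions(subject):
    print(f"Revoking sessions for {subject}")
```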

STIX/TAXII: The Archivist for Threat Investigation

In contrast to the immediate, session-focused nature of SSF/CAEP, STIX/TAXII serves as a robust framework for sharing comprehensive threat intelligence, built around STIX Domain, Cyber-observable, and Relationship object types that create inter-relationships between the objects. TAXII is the transport mechanism, defining how threat data is exchanged, while STIX is the language used to structure that data.

STIX and TAXII are separate but complementary standards governed by the OASIS Cyber Threat Intelligence Technical Committee (CTI TC). This nonprofit consortium drives the development, convergence, and adoption of open standards for the global information society.

How it Powers Investigation:

STIX provides a rich and detailed vocabulary to describe the “who, what, when, where, and how” of a cyberattack. This includes:

Threat Actors: Detailed profiles of adversary groups, including their motivations, capabilities, and typical targets.

Campaigns: Information about coordinated malicious activities over time.

Indicators of Compromise (IoCs): Specific artifacts like malicious IP addresses, file hashes, and domain names that can identify a breach.

Tactics, Techniques, and Procedures (TTPs): Descriptions of the methods used by attackers (often mapped to frameworks like MITRE ATT&CK).

Malware: Detailed analysis of malicious software.

Vulnerabilities: Information about software weaknesses exploited by attackers.

Relationship: The object that links everything together (e.g., Threat Actor APT29 uses Malware SUNBURST in Campaign SolarWinds).
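
As a sketch of how these objects fit together, the following uses the stix2 Python library to link an indicator to a malware object; all names and values are illustrative, not real intelligence.

```python
from stix2 import Indicator, Malware, Relationship, Bundle

# An indicator of compromise: a made-up malicious IP address,
# expressed in the STIX patterning language.
indicator = Indicator(
    name="Command-and-control IP",
    pattern="[ipv4-addr:value = '198.51.100.23']",
    pattern_type="stix",
)

# The malware the indicator points to (illustrative name).
malware = Malware(name="ExampleRAT", is_family=False)

# The Relationship object ties them together: indicator -> indicates -> malware.
relationship = Relationship(indicator, "indicates", malware)

# Bundle the objects for exchange, e.g., publication to a TAXII collection.
bundle = Bundle(objects=[indicator, malware, relationship])
print(bundle.serialize(pretty=True))
```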

STIX also has a concept of "extensions" to accommodate more information from other standards and custom events. Indicators of Behavior (IoB) and Collaborative Automated Course of Action Operations (CACAO) use these extensions to fit within the STIX bundle, embedding action playbooks and remediation actions as a base64 string, along with more information about associated intrusions or campaigns.

This intelligence-centric model makes STIX/TAXII invaluable for security operations centers (SOCs), threat hunters, and incident responders. TAXII defines how clients and servers talk to each other to exchange STIX data. It supports various sharing models, including a hub-and-spoke model (one central repo) and a peer-to-peer model (multiple groups sharing with each other):

Post-Breach Forensics: After a security incident, investigators can use STIX-formatted intelligence to understand the full scope of the attack, identify the attacker's TTPs, and determine what other systems may be at risk.

Threat Hunting: Security analysts can proactively search their networks for the IoCs and TTPs described in STIX reports to uncover hidden threats.

Enriching Security Alerts: When a security tool generates an alert, it can be enriched with STIX data to provide analysts with a more complete picture of the potential threat, enabling a more informed response.

Strategic Threat Intelligence: By analyzing long-term trends in threat intelligence structured with STIX data, organizations can better understand the threat landscape and make more strategic decisions about their security investments and defenses.
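
For instance, a threat hunter might poll a TAXII 2.1 collection for newly published STIX objects. A minimal sketch using the taxii2-client library follows; the server URL, collection ID, and credentials are placeholders.

```python
from taxii2client.v21 import Collection

# Placeholder TAXII 2.1 collection URL; substitute a real feed.
COLLECTION_URL = (
    "https://taxii.example.org/api/v1/collections/"
    "11111111-2222-3333-4444-555555555555/"
)

collection = Collection(COLLECTION_URL, user="analyst", password="secret")

# Fetch the envelope of STIX objects published to the collection.
envelope = collection.get_objects()

for obj in envelope.get("objects", []):
    if obj["type"] == "indicator":
        print(obj["pattern"])  # e.g., "[ipv4-addr:value = '198.51.100.23']"
```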

Operationalizing STIX/TAXII: Supercharging Your Security Analytics

While CAEP events trigger immediate enforcement, TAXII/STIX feeds provide the deep context that supercharges your security analytics and threat detection capabilities.

Into Your SIEM Platform: SIEM systems can leverage TAXII to pull in threat intelligence feeds, enriching the security log with context from external sources. It allows the SIEM to correlate a seemingly minor internal alert with the TTPs of a known global threat actor, instantly escalating a low-level event into a high-priority incident.

Into SOAR and Threat Intelligence Platforms (TIPs): When ingested by Security Orchestration, Automation, and Response (SOAR) platforms, STIX indicators can automatically trigger playbooks—for instance, taking a new malicious IP address and adding it to firewall blocklists across the enterprise without human intervention.
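
A playbook step of that kind can be sketched as follows. The add_to_blocklist hook is hypothetical, and a production pipeline would use a proper STIX pattern parser rather than a regex.

```python
import re

def extract_ipv4_iocs(stix_pattern: str) -> list[str]:
    """Pull IPv4 values out of a simple STIX comparison pattern."""
    return re.findall(r"ipv4-addr:value\s*=\s*'([^']+)'", stix_pattern)

def add_to_blocklist(ip: str) -> None:
    # Hypothetical enforcement hook: call your firewall's API here.
    print(f"Blocking {ip} across the enterprise")

pattern = "[ipv4-addr:value = '198.51.100.23']"
for ip in extract_ipv4_iocs(pattern):
    add_to_blocklist(ip)
```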

In essence, SSF/CAEP and STIX/TAXII are not competitors, but rather complementary technologies. An ideal security architecture would leverage both: SSF/CAEP to make rapid, tactical decisions to protect active sessions, and STIX/TAXII to provide the deep, strategic intelligence needed to understand and defend against the ever-evolving threat landscape.

Call to Action

The Shared Signals Working Group looks forward to working with STIX and TAXII implementers to realize the potential of bridging these standards. The OpenID Foundation looks forward to working with peers at OASIS, FS-ISAC, and other partners to support our shared communities in realizing the benefits of bridging our approaches. Together, we can facilitate the adoption of a more secure identity and security fabric that interoperates across organizations and silos.


Realizing this vision involves exploring practical ways to interoperate these complementary standards. For instance, a STIX message could ride on the SSF infrastructure to provide immediate context to a security event. Conversely, CAEP events could be made available over TAXII, providing identity actions for additional analysis. This interoperability will provide added security value by fusing immediate enforcement with analytical context, helping an alert from one ecosystem inform an action in another, breaking down walls and leading to a more responsive security ecosystem.

Additional Resources:

OIDF specs and overviews:

Shared Signals WG Home Page: https://openid.net/wg/sharedsignals/

Resource guide to Shared Signals: https://sharedsignals.guide/

Blog series: https://openid.net/shared-signals-framework-the-blueprint-for-modern-iam-part-1-of-4/

https://openid.net/juggling-with-fire-made-easier-provisioning-with-scim/

STIX & TAXII materials:

https://oasis-open.github.io/cti-documentation/

https://www.cloudflare.com/learning/security/what-is-stix-and-taxii/

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.


The post How SSF/CAEP and STIX/TAXII Secure Different Fronts first appeared on OpenID Foundation.


Center for Internet and Society

Call for Final Specification Commitments for Verifiable Credential Rendering Methods v0.9

On 8 September 2025 the Credentials Community Group published the following specification:

Verifiable Credential Rendering Methods v0.9

This is a Call for Final Specification Commitments. To provide greater patent protection for this specification, participants in the Credentials Community Group are now invited to make commitments under the W3C Community Final Specification Agreement by completing the commitment form. Current commitments are listed on the Web. There is no deadline for making commitments.

If you represent a W3C Member, please contact your Advisory Committee Representative, who is the person from your organization authorized to complete the commitment form.

If you have any questions, please contact the group on their public list: public-credentials@w3.org. Learn more about the Credentials Community Group.


ResofWorld

How a movement against corruption on Nepali social media triggered unrest and death

In what has come to be known as the “Gen Z protest,” thousands of demonstrators took to the streets and 19 were reportedly killed.
Nepali Prime Minister KP Sharma Oli has resigned following two days of protests against rampant corruption and inequality, exacerbated by the government’s recent decision to ban some social media platforms...

Latin American musicians say AI is stealing their streams

Musicians from Chile to Mexico say bots are stealing streams, siphoning income, and shortening the lifespan of songs. Even Bad Bunny isn’t safe.
Afro-Chilean artist Nekki has been spreading her anti-racist message through her reggae-rap lyrics for years, but recently, she feels like she is being blocked from reaching a wider audience.  She...

Monday, 08. September 2025

ResofWorld

Indigenous group in Brazil takes TikTok to court over planned data center

The Anacé Indigenous community is going to court to stop a planned TikTok data center they say is being built on their land.
In late August, an Indigenous group in the Brazilian state of Ceará, in the country’s northeast, brought a formal complaint before federal authorities, asking them to halt the development of...

Saturday, 06. September 2025

Project VRM

Protocols for MyTerms

MyTerms (IEEE P7012 Draft Standard for Machine Readable Personal Privacy Terms, unpacked here) has a simple conceptual structure that is open to many different protocols and roles for them. Note the arrows in this graphic:

Protocols are required for those.

Here is an alphabetized list of some protocols that I know so far, and what I think they might do (given my incomplete knowledge across all of them). Note that the standard never says "user," which has subordinate and dependent implications. It calls the first party a "person" or an "individual," and the second party an "entity."

A2A Protocol — "An open protocol enabling communication and interoperability between AI agents, giving them a common language – irrespective of the framework or vendor they are built on." More here.

ActivityPub — Can publish or reference a MyTerms URI in actor metadata or message extensions so follows/interactions happen under the person's terms.

AT Protocol — Can include a MyTerms pointer in profile schemas or event metadata so interactions can be logged under the proffered terms.

Beckn Protocol — Can carry a MyTerms URI (or the terms JSON) in discovery/order messages and bind acceptance in the async ACK/NACK flow.

DIDComm v2 — Can attach MyTerms as a claim/document in DID-to-DID messages; the counterparty signs/acks to bind the contract.

GNAP — Can pass a MyTerms URI/hash in the grant/interaction; record acceptance alongside the grant.

HCP (Human/Hyper-Capability Protocol) — Called (at that link) "a user-owned, secure, and interoperable preference layer that grants individuals granular, revocable control over how their data steers AI systems," it can store a MyTerms reference in the person's preference set, gate releases on acceptance, and optionally include the URI/hash in OAuth flows to enable audit.

HTTP Message Signatures (RFC 9421) — Can bind MyTerms to specific HTTP exchanges by signing requests/responses that include a terms reference.

HTTPS — This is generic transport. It can attach or link MyTerms in headers/body and have the counterparty echo/ack to the transaction log.

JLINC — Designed for MyTerms-like ceremonies, it can carry a MyTerms ID/hash for "data shared under an agreement."

Matrix — Can include a MyTerms pointer in a profile state or an event content so rooms/interactions are conducted under the person's terms.

Model Context Protocol (MCP) — Can send a MyTerms URI/hash in a tool/agent handshake or call metadata, so tools operate under those terms and log acceptance.

NANDA (Internet of AI Agents) — Can expose MyTerms during agent discovery/handshake and in registry metadata so agents negotiate under the person's terms.

Nostr — Can include a MyTerms reference in profile/event tags so relays and clients can honor and log acceptance.

OAuth 2.0 — Can carry MyTerms as a parameter or in a request object, recording consent/acceptance with the access transaction.

OpenID Connect — Can include a MyTerms URI/hash as a claim (e.g., in the ID token) or request object, with the RP/OP logging acceptance.

Solid — Can host the person's MyTerms in their wallet (formerly called a pod) and require apps or services to transact under those terms for resource access.

UMA 2.0 — Can treat MyTerms as a policy at the resource server and share only with parties that have accepted the person's terms.

Web Linking (RFC 8288) — Can advertise a MyTerms URI via Link: headers or a /.well-known/ location for discovery and binding (see the sketch after this list).
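
As an illustration of the Web Linking option, a server might advertise a visitor-facing MyTerms document in its response headers. The sketch below is hypothetical: the rel value, terms URI, and well-known path are illustrative choices, not anything specified by IEEE P7012.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative only: neither this rel value nor this URI is defined
# by IEEE P7012; they show the Web Linking (RFC 8288) pattern.
MYTERMS_URI = "https://example.com/.well-known/myterms.json"

class MyTermsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Link", f'<{MYTERMS_URI}>; rel="privacy-terms"')
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello under my terms.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MyTermsHandler).serve_forever()
```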

Please give me additions, corrections, and improvements.  And forgive the need for all of those changes. I think it’s important at this stage to get a list of possible protocols out there, and to get the discussion rolling. Thanks!


OwnYourData

OwnYourData @ Semantics 2025

OwnYourData @ Semantics 2025: Privacy-preserving Data Sharing for Renewable Energy Communities

On September 3, 2025, we took part in Semantics 2025 in Vienna – specifically at the 2nd NeXt-Generation Data Governance (NXDG) Workshop. Together with our partners, we presented results from a current research project:

“A Configurable Anonymisation Service for Semantically Annotated Data: A Case Study on REC Data”.

Why anonymisation matters

With the energy transition and the growing adoption of Renewable Energy Communities (RECs), there is an increasing demand for data exchange – both within energy communities and with external stakeholders. However, energy data is highly sensitive: smart meter data can reveal detailed behavioral patterns.

The challenge: How can energy data be shared without compromising privacy?

Our solution: A configurable anonymisation service

As part of the USEFLEDS project, we developed an open, online anonymisation service that integrates seamlessly with our Semantic Overlay Architecture (SOyA).

Key features:

Semantic annotation of data to make privacy rules explicit and machine-readable.

Rule-based anonymisation pipelines that automatically apply generalisation and randomisation.

Configurable via YAML files, without requiring advanced programming knowledge (see the sketch below).

Available as SaaS or On-Premises – with open-source code and Docker images for maximum transparency and reproducibility.

Try it now: anonymiser.ownyourdata.eu
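To illustrate the rule-based idea from the feature list above, here is a rough sketch. It is not OwnYourData’s implementation: the rule schema, field names, bucket sizes, and noise levels are illustrative assumptions, and in the real service the rules would be loaded from a YAML file rather than an inline dict.

```python
# A minimal sketch of a rule-driven anonymisation pipeline: generalisation
# buckets numeric values, randomisation adds bounded noise, and direct
# identifiers are dropped. All names and parameters are illustrative.
import random

# In the real service this would come from a YAML file
# (e.g. yaml.safe_load(open("rules.yaml"))); inlined here for brevity.
RULES = {
    "household_size": {"method": "generalise", "bucket": 2},   # 3 -> "2-3"
    "kwh_consumed":   {"method": "randomise", "noise": 0.05},  # +/- 5%
    "meter_id":       {"method": "drop"},                      # direct identifier
}

def generalise(value: float, bucket: int) -> str:
    low = (int(value) // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def randomise(value: float, noise: float) -> float:
    return round(value * (1 + random.uniform(-noise, noise)), 2)

def anonymise(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        rule = RULES.get(field, {"method": "keep"})
        if rule["method"] == "generalise":
            out[field] = generalise(value, rule["bucket"])
        elif rule["method"] == "randomise":
            out[field] = randomise(value, rule["noise"])
        elif rule["method"] == "drop":
            continue  # remove direct identifiers entirely
        else:
            out[field] = value
    return out

print(anonymise({"meter_id": "AT0031", "household_size": 3, "kwh_consumed": 412.7}))
```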

Evaluation: Privacy vs. data value

For the evaluation of the service, we worked with synthetic datasets representing energy communities in Burgenland, Austria. The main question was how to achieve a sufficient level of protection without destroying the analytical utility of the data. The evaluation was based on k-anonymity, complemented by a similarity measure to also assess the effectiveness of randomisation. The results show that sufficient anonymisation was achieved in all tested scenarios: no record remained uniquely attributable to a single person, while the data could still be used meaningfully for analysis. This demonstrates that our approach provides a solid balance between privacy protection and data value.
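For readers unfamiliar with the measure: a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records, so no record is unique on those attributes. A toy illustration follows; the rows, quasi-identifiers, and values are invented and differ from those in the paper.

```python
# Toy k-anonymity check: k is the size of the smallest group of records
# that share the same quasi-identifier values.
from collections import Counter

def k_anonymity(rows: list, quasi_identifiers: list) -> int:
    """Return the k of the table (minimum equivalence-class size)."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

rows = [
    {"region": "Burgenland", "household_size": "2-3", "kwh": 410},
    {"region": "Burgenland", "household_size": "2-3", "kwh": 398},
    {"region": "Burgenland", "household_size": "4-5", "kwh": 655},
    {"region": "Burgenland", "household_size": "4-5", "kwh": 687},
]
print(k_anonymity(rows, ["region", "household_size"]))  # 2: no row is unique
```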

Embedding in the European regulatory framework and outlook

The developed solution is closely aligned with European regulations. While the GDPR (2018) emphasizes the protection of personal data and the rights of individuals, the Data Governance Act (2022) focuses on trustworthy ecosystems and the role of neutral intermediaries. Anonymisation serves here as a key instrument to connect both levels: it protects individual privacy while at the same time providing the foundation for secure, compliant data sharing within data intermediaries and energy communities. This dual effect makes the approach highly relevant in practice.

In the next phases of the project, we plan to further expand the anonymisation service. This includes the integration of additional techniques to handle different data types more flexibly. In addition, the system will provide key performance indicators and risk assessments directly with the results, enabling users to better understand the effectiveness of the applied anonymisation. Finally, we will investigate how the service performs in large-scale production environments to ensure its suitability as a building block in real-world data intermediary infrastructures.

Workshop Paper:

A Configurable Anonymisation Service for Semantically Annotated Data: A Case Study on REC Data (PDF)

The post OwnYourData @ Semantics 2025 appeared first on www.ownyourdata.eu.

Friday, 05. September 2025

DIF Blog

DIF Newsletter #53


September 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Launches New Trusted AI Agents Working Group

DIF is launching the Trusted AI Agents Working Group, chaired by Nicola Gallo, Andor Kesselman, and Dmitri Zagidulin, to define an interoperable stack for trustworthy, privacy-preserving, and secure AI agents. As autonomous agents gain real-world responsibility in composing tools, making decisions, and exchanging verifiable data, this working group will build specifications, reference implementations, and governance patterns for enabling high-trust agent ecosystems using robust mechanisms for identity, authority, and governance.

The initial focus of the group will be exploratory and experimental, starting from use cases and a taxonomy of delegation mechanisms and patterns. The first formal work item will be a report evaluating use cases for agents (that make explicit dynamics of authorization), primarily focusing on the object capability school of distributed authorization. Future work may include standards for agentic identification mechanisms (applying DID and Verifiable Credential prior art), interoperability libraries for popular LLM frameworks, runtime trust enforcement workflows, and prototyping of human-to-agent delegation patterns. The working group welcomes in particular active participation from specification authors, security researchers, LLM infrastructure maintainers, and identity experts as it addresses the unique requirements of agentic workflows.
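For readers unfamiliar with the object-capability school mentioned above, a toy sketch of the core idea follows. This is illustrative only, not a DIF specification or work item: authority travels with an object reference, and delegation hands over an attenuated wrapper of that reference.

```python
# Toy object-capability sketch: holding a reference conveys authority;
# delegation passes an attenuated wrapper rather than a blanket credential.
class FileCap:
    """Full authority over a (pretend) file."""
    def __init__(self, path: str):
        self._path = path
        self._data = ""
    def read(self) -> str:
        return self._data
    def write(self, text: str) -> None:
        self._data = text

class ReadOnlyCap:
    """Attenuated capability: wraps FileCap but exposes only read()."""
    def __init__(self, cap: FileCap):
        self._cap = cap
    def read(self) -> str:
        return self._cap.read()

owner = FileCap("/tmp/report.txt")
owner.write("q3 numbers")
agent = ReadOnlyCap(owner)           # human-to-agent delegation: read-only
print(agent.read())                  # the agent can read
assert not hasattr(agent, "write")   # but cannot write
```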

Stay tuned for meeting information!

DIF Labs Beta Cohort 2 Show & Tell

As we enter September, DIF Labs Beta Cohort 2 projects are in their final development phase, preparing for their showcase event later this month. The three selected projects have made significant progress on cutting-edge privacy-preserving technologies. Each project team has been receiving dedicated mentorship from DIF's expert community and is committed to delivering standards, protocols, implementations, or reports that will benefit the entire decentralized identity ecosystem.

👉 Register for DIF Labs Show & Tell, open to the public!

Join DIF Leadership: Executive Director & Steering Committee

This is your opportunity to shape the future of DIF! We have an upcoming Steering Committee election AND we're accepting applications for the next Executive Director term!

Steering Committee: The Steering Committee plays a crucial role in defining DIF policies and strategy. We will have 6 seats open for election. Stay tuned for election details, including the call for nominations (all members), the nominees’ “platform” statements, and the voting process (Associate Members, one ballot per organization). Don’t miss your chance to participate in this important process that will guide DIF’s future.

Executive Director: As we move towards the end of the 2025 Executive Director term, and current ED Kim falls sway to the siren call of building, we have an exciting opportunity to set the direction of DIF! We are accepting applications now. See application details here, and send questions to jobs@identity.foundation.

Introducing “Scaling AI DIFferently”: Misha Deville’s New Blog Series on AI and Decentralized Identity

DIF Ambassador and Vidos Co-Founder Misha Deville has launched her blog series "Scaling AI DIFferently", exploring the critical intersection of artificial intelligence and decentralized identity technologies.

Misha examines how decentralized identity infrastructure serves as the foundation for building trust and accountability in autonomous AI systems. As AI agents become increasingly sophisticated, she investigates the essential role that verifiable digital identities play in ensuring responsible AI deployment at scale.

In this thought-provoking series, Misha addresses fundamental questions about identity verification, trust frameworks, and accountability mechanisms in AI-driven environments—tackling challenges at the heart of next-generation digital infrastructure.

Discover Misha's insights on how decentralized identity standards can help shape a more trustworthy AI future. Start reading here.

🛠️ Working Group Updates

Browse our active working groups here

Hospitality & Travel Working Group

The Hospitality & Travel Working Group has emerged as one of DIF's most active groups, conducting extensive work on the HATPro specification for traveler preferences and identity management. The group is currently developing comprehensive schemas for food preferences, dietary restrictions, accessibility requirements, and pet-related travel information, and engaging subject matter experts across various travel sectors. Subscribe to the H&T Working Group blog for updates.

👉 Learn more and get involved

Creator Assertions Working Group

The Creator Assertions Working Group made significant strides in media industry identifier standards and identity assertion frameworks. Major developments include:

Media Identifier Guidance: Developed comprehensive guidance for using metadata assertions with media industry identifiers, focusing on Dublin Core metadata fields and external authoritative data sources

Identity Assertion Evolution: Continued refinement of identity assertion requirements to support manifest updates and improve data-model flexibility

The group's work on attribution assertions and metadata standards continues to advance the state of content authenticity and provenance tracking.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ work item achieved significant milestones in privacy-preserving cryptographic protocols:

Pseudonym Systems: Advanced work on cryptographic pseudonyms and blind signatures, with collaboration on general articles about cryptographic privacy techniques

Performance Optimization: Completed performance testing and optimization work, with Rust implementations showing significant speed improvements over JavaScript

Post-Quantum Research: Discussed post-quantum cryptography implications and integration with existing BBS+ protocols

Test Vector Development: Continued development of comprehensive test vectors for the 0.9 draft release

The team's focus on practical implementation and standardization continues to drive adoption of privacy-preserving credential technologies.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused on standardization processes and method evaluation refinement:

W3C Charter Development: Continued work on the proposed W3C DID Methods Working Group charter, addressing concerns about scope and blockchain inclusion

Method Evaluation Process: Refined the DIF recommendation process for DID methods, with did:webvh currently in its 60-day comment period

Method Champion Coordination: Established clearer processes for method champions and active development requirements

Future Method Pipeline: Identified did:webplus, did:cheqd, and did:scid as upcoming methods for evaluation

The group's DID method evaluation process ensures high-quality standards while maintaining transparency in the assessment process.

👉 Learn more and get involved

Identifiers and Discovery Working Group

The DIF Identifiers and Discovery Working Group advanced work on did:webvh (did:web + Verifiable History) and a new DID attested resources specification. Key progress includes deploying a sandbox server for did:webvh testing and proposing changes to DID resolution specifications to improve URL handling. The team discussed simplifying the webvh specification while preserving core functionalities and explored implementation details for blockchain DIDs generally, as well as key rotation patterns (and verification patterns for rotated keys) in verifiable credentials, with plans to continue collecting feedback through September 28th.
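For orientation on the naming: did:webvh layers a cryptographically verifiable history log on top of did:web, whose base resolution rule simply maps an identifier to the HTTPS URL where the DID document lives. Below is a minimal sketch of that did:web transformation only; full did:webvh resolution involves more steps.

```python
# Sketch of the did:web identifier-to-URL transformation rule.
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """Map did:web:<host>[:path:segments] to its DID document URL."""
    assert did.startswith("did:web:")
    # Colon-separated segments; percent-decoding allows ports like %3A3000.
    segments = [unquote(s) for s in did[len("did:web:"):].split(":")]
    host, path = segments[0], segments[1:]
    if path:
        return f"https://{host}/{'/'.join(path)}/did.json"
    return f"https://{host}/.well-known/did.json"

print(did_web_to_url("did:web:example.com"))             # .well-known form
print(did_web_to_url("did:web:example.com:user:alice"))  # path form
```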

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced protocol development and explored new application areas:

Binary Encoding Support: Progressed work on CBOR implementation for more efficient message encoding

AI Agent Communications: Explored applications of DIDComm for AI agent-to-agent communications and autonomous system interactions

Protocol Comparisons: Conducted detailed analysis comparing DIDComm with other protocols, such as OpenID for VC, in various use cases

Supply Chain Applications: Discussed new protocols for supply chain data management and product-recall systems

👉 Learn more and get involved

Claims & Credentials Working Group

The Credential Schemas team made significant progress on standardization and community engagement:

Community Schemas Initiative: Launched framework for organizations to contribute verifiable credential schemas to a shared repository

Schema Standardization: Advanced work on aligning basic “person” schemata with schema.org standards while maintaining compatibility with existing frameworks

👉 Learn more and get involved

DIF Labs Working Group

DIF Labs Beta Cohort 2 projects entered their final development phase with impressive progress:

QES Project: Advanced development of Qualified Verifiable Credentials combining legal enforceability with privacy preservation

Revocation Analysis: Completed comprehensive analysis of privacy-preserving revocation mechanisms with practical implementation guidance

Anonymous Multi-Sig: Progressed work on anonymous multi-signature verifiable credentials for group-authentication scenarios

All projects are preparing for their final showcase event in September, with open-source implementations ready for community adoption. Register for DIF Labs Show & Tell

👉 Learn more and get involved

🌎 Special Interest Group Updates

Browse our special interest groups here

DIF Africa SIG

The Africa SIG continued its focus on identity challenges and solutions in Africa and beyond. The recent meeting featured the "Has Needs" project, a resource sharing protocol designed for humanitarian spaces, building on work since the Haiti earthquake of 2010.

The SIG's focus on humanitarian applications showcases the potential for decentralized identity to address critical social challenges.

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted comprehensive discussions on regional digital identity initiatives, including:

Korean Foreign Visitor System: Detailed presentation on digital identity solutions for foreign visitors to South Korea, including visa digitization and medical tourism applications

Regional Interoperability: Explored challenges and opportunities for cross-border identity verification and credential recognition

Privacy Frameworks: Discussed regulatory approaches across different jurisdictions and their implications for digital identity adoption

Medical Tourism Applications: Examined specific use cases for digital credentials in healthcare and travel scenarios

The group's focus on practical, cross-border applications demonstrates the global relevance of decentralized identity solutions.

👉 Learn more and get involved

DIF Hospitality & Travel SIG

The Hospitality & Travel SIG showcased cutting-edge applications of decentralized identity in travel:

Agentic Commerce Platform: Featured comprehensive demonstration of AI-powered travel booking systems using decentralized identity for secure agent-to-agent communications

Corporate Travel Innovation: Explored applications for corporate travel management with AI agents handling complex booking scenarios

Industry Integration: Demonstrated practical implementations with major travel industry partners and payment systems

Future Vision: Outlined roadmap for AI-driven, personalized travel experiences built on decentralized identity foundations

The SIG's presentations highlighted the transformative potential of combining AI agents with decentralized identity for seamless travel experiences.

👉 Learn more and get involved

📖 User Group Updates

DIDComm User Group

The DIDComm User Group explored practical implementations and emerging applications:

AI Integration: Extensive discussions on integrating DIDComm with AI agents and home automation systems

Technical Compatibility: Clarified DIDComm’s compatibility with various cryptographic methods, including BBS+ signatures

Bluetooth Applications: Explored local mesh networking applications and proximity-based communications

Implementation Challenges: Addressed practical questions about protocol implementation and deployment

The user group continues to drive practical adoption of DIDComm technologies.

👉 Learn more and get involved

📢 Announcements

Join Steven McCown at Identity Week America

Steven McCown, Chief Architect at Anonyome Labs and DIF leadership member, will be speaking at Identity Week America on Day 2 at 11:00 with "Digital identity: It may be secure, but will it protect your privacy?"


Steven will examine how "modern identity systems authenticate users with advanced identity, encryption, and communication technologies" yet still impact user privacy. His presentation will illustrate the privacy risks of "phone home" architectures and demonstrate how decentralized identity "helps platform providers overcome these problems through peer-to-peer identifiers and verification methods that don't require issuer participation."

The session will also explore how recent Utah legislation is creating new models for privacy-oriented digital identity in government systems.

Don't miss this essential discussion on building truly privacy-preserving identity solutions.

Join Nick Price and Doug Rice at HTNG Connect Europe

Join DIF H&T community leaders Nick Price and Doug Rice at HTNG Connect Europe in The Hague for two sessions on decentralized identity in travel and hospitality! Nick Price, CEO of NetSys Technology Ltd., will present "Redefining Trust: The Role of Decentralized Identity in the Future of Hospitality" (12:30-13:00, Mesdag Ballroom), exploring how decentralized digital identity offers a new paradigm for secure, guest-centric experiences and seamless travel solutions.

Doug Rice, chair of the DIF Hospitality & Travel Working Group, will join Nick for the interactive breakout session "Decentralized Identity in Action: Use Cases, Standards, and Industry Collaboration" (14:00-14:45, Red Room). This will be an open discussion on real-world pilots, emerging use cases, and the development of standard traveler profile schemas designed to support interoperability and privacy across the travel ecosystem.

Blog Series Exploring AI and Decentralized ID

DIF has kicked off the "Scaling AI DIFferently" blog series featuring guest articles from DIF Ambassador and Vidos Co-Founder Misha Deville on decentralized identity's critical role in building trust in agentic systems. Start here.

DIF Labs Beta Cohort 2 Final Showcase - September 23rd

Mark your calendars for the DIF Labs Beta Cohort 2 final showcase on September 23rd. Each project team will have 7 minutes to present their work followed by 8 minutes of Q&A. The event will feature:

Live demonstrations of privacy-preserving credential technologies

Open-source implementations ready for community adoption

Learning opportunities with DIF’s expert community

Register here: https://luma.com/849iikfj

📅 Community Events

Upcoming Industry Presentations

DIF community members will be presenting at several major industry events in September and October:

HTNG Connect in The Hague: Nick Price and Douglas Rice are presenting at HTNG Connect Europe.

Identity Week America: Steve McCown is speaking at Identity Week America.

IIW October 2025: Many in the DIF community will be attending the Internet Identity Workshop in October.

Community Contributors

We continue to appreciate the dedicated contributions of our community members who drive DIF's technical work forward. Special recognition goes to the chairs and active participants of our most active working groups, whose consistent engagement and technical expertise make our progress possible.

🆔 Get involved! Join DIF

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

🐦 Follow us on Twitter/X

💻 Join us on GitHub

📺 Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation if you need more information.


ResofWorld

LLMs can never be fair and other scientific AI findings

Here’s a roundup of research, including that AI companions are emotional manipulators and whether AI voice agents make good recruiters (they do).
Artificial intelligence is one of the most researched fields in technology today. As more companies launch large language models and more users embrace generative AI, researchers are studying their potential...

Thursday, 04. September 2025

EdgeSecure

Accessibility Priorities for 2025: Policy, Procurement, and Practice


Webinar
September 25, 2025
1:00 PM ET

Is your institution ready for the next wave of digital accessibility expectations? In this webinar, Dr. Laura Romeo, Director of Digital Learning Innovation, Scholarship, and Educational Services at Edge, will walk through the 2025 action items every college should be prioritizing — from self-evaluations and procurement reviews to policy adoption, training, and barrier-reporting. We’ll break down what these steps mean, where institutions often get stuck, and how Edge can help strengthen your digital infrastructure. You’ll leave with a clear roadmap to meet today’s compliance needs and position your campus for long-term success in 2026 and beyond.

Key Takeaways:

A practical framework for 2025 digital accessibility action items institutions can act on now Insights into common pitfalls that derail accessibility progress — and proven ways to avoid them Strategies to align procurement, policy, and training with WCAG 2.1 AA standards Guidance on building sustainable digital infrastructure that positions your campus for compliance and innovation in 2026/27.

Presenter: Dr. Laura Romeo, Director of Digital Learning Innovation, Scholarship, and Educational Services at Edge

Register Now »

The post Accessibility Priorities for 2025: Policy, Procurement, and Practice appeared first on NJEdge Inc.


FIDO Alliance

PC Mag: Lose Your Device, Lose Your Accounts? Not If You Back Up Your Passkeys


Passkeys are more secure than passwords since they’re tied to a device, but what if you lose your phone? The trick lies in how you generate passkeys in the first place.

Imagine you’ve created a host of passkeys on your iPhone or Android phone. If you lose your phone or it no longer works, what happens to the passkeys on your device? Are they gone? Can you get them back? Fear not. There’s a way to set up your passkeys so that they’re tied to your account and can follow you wherever you go.

The trick lies in how you generate passkeys in the first place. By using a password manager that supports passkeys, Google Password Manager, or Apple’s iCloud Keychain, you can save passkeys to your account and sync them across all your devices. If you lose your phone or upgrade to a new one, your passkeys will be available once you sign in. Here’s how this works.


Velocity Network

Enabling the Golden Age of Workforce Data

The post Enabling the Golden Age of Workforce Data appeared first on Velocity.

ResofWorld

EV jobs are starting to slip away

Legacy brands and startups alike grapple with slowing growth.
The electric vehicle industry is cutting staff faster than smoke. More than 30,000 workers have lost their positions at electric-vehicle makers and related industries worldwide over the past year, according...

Wednesday, 03. September 2025

FIDO Alliance

Wired: How Passkeys Work—and How to Use Them


Passkeys want to create a password-free future. Here’s what they are and how you can start using them.

Passwords suck. They’re hard to remember, but worse is playing the ever-evolving game of cybersecurity whack-a-mole with your most important accounts. That’s where passkeys come into play. The so-called “war on passwords” has taken off over the past two years, with titans like Google, Microsoft, and Apple pushing for a password-less future that the FIDO Alliance (a consortium made to “help reduce the world’s over-reliance on passwords”) has been trying to realize for over a decade.

Like it or not, you’ll be prompted to create a passkey at some point, and you likely already have. That’s a good thing, as passkeys aren’t only much easier to use than a traditional password, they’re also a lot safer. Here’s everything you need to know about using them.


EdgeSecure

Powering Collaborative Innovation and Enabling AI Research


At Edge, we are transforming New Jersey’s research and education landscape by creating a connected, resource-sharing environment that empowers discovery across all disciplines. Our mission: democratize access to cyberinfrastructure and accelerate innovation—no matter the size or resources of the institution.

Through visionary initiatives like our NSF-funded CRISPIE program, Edge is:

Enhancing network connectivity across campuses with regional Science DMZs and next-generation infrastructure

Enabling cutting-edge AI research by supporting seamless data movement, scalable compute access, and training for faculty and students

Promoting equity in research by bringing underserved and under-resourced institutions into the national innovation ecosystem

Building a collaborative foundation with tools like Globus and perfSONAR that support data-intensive research and real-time analysis

Fostering collaboration and secure access through participation in the InCommon Federation, allowing seamless, federated identity management across research networks

Edge is committed to making advanced cyberinfrastructure accessible to all, so every institution can fully participate in the opportunities of the AI era and beyond.

Connect. Collaborate. Innovate—with Edge.

The post Powering Collaborative Innovation and Enabling AI Research appeared first on NJEdge Inc.


Inside SHI International’s AI and Cyber Labs


SHI International Corp., a global leader in technology solutions, recently announced its cutting-edge AI & Cyber Labs facility in Piscataway, New Jersey, designed to accelerate business innovation and guide organizations through every stage of their AI journey, from ideation to adoption. “The introduction of SHI’s AI & Cyber Labs will empower many organizations to bring both clarity and speed to their AI efforts, helping them identify use cases with strong ROI and the best way to achieve the shortest time to value,” shares Thai Lee, President and CEO of SHI. “With the new AI & Cyber Labs, we are fulfilling our ongoing promise to help our customers solve what’s next.”

Bringing AI Use Cases to Life

The state-of-the-art facility provides cutting-edge generative AI solutions that can help guide customers through initial ideation and prototype development to the deployment of large-scale datacenter infrastructure. “The core purpose of the lab is to help our customers bring their AI use cases to life—use cases that are focused on solving real business problems,” explains Lee Ziliak, Field Chief Technology Officer and Managing Director of Architecture at SHI International. “We’re here to help them iterate quickly and move from concept to production faster than traditional models allow.”

SHI’s approach, known as Imagine, Experiment, Adopt, supports customers no matter where they are in the AI journey. “In the ‘Imagine’ phase, we sit down with customers to explore what’s possible with AI based on their specific business challenges,” shares Ziliak. “Some are very advanced in their journey, while others are just getting started. We work closely with them to define the use case and assess whether it’s something we can build and test in the lab. The ‘Experiment’ phase is where the lab really shines. We bring those ideas to life, typically in two to six weeks, which is a key differentiator. We’re not spending a year solving a problem. Even with complex challenges, we break them down into manageable pieces and iterate quickly to deliver measurable value.”

Beyond rapid prototyping, Ziliak emphasized the value of testing assumptions before committing major resources. “One of the most valuable things we do is helping customers avoid costly mistakes. We’ve had situations where we’ve proven a use case works, and others where we’ve shown it doesn’t. That insight can save millions in unnecessary investments in infrastructure or development. We’ve also had cases where a customer thought their data could support a use case, but once we got into the experiment phase, it became clear it couldn’t. That’s a critical part of what the lab offers, validating what works, but also uncovering what won’t before major investments are made.”


Over the course of a typical engagement, the lab delivers a tangible output: a minimum viable product (MVP). “At the end of that experiment phase, we deliver an MVP,” Ziliak explains. “That MVP can then be taken into production, and we can assist with that, whether it’s on-premises or in the Cloud. Once the experiment phase is completed, we can help customers scale that into production as part of the ‘Adopt’ phase. As a top reseller of software and hardware, we can help deploy the solution on the right infrastructure, including a physical environment, cloud-based, or hybrid.”

SHI’s AI & Cyber Labs are built to reflect the diverse environments customers may use in the real world and are equipped with on-prem hardware from nearly every major AI provider using NVIDIA-accelerated infrastructure. “We have DGX-based systems, Dell AI Factories, HPE AI infrastructure, Lenovo systems, and storage solutions from vendors like DDN, NetApp, and Pure,” says Ziliak. “The idea is to match each use case to the right infrastructure to see how that workload performs in a real-world environment. We also leverage various cloud providers to create landing zones wherever our customers want to run their workloads. Depending on the scale, a solution might start small or move into supercomputing-class infrastructure. With a variety of GPUs and platforms available, we help size workloads correctly and demonstrate performance before going live. This flexible infrastructure is designed to help customers bring their use cases to life, size them appropriately, and scale into production. Whether you’re just starting your AI journey or looking to deploy enterprise-grade workloads, we’re here to help you get there with clarity and speed.”

Solving Real-World Problems

SHI’s AI & Cyber Labs are powered by advanced NVIDIA infrastructure, reflecting SHI’s growing role as a strategic NVIDIA partner in delivering end-to-end AI solutions. “When people think of AI, it’s very natural to think of NVIDIA, since they are one of the pioneers in the space, especially from a hardware and software perspective,” notes Ziliak. “As a complete stack of infrastructure designed to support AI, NVIDIA includes accelerated hardware and NVIDIA AI Enterprise (NVAIE) software—a comprehensive suite of development tools that enables us and our customers to build and run AI workloads effectively. The partnership we’ve built with NVIDIA really took shape last year, and both SHI and NVIDIA have leaned in, investing not just in infrastructure for the lab, but in people and training to ensure we can deliver true AI expertise.”

SHI has trained hundreds of team members through NVIDIA’s AI Advisor and advanced certification programs, enabling the company to deliver enterprise-grade AI support from ideation to implementation. “NVIDIA has recognized us as one of their top five partners,” shares Ziliak. “They’ve even called SHI their most complete partner, not just because of our infrastructure capabilities, but also due to our full-stack development expertise. The lab isn’t just about testing infrastructure, it’s also where we build. We have a team of over 120 professionals, including data scientists, MLOps engineers, application developers, and UX specialists who can bring customer use cases to life.”

SHI is also one of a select few NVIDIA SuperPod partners capable of delivering NVIDIA’s most powerful infrastructure solutions. “At the end of last year, NVIDIA announced that their SuperPods, previously deployed only by NVIDIA directly, would be made available through key partners,” explains Ziliak. “SHI was chosen as one of those partners, which is a testament to our capabilities and commitment. For SHI and NVIDIA, there’s investment on both sides and it’s all focused on bringing AI use cases to life for our customers. It’s been a strong partnership so far and we’ve helped each other open new doors, collaborate on opportunities, and most importantly, solve real customer problems.”

SHI’s partnership with NVIDIA helps customers across many sectors, including healthcare, education, and enterprise. “The advantages to our clients are manyfold,” says Ziliak. “NVIDIA brings an incredible suite of hardware and software solutions to the table, but just as important, they bring unmatched expertise. They’re on the cutting edge of developing the technologies that power AI workloads today. When you pair NVIDIA’s innovation and tools with SHI’s enterprise know-how and development capabilities, the result is a complete, end-to-end solution. I can honestly say we haven’t been presented with a problem that we couldn’t solve through this partnership. This type of collaboration is so exciting; we’re not just deploying technology, we’re solving real business problems every day.”

The Power of Visionary Leadership

In thinking about SHI’s leadership that continues to drive their future-forward mission, Ziliak reflects on the role of CEO, Thai Lee, whose vision and passion has helped shape SHI into the largest woman-owned business in the United States. “Before I ever worked here, I was actually an SHI customer,” shares Ziliak. “I spent 26 years at Verizon, and during that time, I worked closely with SHI, including Melissa Graham, who was employee number one after Thai Lee. Whatever problem we brought to SHI, they always found a way to solve it. Thai has been in this business for over three decades, and the company’s laser focus on solving customer problems begins with her. She has embedded this into the culture, and it shows up in every person across the organization. Thai also believes diverse perspectives lead to better problem solving. If everyone approaches things the same way, you get the same results. At SHI, we welcome new ideas and different ways of thinking, and that helps us keep delivering for our customers.”

Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge, adds, “SHI International is a key innovation leader rooted right here in New Jersey. The launch of its AI & Cyber Labs, under the strategic leadership of Thai Lee and technical guidance of Lee Ziliak, represents a powerful commitment to accelerating applied AI and cybersecurity solutions. These labs offer immense opportunities for collaboration across sectors, including higher education and research. As we build regional AI and Quantum hubs, SHI’s expertise and infrastructure in partnership with Research and Education Networks, such as Edge, can play a vital role in shaping a more connected, secure, and forward-looking innovation ecosystem.”


As the landscape of higher education continues to evolve rapidly, often with tight budgetary and operational challenges, SHI sees the sector not just as a customer base, but as a strategic partner in innovation and growth. “We love our higher ed customers and have built an entire team dedicated to supporting this sector,” says Ziliak. “These institutions are often the source of thought leadership, and staying connected to that gives us insights that help shape SHI’s future. We partner with universities to help turn their research into real solutions. That’s where SHI’s AI & Cyber Labs come into play, by offering infrastructure, resources, and expertise to help higher ed institutions develop and test their use cases, especially in fields like health and life sciences. We go beyond selling products, we’re making actual investments in some of the universities we work with. Whether that’s through lab resources, technical support, or co-developing solutions, we see it as a true partnership.”

SHI’s approach to higher education is built on a symbiotic relationship that supports both innovation and real-world application across research, operations, and the student experience. “We’re here to help our customers solve problems, and colleges and universities have some unique challenges,” says Ziliak. “It’s not just about the research happening on campus; universities are also large, complex businesses. We look at both sides: how AI can support their academic innovation, and how it can improve operational efficiency.”

“We’re actively partnering with several institutions right now,” continues Ziliak. “In some cases, we’re investing in bringing their AI use cases to life. In others, we’re working on the business side, like improving the student experience or helping identify students who might be struggling before they fall behind. AI can help with things like streamlining prerequisites, enhancing course success rates, or even back-office functions like fraud detection. We also recently launched our Digital Ambassador, which is a web- or kiosk-based chatbot that features a digital human interface. You can walk up and ask questions like, ‘Where’s the registrar’s office?’ It’s all designed to improve the user experience, and I think students, especially digital natives, will be comfortable engaging with that kind of interface.”

Expanding Access to AI and Cybersecurity Innovation

SHI’s lab environment is designed to support not just AI experimentation, but also the increasing need for robust cyber defense. “AI & Cyber Lab is not just a good tagline, we built our AI Lab directly on top of our Cyber Lab,” says Ziliak. “Everything we do is secured, from the data access center where the internet enters the building, down to GPU-level isolation. And we do it in close partnership with our field CISOs to ensure every layer is protected. As AI introduces new threat vectors, the need for secure experimentation environments becomes more urgent and SHI’s lab addresses this reality head on. Things like prompt injection attacks on large language models (LLMs) weren’t even on the radar a few years ago. Now, we have to think about how data from different silos gets synthesized by AI, and how to protect that newly combined data set as a unique, secure domain.”


Ziliak continues, “AI is being used to do amazing things like accelerate medical research, but it’s also being weaponized by threat actors. Our lab lets us test and validate AI-powered security solutions, so we can help customers protect their environments against increasingly intelligent and adaptive threats. In today’s digital landscape, AI and cybersecurity are inextricably linked. You can’t innovate responsibly in AI without addressing how to secure it, and that’s exactly what our lab is designed to do.”

With AI and cybersecurity increasingly intertwined, customers are bringing more complex and nuanced challenges into SHI’s AI & Cyber Lab. “One of the most common problems we hear about is prompt injection,” explains Ziliak. “That’s when users try to manipulate an AI system, usually by finding clever ways to bypass its guardrails and get it to do something it shouldn’t. For example, attempting to get an LLM to respond to a prohibited prompt, like providing instructions for illegal or harmful activity, by disguising the request in roundabout language. Many organizations now rely on SHI’s lab to test the integrity of their AI systems, fine-tune protections before deployment, and actively strengthen their security postures.”

SHI also helps clients use AI to detect and respond to threats and to secure their infrastructure. “For instance, using NVIDIA Morpheus to scan logs for anomalies—either in real-time or after an event—can catch things traditional tools might miss,” notes Ziliak. “Organizations can also turn to SHI’s AI & Cyber Lab to explore AI-infused cybersecurity software or test an ‘applied AI’ solution in a safe, scalable space. No matter if you’re building something from scratch or validating a third-party tool, the lab allows us to pressure test those solutions in a realistic, secure, and accelerated environment.”

As SHI continues to push the boundaries of emerging technologies, quantum computing is firmly on the radar, but with a grounded, practical focus. “While large-scale quantum computing is still a few years away, SHI is already taking proactive steps to help customers prepare for the coming shift,” explains Ziliak. “Right now, we’re actively working in the post-quantum cryptography space. The concern is that once quantum computing becomes viable, the encryption standards we rely on today could be broken almost instantly. SHI is already testing and validating encryption products designed to be resilient in a post-quantum world. As far as actually testing quantum computers or doing more direct work, we’re not there yet, but with the pace of innovation, I wouldn’t be surprised to see real-world quantum use cases start to appear in the next three to five years. There’s definitely a lot of investment flowing in from startups and governments. Even the UN is calling this the ‘year of quantum.’ For now, it’s about readiness, and that means helping organizations protect their data today from the threats of tomorrow.”

As New Jersey accelerates its investment in emerging technologies through initiatives like the NJAI Hub, SHI sees a natural opportunity to integrate its AI & Cyber Labs into the state’s growing innovation ecosystem. “We see a lot of value in partnering with these initiatives and we’re actively exploring how we can contribute by providing hardware, but more importantly, by offering our expertise,” shares Ziliak. “Whether that’s data scientists or infrastructure specialists, it’s an investment we’re making to help support and scale these environments effectively. We’re also exploring lead times. As research projects spin up, hardware may not be readily available. We’re identifying ways SHI can help these hubs accommodate workloads while their infrastructure catches up. That includes multi-tenancy strategies, GPU isolation, and advanced configurations that require hands-on experience.”


This focus on collaboration and expertise naturally extends to research and education networks, which play a vital role in expanding access to AI and cybersecurity innovation. Ziliak sees these collaborative environments as essential to progress. “One of the greatest strengths of the higher education community is its collaborative spirit. Networks like Edge and CENIC enable institutions to pool resources, share ideas, and collectively overcome funding and infrastructure challenges. SHI sees real value in supporting and contributing to these efforts. Sometimes the most impactful thing we can do is help raise awareness, whether it’s about available technologies, new approaches, or shared infrastructure. That kind of knowledge exchange accelerates innovation across the board, and is not just about advancing a single institution’s goal, but about moving the entire field forward.”

The post Inside SHI International’s AI and Cyber Labs appeared first on NJEdge Inc.


Expanding Access to Data-Driven Discovery through Shared Infrastructure

The post Expanding Access to Data-Driven Discovery through Shared Infrastructure appeared first on NJEdge Inc.

As the educational landscape rapidly shifts toward data-driven and computational learning, one expert’s unconventional journey into distributed computing is helping shape the future of STEM education. Frank Wuerthwein, Ph.D., Director of the San Diego Supercomputer Center (SDSC), Executive Director of the Open Science Grid (OSG), and Professor of Physics and Data Science at UC San Diego, describes himself as an experimental particle physicist by training who unexpectedly found his way into large-scale distributed computing more than 25 years ago.

This shift in focus has been both deliberate and deeply practical. Around three years ago, Wuerthwein noticed a growing and largely unmet need, not just in research institutions, but across the wider landscape of higher education. “Among the nearly 4,000 accredited institutions of higher learning in the U.S., fewer than 200 are classified as research-dominant (R1),” says Wuerthwein. “The 3,800 that are not R1s have education as a much stronger focus than research.”

Wuerthwein also observed how STEM education was evolving from chalkboards to cloud infrastructure. “More and more STEM education requires the students to at least have a laptop, and sometimes even that is not sufficient,” shares Wuerthwein. “They use their laptop as an entry point to Jupyter through institutional compute infrastructure. And in the age of AI and large language models (LLMs), this dependency has only grown. AI is fundamentally an experimental science, and educating the next generation in this field requires not just theory, but hands-on experience with both data and computing infrastructure.”

Wuerthwein says many institutions are undergoing a radical shift in their needs, particularly those that have traditionally not required large-scale computing systems, but now find such infrastructure essential to modern STEM education. “My work in distributed computing is following this shift, which presents both challenges and opportunities. It creates an innovation space for people like me—one that serves the country in ways nothing else I’ve done in my career ever has. It’s an exciting time to be in this field.”

 “Academia has an opportunity to provide value to industry in ways that go beyond just educating people. If we align workforce development with real-world challenges, then the people we educate will be that much more valuable and effective in industry roles. There’s a partnership model waiting to be built—one where research, education, and industry innovation all feed each other. In the years ahead, the financial footing of academia can’t rely solely on tuition, federal and state funding, or philanthropy. We need a new model where industry directly funds collaborative problem-solving and, in turn, derives real value. That’s how we ensure that academic research and education remain not only relevant, but essential to society’s future.”

— Frank Wuerthwein, Ph.D.,
Director of the San Diego Supercomputer Center (SDSC), Executive Director of the Open Science Grid (OSG), Professor of Physics and Data Science, UC San Diego

Supporting Social Mobility

When it comes to the future of education and the national workforce, Wuerthwein notes that the spotlight shouldn’t just be on elite research universities. “There are 20 million students in post-secondary education in the U.S. alone. Looking at California specifically, more than 2 million students are enrolled in the state’s community colleges, which serve more than twice as many students as the California State University (CSU) and University of California (UC) systems combined. When you add up private institutions like Caltech, Stanford, and others, they barely register in comparison to these numbers.”

This reality, Wuerthwein says, is crucial for understanding where educational investments in data and computing infrastructure should be made. “The majority of higher education students and the future of the workforce are in community colleges. And if they all need to support data and compute infrastructure to teach generative AI, that presents a significant and urgent challenge.”

In California, a well-designed educational ecosystem connects public high schools, community colleges, and the state’s public university systems. “The state has a very strong organizational principle that provides clear, intentional pathways for student advancement,” says Wuerthwein. “Many high schools offer community college courses on-site or allow high school students to enroll directly; these courses are designed to augment AP coursework.”

California community colleges also feed directly into CSU and UC campuses through a structured and long-standing transfer system. “Half of the incoming students at SDSU come from community colleges, and roughly 30% of UC San Diego’s student body enters through the community college system,” shares Wuerthwein. “This structure creates real social mobility. Students can complete two years at a community college and then transfer to CSU or UC to finish their bachelor’s degree. It’s a very well-established program.”

However, this structure isn’t without its challenges, particularly when it comes to academic continuity. “When you explore how the CSUs or UCs integrate these students, it requires an impedance matching between what they learn in the first two years and what they need to know for the final two years,” explains Wuerthwein. “That impedance matching has been a persistent challenge. For the system to fully serve students, especially in emerging fields like data science and AI, it must not only enable access, but ensure alignment in curriculum, skills, and technological infrastructure across all tiers.”

Developing Shared AI Infrastructure

By developing shared AI infrastructure, Wuerthwein says they can significantly ease the transition for students moving through the education pipeline. “The structured nature of California’s public education system creates a strong incentive for faculty at institutions like UC San Diego to actively engage with community colleges. It’s essential that transfer students arrive prepared for the advanced coursework they will encounter in their third and fourth years. San Diego is building a unified system that allows students from high school through community college, CSU, and UC to access and learn on the same computing infrastructure. This shared environment supports education in AI, data-intensive computing, programming, and related fields, creating consistency and continuity across institutions. We’re just at the beginning of this journey, but the goal is to develop infrastructure that supports a common curriculum, smooths academic transitions, and ultimately advances social mobility across the state.”

For Wuerthwein, one of the key lessons learned over more than two decades working in distributed computing is that the most significant challenges are often not technical, but social. “In computer science, there’s a saying that every problem can be solved with another layer of indirection. Technical problems can usually be addressed. Social problems are much harder. Creating a truly unified system, one that spans high schools, community colleges, CSUs, and UCs, requires more than just technology. It demands collaboration, alignment, and shared purpose among institutions that have traditionally operated in silos. To make this work, you have to build around something common that all these institutions can align with.”


“To help advance education and research statewide, we have the Corporation for Education Network Initiatives in California (CENIC),” continues Wuerthwein. “CENIC serves educational institutions, as well as public libraries. In the long term, we want to integrate public libraries as well, because of their potential to be AI makerspaces. Many libraries already host physical makerspaces for hands-on learning, but I think they could evolve into digital hubs for AI education and experimentation. Ten years from now, I think we’ll see AI makerspaces in public libraries, in high schools, in community colleges, and in state universities. These shared environments would offer consistent tools and infrastructure and enable learners to move seamlessly through various stages of education and into careers.”

In California, where educational systems are structurally divided, such as the separation between the UC and CSU systems, a unified AI infrastructure could help create a continuum that supports not just degree programs, but also career training, lifelong learning, and workforce development. “A student could begin in high school, transition into a job, come back for certificates, or pursue ongoing learning,” Wuerthwein explains. “The whole pipeline would be built on a common, cost-effective infrastructure, and by scaling these systems efficiently and borrowing strategies from hyperscalers to optimize resources, there is an opportunity to drive down costs while expanding access. That’s the kind of system we’re building right now. One that supports education, equity, and innovation at scale.”

Wuerthwein says as they roll out the infrastructure, its broader social and educational implications have come into sharp focus. “At first, I thought of this purely as a distributed computing problem, but the more we roll out the infrastructure, the more we realize we’re building something bigger and creating social cohesion among educators. We’re building a community, and that community is becoming an interesting target for people who want to develop curriculum for different things.”

At a recent annual meeting, Wuerthwein collaborated with General Atomics and its partners in the fusion energy sector. “Their industry is preparing for a dramatic transformation: moving from research-focused efforts to commercial-scale fusion within the next 20 years, with hopes of becoming a trillion-dollar industry,” shares Wuerthwein. “That kind of scale-up creates a massive workforce challenge. Suddenly we had a platform where educators from community colleges, CSUs, and UCs were all in one place and could engage with fusion experts who are interested in education. This new ecosystem makes it possible to collaboratively develop curricula that respond directly to the needs of emerging industries.”

Looking ahead, Wuerthwein plans to bring agricultural technology into the fold. With California’s agricultural sector facing growing challenges, there’s a pressing need to incorporate ag-tech themes into STEM education across the state. “We’ve created a social network of educators through this infrastructure,” Wuerthwein says. “Why not use that network to connect with experts in agriculture, align priorities, and figure out how to bring real-world problems into the classroom? In this way, the AI infrastructure is more than a technical backbone, it becomes a facilitator for curriculum development, workforce alignment, and cross-sector collaboration. That’s what makes it so exciting. We’re not just moving bytes around anymore. We’re building an ecosystem—one that’s socially and economically meaningful.”

Solving Scalability Challenges

As director of the SDSC, Wuerthwein leads initiatives that advance high-performance computing, distributed cyberinfrastructure, and collaborative research across scientific disciplines, and sees today’s challenges through the lens of scalability. “Any kind of scalability problem is exciting to the people at SDSC; it speaks to our native skillset and is in our DNA,” says Wuerthwein. “From scaling user support and training to expanding systems across thousands of institutions, SDSC staff view these demands not just as logistical hurdles, but as compelling research questions in their own right. The scale of the educational challenge of serving 20 million students across nearly 4,000 institutions nationwide offers a fertile testing ground for innovation. This mission aligns perfectly with our core expertise; our team specializes in solving scalability challenges, and this is exactly the kind of problem we’re built to tackle. For SDSC, the intersection of societal need and technical ambition creates a rare opportunity to make meaningful contributions to education and workforce development while advancing the science of distributed computing itself.”

As Principal Investigator (PI) of the National Research Platform (NRP), Wuerthwein sees the initiative as central to enabling the large-scale transformation of education and research that he’s championing. “At its core, the NRP is designed to support scalable, distributed cyberinfrastructure across a wide range of institutions and use cases,” explains Wuerthwein. “The platform operates with three core goals: enabling educational access at scale, reducing institutional costs, and fostering innovation in heterogeneous computing.”

One of the NRP’s foundational contributions is the development of a conceptual software stack, from networking and console layers to higher-level services such as JupyterHub and AI development tools. “The platform is being designed with both vertical and horizontal openness,” states Wuerthwein. “Horizontally, it aims to reach over a thousand institutions of higher education across the country. Currently, about 70 institutions participate. Vertically, the platform is built as an open environment where both academic and commercial developers can build tools and services, especially those focused on affordable, scalable AI education. Commercial entities are interested in building on NRP, because it provides a more cost-effective alternative to commercial cloud services, especially for educational institutions.”
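To make the higher layers of that stack concrete: on a Kubernetes-based platform of this kind, a campus-facing JupyterHub typically launches each user’s notebook server as a pod on the shared cluster. Below is a minimal sketch of a jupyterhub_config.py using the community KubeSpawner package; the image name and resource limits are illustrative assumptions, not NRP’s actual settings.

# Minimal sketch of a JupyterHub-on-Kubernetes configuration (jupyterhub_config.py).
# JupyterHub supplies the `c` config object; assumes the jupyterhub-kubespawner package.
c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"
c.KubeSpawner.image = "jupyter/scipy-notebook:latest"    # placeholder course image
c.KubeSpawner.cpu_limit = 2                               # illustrative per-student cap
c.KubeSpawner.mem_limit = "4G"
# Optional: request one GPU for AI coursework (resource name varies by cluster).
c.KubeSpawner.extra_resource_limits = {"nvidia.com/gpu": "1"}

Because every participating institution’s hub can point at the same cluster and images, a student who moves from a community college to a CSU or UC campus can, in principle, land in an identical environment.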

NRP also addresses a pressing technical reality: the end of Moore’s Law, which predicted that computers would become more powerful and cheaper at a steady pace, roughly doubling in processing capability every couple of years. “The traditional computing model where all intelligence resides in the central processing unit (CPU) is being replaced by a new paradigm where peripherals themselves are becoming programmable,” explains Wuerthwein. “As the regular doubling of CPU performance slows, hardware architectures are becoming increasingly diverse and bringing new challenges in system design, integration, and programming. We’re building a garden of heterogeneous architecture. Using Kubernetes rather than traditional batch systems, NRP can support a wide variety of computing devices, including Field-Programmable Gate Arrays (FPGAs), programmable Network Interface Cards (NICs), and even programmable network switches. This flexibility turns NRP into a playground for experimentation, where computer scientists and domain researchers can collaborate.”
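As an illustration of how such devices surface to users, Kubernetes schedules accelerators as “extended resources” advertised by device plugins, and a workload simply declares what it needs. The sketch below uses the official Python client for Kubernetes; the resource name and container image are placeholders, not NRP specifics.

# Minimal sketch: requesting an accelerator through Kubernetes extended resources.
# Uses the official 'kubernetes' Python client; the resource name ("nvidia.com/gpu")
# and image are illustrative; FPGAs and SmartNICs expose analogous plugin names.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="accel-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="python:3.12-slim",  # placeholder workload image
                command=["python", "-c", "print('ran on an accelerator node')"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # scheduler finds a node with the device
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

A traditional batch system would typically need a separate queue per device type; here the scheduler matches the declared resource to whatever node advertises it, which is what makes a “garden of heterogeneous architecture” manageable.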

“By bringing together technologists and domain scientists on the same platform, we’re creating the opportunity for serendipity,” adds Wuerthwein. “The goal is to accelerate the adoption of emerging architectures while also helping researchers in fields like biology, physics, and astronomy make sense of, and take advantage of, these new tools. Part of the NRP’s mission is to scale out infrastructure for education affordably and inclusively and enable institutions of all sizes to access the tools they need to teach and research AI. Secondly, we want to foster innovation at the infrastructure level and create a collaborative space where new computing paradigms can be tested, adapted, and adopted.”

Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs at Edge shares, “We are deeply grateful for the partnership with the National Research Platform under Dr. Frank Wuerthwein’s leadership. Through the National Science Foundation (NSF)-funded CRISPIE project—Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education—we are working together to improve equitable access to advanced research networks and innovation.”

Growing Importance of Data-Intensive Science

In looking at scientific advancement over the past two decades, Wuerthwein says this growth has been significantly powered by progress in computing and data capabilities. “Much of this progress stems from how advances in computing have enabled both the collection and consumption of larger, more complex datasets. As instrumentation has improved, added more sensors, and sampled at faster rates, these tools have generated exponentially more data. But the ability to make sense of that data also hinges on computational advances. Moore’s Law has allowed for exponential growth in hardware performance at constant cost, but that exponential gain is slowing. The only way capability can keep growing is either through radical changes in hardware architecture or through radical advances in algorithms.”
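The compounding behind that claim is worth making explicit: a fixed doubling cadence multiplies quickly, which is why its slowdown is so consequential. A short sketch, using the classic two-year doubling period as a rule of thumb rather than a measured constant:

# Rule-of-thumb compounding: capability multiplier after `years` years
# if performance doubles every `period` years (classic Moore's Law cadence).
def moore_multiplier(years: float, period: float = 2.0) -> float:
    return 2.0 ** (years / period)

print(moore_multiplier(10))  # ~32x per decade at a two-year doubling cadence

Lose that cadence and the roughly 32x-per-decade gain has to come from somewhere else, which is the opening Wuerthwein sees for new architectures and algorithms.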

As institutions across the country look to scale data infrastructure for AI, science, and education, research and education networks (RENs) such as CENIC and Edge are proving to be indispensable collaborators in that effort. “Research and education networks play an incredibly crucial role in data-driven and computational learning. They have created a social network of all institutions in their region that provide valuable collaboration opportunities. In the layered cake of technology, we lay on top of each other, we’re not competitors. Together, we can provide services that neither of us could offer alone, and I view the entire REN community as a natural partner in helping us achieve our mission and drive national progress in education and research.”

— Frank Wuerthwein, Ph.D.,
Director of the San Diego Supercomputer Center (SDSC), Executive Director of the Open Science Grid (OSG), Professor of Physics and Data Science, UC San Diego

This is where Wuerthwein says AI becomes a pivotal factor: “AI is not just a trend, but a strategic lever for future scientific breakthroughs. It has the potential to give you exponential growth in algorithmic capability for the same amount of money and may be the most prominent path forward for maintaining the pace of scientific progress, particularly as hardware improvements plateau.”

With the growing importance of AI and data-intensive science, researchers are increasingly relying on scalable computing power. “Traditionally, researchers needed to rely on physical allocations from university computing centers or federally funded supercomputing facilities,” says Wuerthwein. “Access was limited, required planning, and was often restricted to select institutions. In contrast, cloud providers like AWS, Google Cloud, and Azure now offer on-demand scalability, but managing those resources in a federally accountable way posed a new challenge. CloudBank 2 is a program funded by the NSF and provides commercial cloud resources to the nation’s science and engineering researchers and educators.”

CloudBank allows the NSF to allocate cloud credits to researchers in a way that maintains full visibility and accountability. “This system is the cloud-computing equivalent of what the NSF has long done through its supercomputing allocation program, ACCESS,” explains Wuerthwein. “A researcher can be awarded, for example, $10,000 in cloud credits to run experiments on commercial platforms. CloudBank ensures the NSF can track that usage, including who accessed it, how it was used, and what results it supported, and offer a transparent structure for reporting back to Congress and the public. Ultimately, CloudBank is the interface between the cloud providers, the community, and the NSF, and helps democratize access to advanced computing, especially for data-intensive research and AI development.”
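The allocation-and-accountability loop Wuerthwein describes can be pictured with a small, purely hypothetical sketch: a ledger that caps spending at the awarded amount and records each debit for later reporting. None of the names below correspond to CloudBank’s real interfaces.

# Purely hypothetical sketch of award-level credit accounting in the spirit of
# CloudBank: cap usage at the awarded credits and keep a reportable ledger.
from dataclasses import dataclass, field

@dataclass
class CloudAward:
    researcher: str
    credits_awarded: float                      # e.g., a $10,000 NSF cloud-credit award
    ledger: list = field(default_factory=list)

    def record_usage(self, provider: str, purpose: str, cost: float) -> None:
        spent = sum(entry["cost"] for entry in self.ledger)
        if spent + cost > self.credits_awarded:
            raise ValueError("usage would exceed the awarded credits")
        self.ledger.append({"provider": provider, "purpose": purpose, "cost": cost})

award = CloudAward(researcher="pi@example.edu", credits_awarded=10_000.0)
award.record_usage("AWS", "model training", 812.50)  # hypothetical debit
print(f"remaining credits: {award.credits_awarded - sum(e['cost'] for e in award.ledger):,.2f}")

The reporting question (“who accessed it, how it was used, what results it supported”) then becomes a query over such ledgers, aggregated across awards.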

Creating a Seamless Compute Ecosystem

As a longtime advocate for distributed computing, Wuerthwein sees the Open Science Grid and the NRP not as separate entities, but as a unified way to build global distributed systems. “In my mind, OSG is part of a continuum of distributed computing,” Wuerthwein explains. “NRP actually shows up in OSG as a single cluster, so we’re effectively the cluster provider for institutions that don’t want or can’t afford to run their own. For large research universities, which often maintain their own high-performance computing clusters, there’s a need to retain tight control over access, security, and identity management. These institutions integrate with national infrastructure like OSG at higher layers, contributing resources while maintaining autonomy.”

“Smaller institutions, such as community colleges, typically lack the resources or need to manage their own clusters,” Wuerthwein continues. “For them, NRP acts as a turnkey solution and provides compute and data services run by experts at places like SDSC, without requiring in-house infrastructure or staff. If you want full control, you need your own people to manage systems, networks, storage, and that requires scale and money. If you don’t have that scale, outsourcing it to NRP makes more sense.”

Wuerthwein says the key difference between OSG and the NRP is where each integrates into the stack and how much control institutions prefer. “Research universities tend to run their own infrastructure and want to retain more control, so they connect at higher layers. But for community colleges or institutions without large-scale research needs, it may make more sense to rely on a platform like NRP to handle computing for them. What we’re doing is essentially providing a cluster on their behalf, where NRP appears inside OSG as a single cluster. That means we take care of the heavy lifting while giving these institutions access to national-scale resources.”

One challenge in connecting these systems is ensuring that they communicate and account consistently. “Making this ecosystem work requires solving a range of technical coordination issues, like agreeing on naming conventions, tracking usage, and aligning systems across layers,” explains Wuerthwein. “In the end, this layered model allows us to scale access to advanced computing in a cost-effective and equitable way, support institutions of all sizes, and enable scientific collaboration and education at scale.”

Advancing Science through Partnership

As institutions across the country look to scale data infrastructure for AI, science, and education, research and education networks (RENs) such as CENIC and Edge are proving to be indispensable collaborators in that effort. “Research and education networks play an incredibly crucial role in data-driven and computational learning,” says Wuerthwein. “They have created a social network of all institutions in their region that provide valuable collaboration opportunities. In the layered cake of technology, we lay on top of each other, we’re not competitors. Together, we can provide services that neither of us could offer alone, and I view the entire REN community as a natural partner in helping us achieve our mission and drive national progress in education and research.”

As AI and data-driven discovery continue to shape science, education, and industry, Wuerthwein remains energized by the opportunities that lie ahead. His motivation is rooted not only in solving complex technical challenges, but in connecting people, domains, and ideas. “What I am most excited about personally is the exposure my job provides me to so many different, exciting opportunities,” he says. “I love to learn about new things and put structures together that serve different domains and problems. Solving problems that are intellectually interesting and impactful inspires me to get up in the morning.”

“Academia has an opportunity to provide value to industry in ways that go beyond just educating people,” continues Wuerthwein. “If we align workforce development with real-world challenges, then the people we educate will be that much more valuable and effective in industry roles. There’s a partnership model waiting to be built—one where research, education, and industry innovation all feed each other. In the years ahead, the financial footing of academia can’t rely solely on tuition, federal and state funding, or philanthropy. We need a new model where industry directly funds collaborative problem-solving and, in turn, derives real value. That’s how we ensure that academic research and education remain not only relevant, but essential to society’s future.”

The post Expanding Access to Data-Driven Discovery through Shared Infrastructure appeared first on NJEdge Inc.


MyData

Why a decentralised web index puts people first?

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. A decentralised web index […]

EdgeSecure

Accessibility Priorities for 2025: Policy, Procurement, and Practice


Webinar
September 30, 2025
1:00 PM ET

As higher education institutions strengthen their ties to federal research funding and sensitive data collaborations, cybersecurity maturity is no longer optional; it’s a necessity. This session offers a practical, leader-focused overview of the Cybersecurity Maturity Model Certification (CMMC) framework, explaining why it matters even beyond Department of Defense-funded projects, and what higher education leaders need to do to prepare.

Featuring real-world case studies, this presentation highlights the actual risks of non-compliance and the chance to take the lead with Edge’s scalable cybersecurity solutions.

Key Takeaways:

Understand the Stakes: Discover why CMMC is increasingly a key standard for federally funded research and third-party risk management, extending beyond DoD contracts.
Learn from Real Failures: Review high-profile cases where non-compliance resulted in lawsuits, reputational damage, and multi-million-dollar penalties.
Navigate the Complexity: Achieve a clear understanding of CMMC 2.0 levels, their effects on your IT environment, and how they align with frameworks like GLBA and NIST 800-171.
Tailored for Higher Ed: Discover how Edge helps navigate decentralized systems, limited budgets, and academic culture to create a roadmap that works for your campus.
Actionable Roadmap: Leave with practical steps your institution can take now to assess, prepare, and align with CMMC – before it becomes mandatory.

Presented By:

Dr. Dawn Dunkerley, Principal Virtual Chief Information Security Officer (vCISO), Edge
Bobby Rogers, Jr., Virtual Chief Information Security Officer, Edge

Dr. Dawn Dunkerley is a nationally recognized cybersecurity leader dedicated to helping organizations balance risk with mission success. With extensive expertise in cyber operations, cybersecurity, and organizational resilience, she has published widely, spoken at major industry events, and serves as an exam developer for ISC2. Her career includes pioneering cyber mission assurance initiatives for the U.S. Army and leading teams to address mission-critical national security challenges, along with years of experience supporting the cybersecurity needs of higher education institutions.

Bobby Rogers, Jr. is a senior cybersecurity professional with over 30 years of experience, serving as a trusted auditor and virtual Chief Information Security Officer (vCISO) for K-12, higher education, government, and commercial organizations. A retired U.S. Air Force Master Sergeant, he has built and secured networks worldwide and specializes in governance, risk, and compliance. Bobby is also a published author of multiple cybersecurity certification guides and an experienced trainer, bringing deep expertise in CMMC, NIST, and GLBA compliance.

Register Now »

The post Accessibility Priorities for 2025: Policy, Procurement, and Practice appeared first on NJEdge Inc.

Tuesday, 02. September 2025

EdgeSecure

EdgeCon Autumn 2025

October 9, 2025 at Rider University

Date: October 9, 2025
Location: Rider University
Time: 9 a.m.-5 p.m.
Attendee Ticket: $49

Event Location:
Rider University

REGISTER TODAY »

Vendor/Sponsorship Opportunities at EdgeCon

Exhibitor Sponsorship and Branding/Conference Meal sponsorships are available. Vendors may also attend the conference without sponsoring, but at a higher ticket price of $250.

Contact Adam Scarzafava, Associate Vice President for Marketing and Communications, for additional details via adam.scarzafava@njedge.net.

Download the Sponsor Prospectus Now »

Accommodations

Hilton Garden Inn Princeton Lawrenceville
1300 Lenox Drive
Lawrenceville, NJ 08648

BOOK YOUR ROOM TODAY »

You may also reserve a room by contacting the hotel directly via 609-895-9200. Be sure to mention that you’d like your reservation under the “EdgeCon Group Block” to obtain the conference rate.

Details of Conference Proceedings and Submissions Form are now available »

Conference Agenda

8-8:30 — Check In / Registration
8:30-9:30 — Exhibitor Networking & Breakfast
9:40-10:20 — Breakout Session 1
10:30-11:10 — Breakout Session 2
11:20-12:30 — Keynote Panel & Awards
12:30-1:30 — Exhibitor Networking & Lunch
1:40-2:20 — Breakout Session 3
2:30-3:10 — Breakout Session 4
3:10-4 — Coffee & Connections: Exhibitor Networking

Breakout Sessions

Protecting your Data and Reducing Institutional Risk: SaaS ERP vs. On-Premise System

As cybersecurity threats grow and data regulations evolve, the limitations of legacy on-premise ERP systems are increasingly evident. This session will explore how shifting to a modern SaaS ERP like Workday strengthens data protection, reduces institutional risk, and ensures long-term compliance.

We’ll compare SaaS and on-premise systems in terms of governance, cybersecurity, and regulatory alignment, showing how a unified cloud platform enables real-time visibility, audit readiness, and consistent policy enforcement.

The session will also highlight how AI and automation built into SaaS ERPs proactively detect and mitigate risks—capabilities often lacking in older systems. Attendees will learn how Workday supports institutional resilience through faster recovery, ongoing updates, and simplified compliance with emerging legal standards.

Whether you’re already on Workday or just exploring cloud options, this session will offer practical strategies to protect data, manage risk, and future-proof your institution.

Presenters:

Stephanie Druckenmiller, Executive Director, Enterprise Technologies, Northampton Community College
Bryan McGowan, Workday Principal Enterprise Architect, Workday

Using AI to Improve Data Accessibility

We know that getting timely answers from institutional data can be challenging, especially when those answers are buried in reports or require technical skills to retrieve. That’s why we’ve started developing a set of AI-powered tools to make it easier for faculty, staff, and administrators to get the information they need, when they need it.

One of our key projects is a conversational AI tool that allows users to ask everyday questions like “How many students are enrolled in a specific program this term?” or “Which majors are growing the fastest?” and get real-time answers without needing to write a single line of code or run a complex report. The goal is to put data directly into the hands of the people who use it for advising, planning, and making decisions.

Behind the scenes, this tool uses Python, ThoughtSpot, and web technologies. But on the front end, it’s about simplicity and usability, removing technical barriers so users can focus on taking action, not figuring out how to run a query applying various filters on dashboards.

Attendees will leave with practical strategies and tools they can apply in their own offices to improve data accessibility and increase efficiency. This is a traditional presentation enhanced by live user interaction.

Presenters:

Bharathwaj Vijayakumar, Assistant Vice President, Office of Institutional Research and Analytics, Rowan University
Samyukta Alapati, Associate Director, Office of Institutional Research and Analytics, Rowan University

Modernizing Cybersecurity in Higher Ed: How Stevens IT Transformed User Risk Management

Join Jeremy Livingston, CISO of Stevens Institute of Technology, and David DellaPelle, CEO of Dune Security, for a practical discussion on what it takes to replace outdated training tools with a real-time, adaptive approach to user risk.

With social engineering threats becoming more personalized and evasive, Stevens needed more than annual check-the-box training. They needed visibility into user-level risk, and a way to act on it. By onboarding Dune Security’s User Adaptive Risk Management platform, Jeremy and his team replaced static modules with role-based testing and training tailored to faculty, staff, and students.

In this session, you’ll learn how Stevens IT:

Eliminated generic compliance training in under a month
Scored risk at the individual and departmental level
Integrated with Workday and Okta to monitor user behavior and access
Enabled campus-wide accountability without adding administrative burden

You’ll walk away with a blueprint for transitioning from awareness to action, using real-world signals, not assumptions. Whether you’re responsible for securing a university, agency, or school system, this session will show you how to build a modern human-layer defense that actually scales.

Presenters:

Jeremy Livingston, Chief Information Security Officer, Stevens Institute of Technology
David DellaPelle, Chief Executive Officer, Dune Security

From Data Chaos to Clarity: Evolving Toward an AI-Enabled Data Ecosystem

As institutions face increasing demands to modernize fragmented data systems, the path forward often starts with foundational change. This session highlights lessons learned through a data modernization initiative involving Miami University and a strategic consultant to Miami. The effort focused on cloud-first infrastructure, scalable reporting environments, and readiness for AI use cases.

The presentation will discuss a non-exclusive implementation approach using commercially available platforms to support data integration across enterprise systems, including HR and financial systems. The speakers will outline how these efforts improved internal data coordination, enabled more consistent access to analytics, and set the stage for responsible exploration of AI and automation tools.

Attendees will gain:

A practical framework for evolving institutional data infrastructure
Lessons from integrating enterprise platforms to streamline analytics
Examples of how AI tools (e.g., forecasting and decision support) can be enabled through foundational change
Insights on data governance, change management, and user enablement
Reflections on navigating internal challenges in public-sector modernization efforts

This session offers applied, real-world perspectives on building a sustainable foundation for enterprise data transformation. No endorsement of any specific product or vendor is implied.

Presenters:

Randy Vollen, Director of Data & Business Intelligence, Miami University
Jon Fairchild, Director, Cloud & Infrastructure, CBTS

Designing for the Whole: A Multidimensional Framework for Responsible Innovation in the Age of AI

As artificial intelligence and emerging technologies rapidly transform our world, innovation must evolve beyond efficiency and novelty to reflect deeper human, ethical, and environmental priorities.

This session introduces a multidimensional framework for responsible innovation organized around four core domains: Performance & Design, Creative & Cognitive Dimensions, Human-Centered Values, and Ethical & Governance Principles.

Each dimension includes four key attributes—from Functionality and Originality to Empathy and Integrity—that collectively offer a holistic model for evaluating and guiding innovation in the AI era.

Learning Outcomes:

Apply a four-dimensional framework to assess the responsibility of innovations.
Distinguish key attributes across design, creativity, human values, and ethics.
Evaluate existing practices through a multidimensional, equity-centered lens.
Integrate the framework into decision-making, curriculum, or organizational strategy.

Presenter:

Michael Edmondson, Associate Provost, NJIT

AI Unlocked: Resources, Policy, and Faculty Training

Ready to move your institution beyond AI buzzwords and into real-world impact? Join us for a collaborative session that demystifies AI adoption in higher education from three critical vantage points. Forough Ghahramani (Edge) will kick things off with an insider’s tour of the National AI Research Resource (NAIRR) Pilot—an invaluable toolkit now available to educators and researchers nationwide. Next, John Schiess (Brookdale/Ellucian) will tackle the often-murky waters of institutional AI policy and regulation, sharing actionable strategies for crafting guidelines that support innovation while managing risk. Rounding out the session, Mike Qaissaunee (Brookdale) will reveal lessons learned from piloting faculty training programs designed to boost AI literacy and spark creative teaching applications. This traditional presentation is packed with practical insights, but we won’t leave you empty-handed—participants will walk away with curated instructional materials and resources to jumpstart their own AI journeys.

Presenters:

Michael Qaissaunee, Professor and Co-Chair, Engineering and Technology, Brookdale Community College
Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge
John Schiess, Technical Director, Office of Information Technology (OIT), Brookdale Community College

Protecting Privacy in the Age of AI Infused Pedagogy

This presentation, originally designed for K-12 settings, offers a foundational understanding of the critical privacy and security considerations that arise with the increasing adoption of Artificial Intelligence in educational environments. For higher education faculty and administrators, these principles are equally, if not more, relevant given the diverse data streams and research contexts within universities.

The core content covers:

Understanding AI Applications: We explore various ways AI is being utilized in education, from personalized learning platforms and intelligent tutoring systems to automated assessment tools, content generation, and administrative analytics. In higher education, this extends to research support, student success prediction, and advanced pedagogical tools.
Navigating Privacy Concerns: The presentation highlights the “data dilemma,” emphasizing the types of sensitive student and interaction data collected by AI tools. Key concerns include data storage, access protocols, the risk of de-anonymization, and the need to align with relevant data privacy regulations (e.g., GDPR, state-specific privacy laws, institutional data governance policies).
Addressing Security Risks: We delve into the cybersecurity threats posed by AI, such as data breaches and sophisticated phishing attacks. Crucially, the presentation addresses the pervasive issue of algorithmic bias, explaining how biased training data can lead to unfair outcomes in areas like admissions, grading, or resource allocation. The challenge of AI-generated misinformation and its impact on academic integrity is also discussed.
Implementing Safeguards & Best Practices: The presentation outlines a proactive, multi-step approach for responsible AI integration. This includes developing clear institutional policies (acceptable use, data governance, AI ethics), conducting rigorous vendor vetting, providing comprehensive training for faculty and staff, fostering digital literacy among students, and maintaining transparent communication with the university community.

Ultimately, the presentation underscores that while AI offers transformative potential for teaching, learning, and administration in higher education, its responsible and ethical implementation hinges on a deep understanding of its privacy and security implications, coupled with robust institutional frameworks.

Presenter:

Teresa Keeler, Project Manager, NJIT

Are We Ready? Rethinking AI Readiness, Risk, and Responsibility in Higher Education

As artificial intelligence reshapes nearly every industry, higher education faces a pressing question – Are we truly ready to harness AI effectively, responsibly, and sustainably? By the end of this session, attendees will be able to:

Assess AI Readiness using a practical self-assessment checklist across technical, cultural, and governance dimensions.
Recognize Barriers including technical, financial, and cultural challenges to adoption.
Evaluate Sustainability and Ethics by analyzing environmental impact, equity, and algorithmic fairness.
Formulate Strategic Questions for assessing AI vendors, platforms, and models.
Design Actionable Roadmaps to move from AI interest to readiness aligned with institutional missions.

Attendees will also leave understanding that readiness ≠ awareness; AI adoption depends as much on governance and culture as on technology. Sustainability must be central, addressing cost and environmental impact. And higher education needs tailored AI models rather than retrofitted industry solutions.

Presenters:

Nandini Janardhan, Programmer Analyst/Applications Manager, Fairleigh Dickinson University
Sahana Varadaraju, Senior Application Developer, Rowan University

Data in Action: Empowering Decision-Making and Driving Efficiency with Tableau Online

In the dynamic environment of higher education, data-driven decision-making is not a luxury—it’s a necessity. This session explores how our community college leveraged Tableau to transform raw institutional data into interactive, insightful dashboards across key business areas including enrollment management, finance, student services, and academic affairs. By centralizing data visualization and analysis, we’ve empowered stakeholders with real-time insights that drive efficiency, support strategic planning, and uncover opportunities for process improvement.

Presenters:

Moe Rahman, AVP/CIO, Community College of Philadelphia
Laura Temple, Associate Director, Community College of Philadelphia

An Emerging Trend: Learning Experience Design and Design Thinking Together

As online education continues to evolve beyond emergency remote learning paradigms, institutions face the critical challenge of creating engaging, effective digital learning experiences that rival traditional classroom instruction. This presentation examines the strategic expansion and integration of specialized design roles within Villanova University’s Office of Online Programs, demonstrating how a multidisciplinary approach combining Learning Experience Designers, Multimedia Experience Designers, and Instructional Designers can transform online program delivery and faculty collaboration.

Drawing from our organization’s structure, this session will explore the distinct yet complementary roles of each design team member: Learning Experience Designers who focus on holistic student journey mapping and engagement strategies; Multimedia Experience Designers who create immersive, interactive content that manages cognitive load and enhances retention; and Instructional Designers who ensure the learning management system and learning technologies work together to achieve the goals of the faculty member. We will demonstrate how this specialized division of labor, while maintaining collaborative workflows, has enabled more sophisticated and targeted design interventions across diverse academic disciplines.

Central to our approach is the integration of Design Thinking methodology, which has fundamentally reshaped how our office approaches faculty partnerships and program development. Through iterative cycles of empathy mapping, ideation, prototyping, and testing, we have moved beyond traditional service-provider models to become strategic partners in curriculum innovation. This human-centered approach has not only improved learning outcomes but has also enhanced faculty buy-in and engagement with online pedagogical practices.

Recognizing that not all institutions have the resources for extensive staffing, the session will conclude with practical strategies for implementing similar frameworks with smaller teams. We will share role hybridization models, technology solutions that amplify individual capacity, and partnership strategies that can help smaller offices achieve comparable outcomes through strategic resource allocation and cross-functional collaboration.

Attendees will leave with concrete tools for assessing their own organizational needs, building compelling cases for design team expansion, and implementing design thinking approaches regardless of team size. This session is ideal for online learning administrators, instructional design managers, and institutional leaders seeking to elevate their digital learning offerings.

Presenter:

Brian Gall, Director, Learning Experience Design, Villanova University

Beyond Compliance: Designing Digital Learning Environments that Are Accessible, Equitable, and Sustainable

In response to the federal mandate that all public institutions comply with revised Title II of the Americans with Disabilities Act by April 2026, The College of New Jersey (TCNJ) has launched a coordinated initiative to improve the accessibility of digital course materials and online environments. Rather than approaching compliance as a legal checkbox, our campus has framed the work to fundamentally improve student and faculty experiences through inclusive design, transparency, and collaboration. This presentation offers a case study in progress, tracing our institutional journey from grassroots collaboration and capacity-building to structured, strategic initiatives.

This session shares strategies that have led to meaningful progress in faculty development and systemic change. These include:

The implementation of an Accessible Course Design Sprint, a low-barrier, high-impact semester-long professional development opportunity using the Pope Tech Canvas accessibility checker.
Strategic communication and outreach efforts, including partnership-driven emails and meetings with departments to frame accessibility as equity work.
A centralized accessibility resources hub developed by the Center for Excellence in Teaching and Learning (CETL), supporting faculty in building “born accessible” content.
Collaborations with the Division of Inclusive Excellence/ARC, legal counsel, and Academic Affairs to align accessibility goals across units, informed by an internal audit.

The session emphasizes sustainable change—approaching accessibility not as a project with an endpoint, but as a continual part of the digital transformation of teaching and learning.

Presenters:

Judi Cook, Executive Director, Center for Excellence in Teaching and Learning, The College of New Jersey
Ellen Farr, Director of Online Learning, The College of New Jersey
Mel Katz, Accommodations Support Specialist for Curriculum and Assessment, The College of New Jersey

Putting the Pieces Together: Solving the AI Faculty Development Puzzle

How can institutions offer meaningful faculty development opportunities that speak to all faculty, from enthusiastic early adopters to staunch resistors who view AI as an existential threat to academic integrity? This session presents hard-won insights from a year-long journey of implementing comprehensive AI faculty development programming across diverse academic disciplines. We’ll share our successes and offer a candid examination of the unexpected challenges of serving faculty with conflicting perspectives on generative AI’s role in education.

Our experience reveals a fundamental tension: one-size-fits-all approaches to faculty development centered on AI cannot address the complex spectrum of faculty needs, from pedagogical autonomy to discipline-specific implementation strategies. Some faculty view AI tools as critical additions to their curriculum; others grapple with deeper philosophical questions about creativity, authorship, and knowledge creation. Designing development opportunities that address the needs of this entire spectrum has proven to be a challenging task. An effective approach requires careful planning and a nuanced understanding of faculty motivation. Ultimately, our goal is to ensure that faculty are equipped to help students thrive in a learning environment that has been transformed by AI.

During this session, we’ll share models of successful AI faculty development programming that meet the needs of diverse faculty perspectives. These program models include (a) a flagship lunch & learn series, (b) a self-paced, asynchronous pathway program, (c) a faculty reading group, and (d) a student-faculty discussion panel on AI. Attendees will walk away with battle-tested approaches for building AI literacy across their campus.

Presenters:

Carly Hart, Director, Instructional Design & Technology, Rutgers University-Camden
Naomi Marmorstein, Associate Provost for Faculty Affairs, Rutgers University-Camden

Strategic Foundations for AI: Turning Process, Architecture, and Data into Institutional Advantage

This session challenges the notion that artificial intelligence can be a plug-and-play solution. For colleges and universities, sustainable success begins not with algorithms but with clarity of process and discipline of design. By mapping how the institution operates across the student lifecycle (business process modeling) and ensuring technology truly aligns with mission (enterprise systems architecture), leaders establish the strategic bedrock on which AI can deliver impact.

This approach also equips institutions to govern the Five Vs of data—the scale, speed, diversity, reliability, and value of information—and to harness next-generation real-time platforms (HTAP) that integrate daily operations with immediate analytics. The message is clear: without these foundations, AI is a distraction; with them, AI becomes a catalyst for competitiveness, innovation, and student success.

Key Takeaways include:

Leadership, not just technology. AI only creates value when presidents, provosts, CFOs, and CIOs first demand clarity in institutional processes and alignment in system design.
Data as a strategic asset. Managing the Five Vs of data turns raw information into trustworthy intelligence that drives financial sustainability, operational resilience, and academic excellence.
Real-time readiness. HTAP platforms that merge daily transactions with instant analytics can transform decision-making—provided the institution has invested first in process discipline and architectural vision.

A Small University's Path to GLBA Compliance

In the spring of 2024, Saint Elizabeth University was required to implement GLBA compliance measures under the FTC Safeguards Rule. During this session, Ron Loneker, Jr., Director, IT Special Projects at Saint Elizabeth University, will present how the university responded to its auditors’ requests and how its program was cleared by both the auditors and the Federal Student Aid Office. Following the presentation and Q&A, the session will be opened up to other schools in the audience to share their experiences, as time allows, for networking purposes.

Presenter:

Ron Loneker Jr, Director, IT Special Projects, Saint Elizabeth University

…and more sessions to be announced!

Exhibitor Sponsors

The post EdgeCon Autumn 2025 appeared first on NJEdge Inc.


Empowering Change Through Authenticity

“I’ve always believed in being my authentic self. At a recent legislative breakfast, I didn’t use a prepared speech but spoke from the heart and let the moment guide me. At this event, I shared the stories of our students in the CHAMPSS program and Brooklyn Recovery Corps. After I spoke, someone approached me and said, ‘I wasn’t even considering your school, but now I want to give your institution money.’ That’s the power of speaking honestly, it connects people and can move them to act. When you speak from a genuine place, it resonates and your audience doesn’t just hear you, they feel the message.” – Dr. Patricia Ramsey
President, Medgar Evers College

Dr. Patricia Ramsey’s journey to the college presidency is grounded in a deep commitment to service, education, and science. A biologist by training and an academic at heart, she steadily built a career that blends scholarly expertise with institutional leadership. On May 1, 2021, she made history as the first woman and the first scientist to become President of Medgar Evers College and has already made an impressive impact, bringing a thoughtful, student-centered vision to campus and the higher education community.

Leadership Rooted in Legacy

Before joining Medgar Evers College, Dr. Ramsey built a distinguished record of leadership across several respected institutions. She began her career at Norfolk State University, serving as both a Deputy Fundraising Officer and Associate Professor of Biology. She went on to become Vice President for Academic Affairs at Shaw University, followed by nearly 12 years at Bowie State University, where she held multiple leadership roles, including Department Chair, Provost, and Interim President. Ramsey later served as Provost and Vice President for Academic Affairs at Lincoln University in Pennsylvania. Most recently, she was a Senior Executive Fellow with the Thurgood Marshall College Fund (TMCF), where she contributed to the advancement of Historically Black Colleges and Universities (HBCUs).

When Ramsey was named president of Medgar Evers, she admits she didn’t know much about the campus beyond what she had read. But as someone who’s always led with authenticity, she trusted her instincts and something about this opportunity felt right. As she began researching the college’s namesake, civil rights icon Medgar Evers, she discovered a profound connection to the institution. “I took the route most scientists don’t,” she recalls. “I started with Wikipedia and while scanning the page, a name jumped out at me—that of a pastor who had organized a gathering of civil rights leaders a year after Evers’ death. When I clicked through, a familiar name appeared in the list of participants: my cousin, Curtis. He was known in our family as “Uncle Buck,” and had marched with Dr. Martin Luther King, prayed on the Edmund Pettus Bridge during Bloody Sunday, and passed down a legacy of activism that had always been a part of my upbringing.”

The moment gave her chills. “If cousin Curtis knew I was a finalist for the presidency of Medgar Evers College, he would be so tickled, so proud,” says Ramsey. That feeling only deepened once she officially arrived on campus. While giving a personal tour of the campus library to her sister-in-law, she came across a display case dedicated to Medgar Evers. Inside was a book, and on its cover was a photograph of the Edmund Pettus Bridge. Standing with Dr. King, Ralph Abernathy, and John Lewis was none other than her cousin Curtis. “It was like another sign. To see him honored here, right where I now serve, it was confirmation that this is where I’m supposed to be.”

Breaking Down Barriers

CUNY’s Medgar Evers College is located in Central Brooklyn, and just prior to joining the college as president, Ramsey learned of the severe impact of the COVID-19 pandemic on that community. Wanting to make a difference, she dropped everything to pursue a grant proposal opportunity that could bring significant funding to the College, even though she had less than 24 hours to meet the deadline. She succeeded, and just six weeks after her arrival the College received $20 million in grant funding, the single largest grant in the history of Medgar Evers College. Ramsey recalls, “As a scientist, in the past I’ve received a call on a Friday that if I submitted a proposal by Monday, I had an opportunity to get funding, but never one where I had less than 24 hours to submit. Needless to say, I didn’t sleep and called two of my new team members to assist. I want to publicly thank Dr. Kimberly Whitehead and Dr. Jesse Kane for the assistance that they provided me.”

CHAMPSS was designed for students who have exhausted traditional financial aid options but still can’t afford to complete their education. “We provide funding not only for tuition, but also for books and Metro Cards. We’re a commuter campus and most people don’t realize how many of our students struggle to afford even a train ride.”

— Dr. Patricia Ramsey
President, Medgar Evers College

Two projects resulted from the proposal: an economic recovery project for Central Brooklyn, the Brooklyn Recovery Corps (BRC), and a student success project, affectionately known as CHAMPSS (Cultivating Holistic Academic Mindsets to Promote Student Success). The BRC provides a paid internship for Medgar Evers College students, and CHAMPSS provides gap funding, up to the full cost of tuition, for each year that the student meets the CHAMPSS requirements.

Regarding the BRC, Ramsey explains, “Student interns are placed with small businesses and non-profits through a careful vetting process. The students benefit through paid internships that provide experiential learning, while helping them with their financial needs, and the small businesses or non-profits benefit from being able to add additional staffing, without the financial burden on their respective payrolls, especially considering the income loss that was exacerbated by the COVID-19 pandemic.” Ramsey continues, “Before placing the students, we focus on developing their soft skills, since one of the biggest challenges when graduates enter the workforce is the lack of strong interpersonal and professional skills.” This comprehensive preparation included professional dress standards, workplace etiquette, and communication. The BRC has grown from 30 partners at its inception to more than 80 partners. Ramsey adds, “We’ve received nothing but positive feedback from our partners, and some interns have even been inspired to become entrepreneurs or start nonprofits after their internship experience.”

CHAMPSS, the other project that was funded, was designed for students who have exhausted traditional financial aid options but still can’t afford to complete their education. “We provide funding not only for tuition, but also for books and Metro Cards,” Ramsey explains. “We’re a commuter campus and most people don’t realize how many of our students struggle to afford even a train ride.” According to an article released by 24/7 Wall Street in December of 2023, Medgar Evers College ranked No. 32 in the United States on its “Colleges with Most Upward Mobility” list. The median family earnings among enrolled students were $18,815 upon entry and grew to $42,968 after 10 years. Ramsey expressed, “Imagine a median family income below $19,000 for someone living in New York City.”

To maximize the return on investment, the CHAMPSS Program requires new freshmen to commit to graduating in four years, and new transfer students in two. “Most of our students work, often full time, which is why many of them do not finish their degrees,” notes Ramsey. “But now, with CHAMPSS providing gap funding that can cover up to the full cost of attendance, plus providing students with books and Metro Cards so they can get to campus, there are fewer barriers to them earning that degree. In May 2025, we celebrated the graduation of our first (2021) CHAMPSS freshmen cohort, which is evidence that when financial barriers are lifted, student potential can thrive.”

Staying the Course

Dr. Ramsey has been widely recognized for her visionary leadership and impact, not just within the college, but across New York and beyond. Her work has earned her a place among some of the most influential figures in education and public service. She was named to City & State’s Brooklyn Power 100 list three years in a row (2021, 2022, and 2023), and to their Higher Education Power 100 in both 2022 and 2023, recognizing her as one of the top college and university leaders in New York State.

Her influence has also been honored by Schneps Media’s Brooklyn Power List and celebrated with numerous awards highlighting her dedication to education, community advancement, and social justice. She even received a Lifetime Achievement Award from the 46th President of the United States in 2023 and the Trailblazer of the Year Award from Friends For A Better Buffalo in 2024. In 2025, she has already received an Emerald Award, a Congressional citation, and a Living Legend Award, and has received notice of three additional honors that will be bestowed later this year.

“I’ve always believed in being my authentic self,” says Ramsey. “At a recent legislative breakfast, I didn’t use a prepared speech but spoke from the heart and let the moment guide me. At this event, I shared the stories of our students in the CHAMPSS program and Brooklyn Recovery Corps. After I spoke, someone approached me and said, ‘I wasn’t even considering your school, but now I want to give your institution money.’ That’s the power of speaking honestly, it connects people and can move them to act. When you speak from a genuine place, it resonates and your audience doesn’t just hear you, they feel the message.”

In the face of ever-changing policies and potential challenges, Ramsey says higher education leaders need to remain strategic and focused on their core mission. “It’s a challenging time, between executive orders and changes in funding language, like the National Science Foundation’s flagged terms that can jeopardize grant approvals. But my advice to higher ed leaders is not to panic. Be strategic. Choose your words carefully but stay true to your mission. We must continue doing the essential work, but with greater mindfulness. Describe the communities you serve and don’t stop advocating for your students. Political winds shift and we can’t afford to lose focus. We must stay the course, adapt thoughtfully, and remember why we are here—to uplift, empower, and create lasting change for the communities we serve.”

The post Empowering Change Through Authenticity appeared first on NJEdge Inc.


Reimagining Education In the Age of AI


At EdgeCon Spring 2025, Steven D’Agustino, Ph.D., Senior Director for Online Programs at Fordham University, delivered a thought-provoking presentation on the evolving role of AI in education. In this discussion, he explored foundational concepts like machine learning (ML), natural language processing, and neural networks, while examining the promise and pitfalls of AI integration. D’Agustino encourages educators to be curators, contextualizers, creators, and collaborators, and to ensure AI is used deliberately and equitably to benefit all students.

Bringing Technology into the Classroom

Beginning his career in education in The Bronx, D’Agustino taught English at Theodore Roosevelt and James Monroe High Schools to children who spoke languages other than English. After nearly a decade in the classroom, D’Agustino then transitioned into administration, taking on a role as an English department chair at a middle school in Rockland County, while also pursuing his Ph.D. “During this time, I decided I wanted to pursue a career in higher education,” shares D’Agustino. “I joined Fordham University as a director for a federally funded grant under Title II-D of the No Child Left Behind Act. This early 2000s project focused on bringing technology into classrooms by equipping public and non-public school teachers in The Bronx with laptop computers and training.”

D’Agustino guided teachers on how to use this new technology through professional development sessions at Fordham University as well as weekly visits to their schools from an instructional technologist. Participating teachers received graduate credits and stipends as incentives. If teachers successfully completed the 10-month program, they received five laptop computers for use in their school. Recognizing that earlier efforts to support schools had been disrupted by staff turnover and school closures, D’Agustino responded by launching a more sustainable and community-centered initiative.

“We secured a grant to establish a 21st Century Community Learning Center in The Bronx that was specifically designed to support over-aged and under-credited high school students who were at risk of not graduating on time,” explains D’Agustino. “The center offered an innovative after school program that blended online learning with in-person academic support. Using an early learning management system called Plato Learning (now Edmentum), students could retake courses like Algebra II online while working alongside licensed teachers trained in the platform.”

“When I asked the students about their process, they offered a striking insight. Since they already had the answer, they didn’t need to focus on finding it and could instead think deeply about why it was the answer. This truly shows the transformative potential of AI in education. AI is not a shortcut, but is a tool that can empower learners to engage more critically and reflectively with content and each other. Knowledge is deeply contextual and cultural and is shaped by the norms of the environments in which it’s produced and valued. In schools, certain types of knowledge and ways of knowing are often privileged, along with specific methods of demonstrating understanding. AI offers a unique opportunity to challenge and reflect on these ingrained preferences.”

— Steven D’Agustino
Senior Director for Online Programs,
Fordham University


During the next several years, the center evolved into a vibrant hub of academic assistance, mentorship, and community service. It became a fieldwork site for social work students, provided tax preparation and financial planning support with help from the university’s accounting faculty, and partnered with local organizations to distribute desktop computers to families. Fordham University also donated fifty free dinners a week and set up mentoring sessions where college students could connect with high schoolers over a shared meal.

Through these efforts, what started as an afterschool program transformed into a comprehensive, community-based model for educational equity and student support.

The program quickly grew beyond its original scope, and the center started opening earlier and earlier, eventually including weekends. This unique community space partnered with the Boys and Girls Club across the street to introduce Wii Fit-based PE classes. “Believe it or not, the credit that prevents students from graduating more than any other is physical education,” notes D’Agustino. “Since this program required the use of technology, we gave students devices to track things like blood pressure and heart rate.”

The center’s comprehensive, student-first approach gained national attention and the program was featured in a compendium highlighting innovative education grants across the U.S. Over its six-year run, the center helped approximately 2,500 high school students graduate and demonstrated the power of combining academic support, community partnership, and creative problem-solving. “Building this program sparked my transition into supporting the development of online courses and programs and ultimately led to my role as Fordham’s first Director of Online Learning.”

Transforming Teachers into Creators

With a focus on supporting the development of online programs and courses at the graduate level, D’Agustino says he partnered with faculty to bring the entire undergraduate core online for Fordham’s School of Professional and Continuing Studies. “Much of my writing and publications center on effective instructional practices within technologically integrated environments, with a particular emphasis on issues of access and equity in education. I also continue to teach undergraduate students as an adjunct instructor at the university and am involved in a grant-funded initiative known as the Fordham Science and Technology Entry Program (STEP) which supports middle and high school students, primarily from The Bronx and surrounding areas.”

Through this program, students gain hands-on learning experiences that incorporate AI and digital media. Activities include using AI tools to design podcast logos, learning audio editing software, and exploring advanced language models for creative and academic purposes. “I’m particularly interested in how large language models can support both instructional design, such as lesson planning, and post-instructional tasks like assessments and discussions. I also think about inquiry-based learning, especially within online and technology-integrated spaces. The structure of many online platforms can hinder engagement and curiosity.”

“Online spaces transform teachers into creators of content and learners into consumers of that content,” continues D’Agustino. “While instructors have broad capabilities in most learning management systems, students often do not. This lack of agency can make it difficult to create true spaces for play, experimentation, and collaboration.” Reflecting on his earlier work with technology integration, D’Agustino recalls giving teachers laptops and training them on digital tools. “Looking back, there was an important assumption built into that model: that every instructor has a teaching methodology that is fully developed and coherent, but that is not always the case. This continues to shape how I think about educational technology and its real impact on pedagogy.”

For D’Agustino, the integration of AI into education isn’t just a technical shift, it’s a philosophical one. “Part of the work is to think about effective methodologies that would be able to integrate AI, but that conversation begins with deeper questions about practice: What are you doing? Why are you doing what you’re doing? How do you know you’re doing it? The emergence of AI in education is similar to the earlier wave of technology integration, when simply placing laptops in classrooms prompted educators to rethink their approaches. AI is another kind of provocation and can promote inquiry-based student-centeredness and constructivism. AI is forcing higher education to confront its core pedagogical assumptions and reconsider what meaningful teaching and learning should look like in a digitally evolving world.”

In examining how traditional educational structures are being carried over into online environments, D’Agustino says educators often replicate the constraints of physical classrooms in digital platforms and miss the opportunity to rethink what learning can look like in a virtual context. “Rather than leveraging the limitless and flexible nature of digital spaces, these environments are often designed with outdated metaphors that can limit innovation. There is tension between possibility and passivity, and we must step away from familiar paradigms and envision new models for teaching and learning in digital spaces that are more imaginative and student centered.”

Modernizing Teaching and Learning Methods

While reflecting on the cultural context in which teaching and learning take place, particularly in the United States, D’Agustino says our society tends to prioritize the present and future over the past. “While understanding foundational knowledge remains essential, our culture often lacks deep reflection and historical awareness. This dynamic adds complexity to how artificial intelligence is integrated into education. AI challenges educators to rethink not just how we teach, but what it means to teach and learn in a rapidly evolving world. Traditional models, where instructors hold the answers and students passively receive them under controlled conditions, may no longer be adequate.”

“Instead, AI invites a deeper interrogation of educational power dynamics, agency, and the assumptions embedded in our teaching methods. I recently saw two students working together in the university’s learning commons. One student was writing on a whiteboard while the other typed on a laptop. They had uploaded their teacher’s lecture slides into a large language model in order to do a side-by-side explanation. Rather than passively accepting the AI’s output, the students actively critiqued and discussed it, translating the explanation into their own language on the whiteboard. Once they reached a shared understanding, one of them transcribed the final version.”

What stood out to D’Agustino was the collaborative nature of the process—not just between the two students, but also involving the teacher (via the original slides) and the AI tool. “There were ‘four people’ in the room: the teacher, the two students, and the ‘artificial person,’” says D’Agustino. “When I asked the students about their process, they offered a striking insight. Since they already had the answer, they didn’t need to focus on finding it and could instead think deeply about why it was the answer. This truly shows the transformative potential of AI in education. AI is not a shortcut, but a tool that can empower learners to engage more critically and reflectively with content and each other. Knowledge is deeply contextual and cultural and is shaped by the norms of the environments in which it’s produced and valued. In schools, certain types of knowledge and ways of knowing are often privileged, along with specific methods of demonstrating understanding. AI offers a unique opportunity to challenge and reflect on these ingrained preferences.”

With AI capable of providing immediate answers, the urgency to recall and reproduce information diminishes. “This shift creates space for a deeper exploration of meaning: not just what the answer is, but why it matters and how it came to be,” says D’Agustino. “Such a change, however, isn’t just a matter of adopting new tools. It calls for a fundamental reimagining of how teaching and learning are structured, one that balances critical inquiry with the development of core competencies—ones that have often been marginalized in traditional models. As technology, including AI, takes on more of the cognitive load, there’s a growing need to lean into what is distinctly human: agency, creativity, collaboration, and other humanistic qualities. These competencies not only support deeper learning but also prepare students to navigate complex, dynamic environments where ethical reasoning and interpersonal skills will be increasingly vital.”

The post Reimagining Education In the Age of AI appeared first on NJEdge Inc.


OpenID

Three Shared Signals Final Specifications Approved

The OpenID Foundation membership has approved the following three specifications as OpenID Final Specifications: OpenID Shared Signals Framework: https://openid.net/specs/openid-sharedsignals-framework-1_0-final.html OpenID CAEP: https://openid.net/specs/openid-caep-1_0-final.html OpenID RISC: https://openid.net/specs/openid-risc-1_0-final.html A Final Specification provides intellectual property protections […]

The OpenID Foundation membership has approved the following three specifications as OpenID Final Specifications:

OpenID Shared Signals Framework: https://openid.net/specs/openid-sharedsignals-framework-1_0-final.html
OpenID CAEP: https://openid.net/specs/openid-caep-1_0-final.html
OpenID RISC: https://openid.net/specs/openid-risc-1_0-final.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. These three Final Specifications are the product of the OpenID Shared Signals Working Group.
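For readers new to these documents: Shared Signals events travel as Security Event Tokens (SETs, RFC 8417), JSON payloads that a transmitter signs and delivers to a receiver. The Python sketch below constructs an illustrative payload for a CAEP session-revoked event; the issuer, audience, and subject values are invented for the example and are not drawn from the specifications.

    import json
    import time

    # Illustrative payload of a Security Event Token (SET) carrying a CAEP
    # "session revoked" event. In production the transmitter signs this
    # payload as a JWT and delivers it to the receiver by push or poll.
    set_payload = {
        "iss": "https://transmitter.example.com",  # hypothetical transmitter
        "jti": "4d3f9c2a7b",                       # unique event identifier
        "iat": int(time.time()),
        "aud": "https://receiver.example.com",     # hypothetical receiver
        "events": {
            "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
                "subject": {"format": "email", "email": "user@example.com"},
                "event_timestamp": int(time.time()),
            }
        },
    }

    print(json.dumps(set_payload, indent=2))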

The voting results were:

Approve – 85 votes
Object – 1 vote
Abstain – 25 votes

Total votes: 111 (out of 433 members = 25.6% > 20% quorum requirement)

Marie Jordan – OpenID Foundation Secretary


About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net



The post Three Shared Signals Final Specifications Approved first appeared on OpenID Foundation.


Origin Trail

Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge…

Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge ecosystem driving the next generation of agentic science A vast amount of valuable clinical trial information exists in the world, but much of it is fragmented, hard to verify, and difficult to use, slowing medical research and patient care. This lack of connectivity slows research, complicates evidence synthesis […]
Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge ecosystem driving the next generation of agentic science

A vast amount of valuable clinical trial information exists in the world, but much of it is fragmented, hard to verify, and difficult to use, slowing medical research and patient care. This lack of connectivity also complicates evidence synthesis and limits the ability of healthcare professionals, patients, and other stakeholders to access clear, reliable information.

To address these challenges, Trace Labs, the core developers of OriginTrail, and Oxford PharmaGenesis have partnered on a groundbreaking initiative to globally connect and verify medical knowledge.

The challenge: Lost value in unconnected knowledge

Pharmaceutical companies and researchers generate a continuous flow of high-quality outputs — trial registrations, regulatory summaries, and peer-reviewed publications. Yet these resources are scattered across multiple platforms and formats, making them difficult to integrate with advanced AI systems.

As a result, vast amounts of valuable knowledge remain underused:

● Researchers struggle to locate relevant clinical studies and real-world evidence,

● Healthcare professionals lack quick access to verified, up-to-date information,

● Patients are left without clear, trustworthy resources to guide their decisions.

These inefficiencies keep knowledge fragmented and opaque because evidence is hard to find and harder to verify, resulting in slower progress and less transparency and trust, with real consequences for patients.

The vision: Building a connected and trusted health knowledge pool

Oxford PharmaGenesis — a global leader in the healthcare communications industry that collaborates with over 50 healthcare organizations worldwide, including eight of the world’s top ten pharmaceutical companies — and Trace Labs have partnered to create the world’s first structured, connected, and verifiable pool of clinical trial knowledge on the OriginTrail Decentralized Knowledge Graph (DKG).

The OriginTrail DKG merges blockchain technology with semantic, machine-readable knowledge structures, ensuring every contribution carries verifiable ownership, a transparent version history, and rich contextual links for both AI and human use. Oxford PharmaGenesis’ partnerships span pharmaceutical and biotech companies, as well as professional societies, patient groups, and academic institutions. It is also a co-founder, co-funder, and facilitator of Open Pharma with the mission to advance open science, transparency, and equity for pharma-sponsored research communications, placing it at the center of trusted knowledge exchange in healthcare.

This initiative will launch through an incentivized data-sharing program to create a domain-specific Decentralized Knowledge Graph (or “paranet”) within the OriginTrail DKG. Leading pharmaceutical organizations will be invited to join as trusted knowledge contributors, making their clinical information accessible to AI agents, research tools, and human users alike. The result: faster, more accurate discovery and reuse, empowering experts and the public with reliable, transparent, and actionable insights.

From pilot to scalable implementation

The collaboration begins with a pilot, which will link together publicly available information from multiple medicines produced by a global pharmaceutical company. It will create the blueprint for rapid expansion to additional contributors through a structured, incentivized data-sharing program that will form a domain-specific paranet within the OriginTrail DKG. This first phase will establish the core framework — secure, intuitive tools for contributing and exploring data, robust systems for verifying and connecting clinical knowledge, and safeguards to ensure every piece of information remains trusted and protected.

Once operational, the paranet will allow AI agents to both produce and consume verifiable knowledge directly from the OriginTrail DKG. In practice, this means transforming complex clinical data into plain-language summaries, in-depth scientific reports, visual explainers, and other formats tailored to audiences ranging from researchers and clinicians to patients and the public. As more organizations contribute, the paranet will grow to billions of structured, connected, and verifiable data points — a rich foundation with the potential to accelerate medical research, speed up discoveries, and equip healthcare professionals, patients, and innovators worldwide with better tools for informed decision-making.
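To make “structured, connected, and verifiable” concrete, the sketch below models a fictional clinical trial record as JSON-LD using the public schema.org vocabulary. This is only a rough illustration of the kind of machine-readable knowledge asset involved; the actual schemas used by the paranet are not described here, and every field value is invented.

    import json

    # A minimal JSON-LD knowledge asset describing a fictional clinical
    # trial. Linked, typed vocabulary like this is what lets AI agents and
    # humans alike query, connect, and verify knowledge at scale.
    knowledge_asset = {
        "@context": "https://schema.org",
        "@type": "MedicalTrial",
        "name": "Example Phase III trial of drug X in condition Y",
        "identifier": "NCT00000000",  # placeholder registry identifier
        "status": "Completed",
        "sponsor": {"@type": "Organization", "name": "Example Pharma Ltd"},
        "studySubject": {"@type": "MedicalCondition", "name": "Condition Y"},
        "url": "https://example.com/trials/NCT00000000",  # placeholder link
    }

    print(json.dumps(knowledge_asset, indent=2))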

Looking ahead: A path toward a trusted public knowledge ecosystem

This collaboration marks the start of an ambitious journey to build the world’s most extensive decentralized repository of trusted clinical trial knowledge on the OriginTrail DKG, stemming the tide of medical misinformation by providing a solid bedrock of trusted information that genAI tools can use. Driven jointly by Trace Labs, the core developers of OriginTrail, and Oxford PharmaGenesis, a global leader in scientific and medical consulting for the pharmaceutical and healthcare industries, the initiative will transform valuable clinical data into a structured, verifiable, and AI-ready resource. By incentivizing collaboration and uniting leading pharmaceutical organizations, the network will grow rapidly — unlocking knowledge that can accelerate research, fuel innovation, and ultimately improve lives worldwide.

About OriginTrail

OriginTrail is an ecosystem dedicated to making the global economy work sustainably by enabling a universe of AI-ready Knowledge Assets, allowing anyone to take part in trusted knowledge sharing. It leverages the open-source Decentralized Knowledge Graph that connects physical and digital worlds in a single connected reality, driving transparency and trust.

Advanced knowledge graph technology currently powers trillion-dollar companies like Google and Facebook. By reshaping it for Web3, the OriginTrail Decentralized Knowledge Graph provides a crucial fabric to link, verify, and value data on both physical and digital assets.

Learn more about OriginTrail: https://origintrail.io/.

About Oxford PharmaGenesis

Oxford PharmaGenesis is a HealthScience communications consultancy. They are the largest independent company in the healthcare communications sector. Founded in 1998, their award-winning organization comprises more than 500 talented people working from North America, Europe, and the Asia Pacific.

Oxford PharmaGenesis is connected by a strong company culture and a clear mission: to help clients accelerate the adoption of evidence-based innovations for patients in areas of unmet medical need.

Learn more about Oxford PharmaGenesis: https://www.pharmagenesis.com/.

Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge… was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


FIDO Alliance

Passkeys Are Not Broken. The Conversation About Them Often Is

Nishant Kaushik, Chief Technology Officer, FIDO Alliance Every few months, like clockwork, a talk or article appears claiming that new research has uncovered a “vulnerability” with passkeys.  This can understandably […]
Nishant Kaushik, Chief Technology Officer, FIDO Alliance

Every few months, like clockwork, a talk or article appears claiming that new research has uncovered a “vulnerability” with passkeys.  This can understandably raise concern for executives and product leaders looking to uplift their authentication frameworks. But these reports have a pattern: they highlight opportunities for exploitation in the environment where passkeys are used, not any vulnerability in passkeys themselves.

Passkeys are FIDO authentication credentials that leverage public key cryptography. The authentication protocol relies on the user having control of their private key, which is generated on the user’s device (their smartphone, their FIDO Security Key, etc.) and is never shared with the service they are authenticating to (all the service receives and saves is the corresponding public key). That design makes passkeys inherently resistant to phishing, credential stuffing, and large-scale data breaches. Breaking the security model of passkeys would require stealing the private key itself, something cryptographically and practically infeasible without compromising the device in some manner.
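To make that model concrete, here is a minimal Python sketch of the underlying challenge-response principle, using the third-party cryptography package. It is a deliberate simplification: real passkeys implement the full FIDO2/WebAuthn protocol, with origin binding, user verification, and attestation, not this bare signature exchange.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the key pair is generated on the user's device.
    # The service receives and stores only the public key, so a breach
    # of the service's database leaks nothing a phisher can replay.
    device_private_key = Ed25519PrivateKey.generate()
    server_public_key = device_private_key.public_key()

    # Authentication: the service sends a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it; the private key never leaves the device...
    signature = device_private_key.sign(challenge)

    # ...and the service verifies the signature with the stored public key.
    try:
        server_public_key.verify(signature, challenge)
        print("Authenticated: holder of the private key confirmed")
    except InvalidSignature:
        print("Rejected")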

Where the “Breaks” Actually Happen

When researchers announce they’ve “broken passkeys,” what they usually mean is that they’ve compromised something else in the operational environment:

Browser vulnerabilities that let malicious extensions hijack sessions or impact user behavior.
Device compromises where malware takes control of the endpoint.
Application weaknesses in how the authentication flow is integrated.

To be clear, these are real risks, but these are risks for any authentication solution (in addition to other secure tools such as encrypted messaging apps and VPNs). They are not flaws in passkeys themselves. Rather, they are examples of broader environmental compromise which can be mitigated with well-known security controls and policies that IT teams have been deploying for years.

Do Not Confuse Headlines with Reality: Passkeys Work as Intended

No reports have found vulnerabilities in the cryptography or the technical standards underpinning passkeys. What’s being demonstrated by researchers are scenarios where, if the user’s environment is already compromised, attackers may be able to misuse otherwise secure credentials or circumvent the secure authentication process. That’s a meaningful security discussion, and a good reminder that while passkeys are the gold standard for secure authentication, they don’t eliminate the need to have a comprehensive security program. 

Our Commitment to Security and Research

The FIDO Alliance is deeply committed to advancing security through ongoing research, rigorous testing, and collaboration with our members and the broader security community. Our members are actively exploring the impact of emerging technologies like post-quantum cryptography, and emerging threats like deepfakes. We also welcome engagement with security researchers who approach their work responsibly, as constructive collaboration helps us strengthen our specifications, certification programs, and implementations. Sensationalist headlines may help a few to market their products or services, but the real win for strong, phishing-resistant authentication is when we combine forward-looking research with open, responsible dialogue. That’s at the heart of the Alliance’s ethos.

The Bottom Line

For anyone responsible for product, security, or compliance, here’s what this means when it comes to adopting passkeys:

Stay focused on fundamentals: Passkeys eliminate entire classes of attacks (phishing, credential theft, reuse) that drive the majority of breaches today.
Adopt thoughtfully: Pay attention to the integration and rollout plans, following guidance and best practices with special attention to fallback models.
Pair with environmental protections: Continuing to strengthen your security program remains essential, especially focusing on strong endpoint security, browser governance, and app hardening.
Lean on certification: Certified implementations ensure consistency and reduce integration risk across platforms and devices.

Passkeys represent one of the most significant advances in digital identity security in decades, and they work as intended. Headlines suggesting otherwise often sensationalize research that demonstrates something we’ve known forever: no system is immune if the environment it runs in is compromised. Passkeys remain the best path forward to reducing fraud, lowering breach risk, and building customer trust in a digital-first world. 

Thursday, 28. August 2025

Project VRM

On Being Agentic

I wrote a chapter on Agency in The Intention Economy because back then (2012) the word mostly meant an insurance or advertising business. The earlier meaning, derived from the Latin agere, meaning “to do,” had mostly been forgotten. Now agency is everywhere, and is given fresh meaning with the adjective agentic. We can thank AI […]

This appears atop a DuckDuckGo search. A few years ago, numbers 1 and 2 would have been down next to number 6.

I wrote a chapter on Agency in The Intention Economy because back then (2012) the word mostly meant an insurance or advertising business. The earlier meaning, derived from the Latin agere, meaning “to do,” had mostly been forgotten.

Now agency is everywhere, and is given fresh meaning with the adjective agentic.

We can thank AI for that. The big craze now is to have AI agents for everything, and to make all kinds of stuff “agentic,” using AI.

Including each of us. We should all maximize our agency with our own personal AI.

With that in mind, and thinking toward upcoming conferences on AI (and our own VRM Day, this coming October 19th), I just added this section to the VRM Development Work page in our wiki:

Personal AI

Balnce.ai † “Your personal AI, your loyal agents and a network that makes your data work for you.”

Base.org “Base is built to empower builders, creators, and people everywhere to build apps, grow businesses, create what they love, and earn onchain.”

Decentralized AI Agent Alliance “…offers a compelling alternative, giving individuals sovereignty, including ownership of their identity and data.”

GPTbuddy “Human in the loop AI” ([1] @GPTbuddy) is in development by FractalNetworks.

Kwaai “a volunteer-based AI research and development lab focused on democratizing artificial intelligence by building open source Personal AI.” Also, KwaaiNet “AI running distributed on a P2P fabric,” now (July 2025) with Verida “Create and deploy personalized AI agents with secure data connectors, custom knowledge bases, and configurable inference endpoints.”

NANDA: The Internet of AI Agents “Pioneering the Future of Agentic Web.”

The AI Alliance “building and advancing open source AI agents, data, models, evaluation, safety, applications and advocacy to ensure everyone can benefit.”

Please add more, or make corrections on what’s there. If you don’t have editing privileges, just write to me and I’ll make the changes. Thanks!


Oasis Open

Invitation to comment on UBL v2.5 CSD01

OASIS and the UBL TC are pleased to announce that UBL v2.5 CSD01 is now available for public review and comment.  The UBL TC facilitates interoperability in business data exchange by defining a semantic library and syntax bindings of business documents. The documents and all related files are available here: Universal Business Language Version 2.5 […] The post Invitation to comment on UBL v

Public Review ends - September 28th

OASIS and the UBL TC are pleased to announce that UBL v2.5 CSD01 is now available for public review and comment. 

The UBL TC facilitates interoperability in business data exchange by defining a semantic library and syntax bindings of business documents.

The documents and all related files are available here:

Universal Business Language Version 2.5

Committee Specification Draft 01

20 August 2025

XML: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.xml (Authoritative)

HTML: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.html

PDF: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.pdf

Additional Artifacts

Code lists for constraint validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/cl/
Context/value Association files for constraint validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/cva/
Document models of information bundles: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/mod/
Default validation test environment: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/val/
XML examples: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xml/
Annotated XSD schemas: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xsd/
Runtime XSD schemas: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xsdrt/
Endorsed XSD schemas for forward validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/endorsed/xsd/

The ZIP containing the complete files of this release is found in the directory:

https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.zip
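For reviewers who want to exercise the materials, a candidate document can be checked against the runtime schemas with standard XML tooling. The Python sketch below uses lxml; the file paths are assumptions about the layout inside the release ZIP, so adjust them to the actual artifact names.

    from lxml import etree

    # Paths are illustrative; point them at files unpacked from the ZIP.
    SCHEMA_PATH = "xsdrt/maindoc/UBL-Invoice-2.5.xsd"  # assumed schema name
    DOCUMENT_PATH = "xml/UBL-Invoice-2.5-Example.xml"  # assumed example file

    schema = etree.XMLSchema(etree.parse(SCHEMA_PATH))
    document = etree.parse(DOCUMENT_PATH)

    if schema.validate(document):
        print("Document is valid against the schema")
    else:
        for error in schema.error_log:
            print(f"line {error.line}: {error.message}")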

How to Provide Feedback

OASIS and the UBL TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

The public review is now open and ends September 28, 2025 at 23:59 UTC.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility located here.

Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy, applicable especially to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the UBL TC can be found at the TC’s public home page located here.

The post Invitation to comment on UBL v2.5 CSD01 appeared first on OASIS Open.

Wednesday, 27. August 2025

Digital ID for Canadians

DIACC Submission to the Government of Canada’s Consultation on a Possible Canada‑EU Digital Trade Agreement

About DIACC The Digital ID & Authentication Council of Canada is a public–private coalition working to advance a trusted Canadian digital economy through open, industry-driven frameworks for…
About DIACC

The Digital ID & Authentication Council of Canada is a public–private coalition working to advance a trusted Canadian digital economy through open, industry-driven frameworks for identity, authentication, and trust services. DIACC stewards the Pan‑Canadian Trust Framework™ (PCTF). This consensus-developed, living industry standards-based framework aligns with national priorities and global norms to support interoperability, privacy, and digital trade readiness across sectors and borders.

Addressing the Consultation Scope

The consultation invites input on digital trade topics such as:

Digital identities, trust services, and authentication
Electronic transactions and e‑signatures
Cross‑border data flows and localization requirements
Consumer protection, fraud prevention, and cybersecurity
Digital inclusion and participation of MSMEs
Artificial intelligence and emerging technologies
Standards, interoperability, and regulatory alignment

This submission addresses each of these priority areas with a clear emphasis on the role of the PCTF and industry‑led standards in advancing Canada’s trade, regulatory predictability, and inclusion objectives.

Executive Summary of Recommendations

Mutual Recognition of Industry Standardized Trust Frameworks
Support interoperability between the PCTF and the EU’s eIDAS 2.0 framework to facilitate secure digital trade in regulated sectors (e.g., housing, finance, energy), while maintaining Canada’s ability to define and govern its trust standards to reflect domestic policy and legal norms.
Economic Growth in Key Sectors
Enable standards-based digital trust infrastructure to promote inclusion, reduce friction, accelerate and unlock innovation in e-commerce, housing finance, energy trading, public services, and cross-border logistics.

Recognition of Digital Sovereignty
Acknowledge that frameworks like the PCTF deliver practical, scalable solutions that complement formal national and international standards, strengthening the country’s digital sovereignty by ensuring homegrown, democratically governed solutions play a core role.

Secure and Privacy‑Respecting Cross‑Border Data Flows
Align privacy and cybersecurity standards to preserve both trust and efficiency, enabling secure cross-border data sharing.

Cybersecurity, Fraud Prevention & Consumer Protection
Leverage shared industry- and government-level practices, including fraud mitigation, identity-proofing, and privacy‑by‑design, to protect consumers in cross-border digital transactions.

Digital Inclusion and MSME Participation
Ensure the agreement empowers micro, small and medium enterprises, especially in rural, Indigenous and remote communities, to participate securely in digital trade.

Industry‑Led Standards in Canada’s Digital Trade Toolkit

Industry frameworks, such as the PCTF, serve as deployable tools within a broader toolkit that includes national and international standards. While ISO or eIDAS establish global principles, industry-led standards:

Translate principles into workable operational guardrails (e.g. technical protocols, risk models).
Accelerate adoption through flexible, market-driven updates.
Preserve national sovereignty by ensuring Canadian-made governance structures retain accountability and transparency.
Bridge jurisdictional or regulatory gaps.
Prepare the ground for future alignment or recognition.

In the Canada-EU context, the PCTF has the scale, backing, and governance necessary to serve as a core commercial interoperability engine alongside formal standards.

Alignment with Consultation Topics

Digital Identities, Trust Services & Electronic Transactions
Support mutual recognition of digital credentials and trust service providers under eIDAS 2.0 and PCTF.
Promote technology-neutral, cross-recognized approaches to e‑signatures and authentication.
Enable market actors to use Canadian or EU‑based trust providers under harmonized rules, fostering innovation and consumer confidence.

Cross‑Border Data Flows & Localization
Advocate for privacy-respecting data mobility in sectors like real estate, energy, finance, and logistics.
Oppose unnecessary data localization requirements that add cost without commensurate privacy or security gains.
Ensure that Canadian data flows operate in accordance with Canadian laws and values, even when exchanged across borders.
Encourage alignment of privacy and cybersecurity approaches that preserve trust and legal clarity.

Cybersecurity, Fraud Prevention & Consumer Protection
Recommend the adoption of shared fraud scoring, risk assessment, and identity verification frameworks.
Embed privacy‑by‑design principles into digital transactions and identity services.
Coordinate cross-jurisdictional responses to cyber incidents, identity theft, and digital scams.

Standards & Interoperability
Promote adoption of open international standards for identity, authentication, data portability, and interoperability.
Explicitly include industry‑driven frameworks, such as PCTF, as recognized tools for implementation.
Ensure Canada retains the ability to define, adapt, and govern its digital identity and trust frameworks independently, in alignment with national law, values, and economic strategy.
Leverage PCTF to reduce regulatory fragmentation and increase interoperability across sectors.

Digital Inclusion & MSME Access
Ensure MSMEs can access affordable, certified trust services for cross-border commerce.
Support inclusive access in underserved communities, including rural and Indigenous, via interoperable service models and affordable trust credentials.

Artificial Intelligence & Responsible Data Use
Align responsible AI principles for trust services and risk modelling, emphasizing transparency, fairness, and accountability.
Apply these principles to AI-enabled identity verification tools used in cross-border trade and digital wallets.

Sectoral Impact Examples

Sector | Benefit from Mutual Recognition of PCTF & eIDAS
Housing | Streamlined mortgage and property transactions; AML compliance with reduced friction
Energy & Resources | Certified credentials for emissions tracking, trade, and grid interoperability
Finance & Insurance | Reduced friction in cross-border lending, payments, and claims processing
Public Safety & Health | Trusted sharing of credentials for emergency response and cross-border healthcare

Conclusion

Canada has a unique opportunity, through this consultation, to shape a forward-looking Digital Trade Agreement with the EU, one that prioritizes trust, privacy, interoperability, digital sovereignty, and inclusion.

By integrating Canadian-governed, industry-led standard frameworks, such as the PCTF, into this toolkit alongside national and international norms, Canada can lead in building a scalable, resilient, and trusted digital trade architecture, without compromising its ability to govern its digital future.

DIACC welcomes further collaboration to refine the role of PCTF in Canada’s digital trade strategy and ensure that Canadian businesses, especially MSMEs, can participate confidently in secure, cross-border digital commerce.


DIDAS

The Big Comparison: From Flop to Personal Digital Fortress

The E-ID proposal of 2019 was a classic own goal: privatized, centralized, and with privacy on the back burner. The result in March 2021: a resounding 64% No. The Swiss people didn’t want a “private flavour” national digital identity. Rightly so. Now, with the new BGEID (2025), Switzerland has made a serious course correction. Instead ...

The E-ID proposal of 2019 was a classic own goal: privatized, centralized, and with privacy on the back burner. The result in March 2021: a resounding 64% No. The Swiss people didn’t want a “private flavour” national digital identity. Rightly so.

Now, with the new BGEID (2025), Switzerland has made a serious course correction. Instead of half-baked privatization, there is now full state authority, privacy-by-design, and open source. The E-ID resides only on the smartphone in the user’s wallet – not in some cloud run by whoever. Age verification? No longer “name + date of birth correlated with a profile,” but simply “over 18.”
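A toy Python sketch of the difference between the two disclosure styles is shown below. It is purely illustrative: the real wallet uses standardized credential formats and cryptographic selective disclosure, not a plain function, but the data-minimization principle is the same.

    from datetime import date

    # Old style: the verifier receives identifying data and derives the answer.
    full_disclosure = {"name": "Anna Example", "birth_date": date(1990, 5, 1)}

    # New style: the wallet discloses only the predicate the verifier needs.
    def over_18_predicate(birth_date: date, today: date) -> dict:
        had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
        age = today.year - birth_date.year - (0 if had_birthday else 1)
        return {"over_18": age >= 18}  # no name or birth date is revealed

    print(over_18_predicate(full_disclosure["birth_date"], date.today()))
    # -> {'over_18': True}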

Even more exciting: this is no longer just about an E-ID, but about a trust infrastructure. Driver’s licenses, diplomas, tickets – all digital, tamper-proof, in your own wallet. A federal network-of-networks in which municipalities, cantons, universities, companies, or associations can issue their own credentials.

In short: Switzerland is finally translating its political DNA motto “diversity through federalism” into the digital realm. And this was not cooked up in a backroom, but through a participatory process with NGOs, business, academia, and civil society. Democratically legitimized, technically modern, internationally interoperable.

But here’s the real point: the E-ID is only the key – the lock and the doors are built by the ecosystem. A digital identity alone is of little use if it cannot be applied anywhere. Everyday value emerges only when government, business, and society actively use this infrastructure – with credentials we still carry around on paper today: from residence certificates to debt enforcement extracts, from bank guarantees to e-prescriptions and medical reports. Only then does SWIYU become a universal tool for secure, privacy-preserving, and efficient processes.

The ecosystem does more than create convenience – it strengthens trust: less fraud, less bureaucracy, more automation, genuine freedom of choice, and strict data minimization. That’s the difference compared to the old proposal – and compared to global login solutions offered by tech giants. Without an ecosystem, the E-ID remains a key without doors. With a broadly established ecosystem, it becomes a digital public service, open to innovation and anchored in Swiss values.

For those who want to dig deeper: we’ve compared the 2019 proposal and the new 2024 BGEID in detail (to the best of our knowledge). The document shows in black and white why the new E-ID is not a reheated version, but a true paradigm shift – from a private product to a state trust infrastructure.

The bottom line?

For citizens: more sovereignty, stronger privacy, less “Big Brother” feeling.

For the economy: legal certainty, less dependency, more innovation.

For Switzerland: digital infrastructure as a public service – as essential as roads, bridges, water, and electricity.

Resources:

Read/Download our Comparison: BGEID 2021 vs. 2025



OpenID

Notice of Vote to Approve Proposed FAPI 2.0 Message Signing Final Specification

The two-week voting period will be between Wednesday, September 10, 2025 and Wednesday, September 24, 2025, once the 60 day review of the specification has been completed. The FAPI working group page:  https://openid.net/wg/fapi. If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining […]

The two-week voting period will be between Wednesday, September 10, 2025 and Wednesday, September 24, 2025, once the 60 day review of the specification has been completed.

The FAPI working group page:  https://openid.net/wg/fapi. If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/benefits-members/.

The vote will be conducted at https://openid.net/foundation/members/polls/379 

Marie Jordan – OpenID Foundation Secretary


About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Notice of Vote to Approve Proposed FAPI 2.0 Message Signing Final Specification first appeared on OpenID Foundation.


Next Level Supply Chain Podcast with GS1

What Retailers Can Learn from Carter’s RFID Journey

Carter’s just pulled off what many retailers thought was impossible. In only three months, the iconic children’s apparel brand rolled out RFID technology across 700 stores, improving accuracy on every item and making life easier for both store teams and customers. In this episode, Gina Maddaloni and Anna Marie Blackburn from Carter’s join hosts Reid Jackson and Liz Sertl to discuss […]

Carter’s just pulled off what many retailers thought was impossible. 

In only three months, the iconic children’s apparel brand rolled out RFID technology across 700 stores, improving accuracy on every item and making life easier for both store teams and customers. In this episode, Gina Maddaloni and Anna Marie Blackburn from Carter’s join hosts Reid Jackson and Liz Sertl to discuss how RFID became central to Carter’s retail operations, what it took to win buy-in across the company, and how it is improving both inventory management and customer experience.

You’ll also hear how Carter’s uses RFID to cut payroll costs for year-end inventory by 50 percent, why the rollout became a recruiting tool for store teams, and where the company sees new opportunities to extend RFID into supply chain operations.

In this episode, you’ll learn:

How Carter’s achieved one of the fastest RFID deployments in retail

Why RFID is no longer “too complex” or “too expensive”

What’s next as Carter’s expands RFID use into its supply chain operations

Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:29) Anna Marie and Gina’s backgrounds
(03:52) What RFID technology means for retail
(06:47) The process of rolling out RFID across Carter’s stores
(13:21) RFID’s impact on Carter’s operational efficiency
(17:49) RFID as a recruiting tool for store teams
(18:54) Asset protection benefits and peace of mind
(19:34) Expanding RFID into DC operations
(21:35) What’s next: Carter’s move toward serialization
(23:01) Advice for companies starting their RFID journey
(24:02) Busting RFID myths: cost, complexity, and adoption
(26:29) Favorite tech beyond RFID
(29:22) What Gina and Anna Marie want to learn next

Connect with GS1 US:
Our website - www.gs1us.org
GS1 US on LinkedIn

Connect with the guests:
Gina Maddaloni on LinkedIn
Anna Marie Blackburn on LinkedIn
Check out Carter’s

Learn more about the GS1 US Solution Partner Program: https://www.gs1us.org/industries-and-insights/partners

 

Tuesday, 26. August 2025

Hyperledger Foundation

How EBSI, LACChain, and RBB are building trust into the core of government services with Besu, an LF Decentralized Trust project

Read the full case study here. Around the world, governments are deploying blockchain as a way to restore public trust and deliver citizen services more effectively. From EBSI in Europe, to LACChain in Latin America, to Rede Blockchain Brasil (RBB), a common decision stands out. They all chose Ethereum—specifically, Besu, an open-source project hosted by LF Decentralized Trust—as the backbone of their public sector blockchain networks […]

Read the full case study here.

Around the world, governments are deploying blockchain as a way to restore public trust and deliver citizen services more effectively. From EBSI in Europe, to LACChain in Latin America, to Rede Blockchain Brasil (RBB), a common decision stands out. They all chose Ethereum—specifically, Besu, an open-source project hosted by LF Decentralized Trust—as the backbone of their public sector blockchain networks.

Why Ethereum? The choice was not accidental, and this multi-regional case study highlights some of the reasons why. Drivers include building on proven and familiar infrastructure with Ethereum, the world’s most widely adopted smart contract platform; flexibility and long-term optionality for public and permissioned use; vendor neutrality; and compatibility with global standards.



Energy Web

Energy Web Foundation Announces Technology Partnership with BlockDeep Labs

Energy Web Foundation (EWF), the nonprofit accelerating the energy transition with open-source, decentralized technologies, has announced a new partnership with BlockDeep Labs, a leading blockchain engineering firm specializing in Polkadot and Substrate. Through this collaboration, BlockDeep Labs will support the development of new features on the Energy Web X (EWX) parachain, with an initial focus on liquid staking, multi-token support, and decentralization analysis […]
Energy Web Foundation (EWF), the nonprofit accelerating the energy transition with open-source, decentralized technologies, has announced a new partnership with BlockDeep Labs, a leading blockchain engineering firm specializing in Polkadot and Substrate.

Through this collaboration, BlockDeep Labs will support the development of new features on the Energy Web X (EWX) parachain, with an initial focus on liquid staking, multi-token support, and decentralization analysis.

Both organizations share a strong commitment to open-source innovation and community collaboration, aiming to deliver impactful solutions that can accelerate the digital energy transition.

“We’re excited to partner with Energy Web Foundation on advancing the Energy Web X chain and shaping its future. At BlockDeep Labs, our mission is to lower barriers for users by building robust and efficient Web3 solutions. Collaborating with EWF gives us the opportunity to apply that expertise in a sector where trust, scalability, and interoperability are critical. Together, we aim to deliver infrastructure that not only strengthens the Energy Web ecosystem but also showcases how Polkadot SDK technology can drive real-world impact in energy and beyond.”— Gautam Dhameja, Founder, BlockDeep Labs

“Partnering with BlockDeep Labs brings deep Polkadot expertise to the Energy Web ecosystem at a critical moment in our roadmap. Their support on the Energy Web X parachain will accelerate key features like liquid staking and multi-token functionality, while ensuring our chain remains secure, scalable, and open-source. Together, we are building the digital infrastructure needed to enable the energy transition at global scale.” — Mani Hagh Sefat, CTO, Energy Web
About Energy Web

Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

About BlockDeep Labs

BlockDeep Labs is a Berlin-based blockchain engineering company with deep expertise in Polkadot SDK, tooling, and blockchain innovation.

Energy Web Foundation Announces Technology Partnership with BlockDeep Labs was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 25. August 2025

DIF Blog

Scaling AI DIFferently: Translating Promise into Value

In the second of a series of guest posts by DIF Ambassador Misha Deville, Misha explores how decentralized identity provides the missing trust infrastructure needed for AI systems to scale delegation, personalization, and content authenticity. Read the first post in the series, "The Missing Growth Lever." Everyone is talking about the promises of AI […]

In the second of a series of guest posts by DIF Ambassador Misha Deville, Misha explores how decentralized identity provides the missing trust infrastructure needed for AI systems to scale delegation, personalization, and content authenticity. Read the first post in the series, "The Missing Growth Lever."

Everyone is talking about the promises of AI. Faster decisions, tailored experiences, intelligent agents. But delivering on that promise requires more than powerful models. It requires trusted infrastructure.

AI systems create value through delegation, personalisation, and decision-making. Yet these capabilities can’t scale consentfully or securely without the ability to prove who the system is working for (context), what it’s been authorised to do (consent), or whether its outputs can be trusted (credibility). Decentralised identity models and verifiable credentials can provide the missing infrastructure to ensure AI systems can deliver on their promises.

Agentic Delegation at Scale

“To scale humans, we deploy agents. But to scale agents, we must manage them like humans.”[1] - Director Product Management, Writer.

Agentic AI is no longer a future proposition, it’s a present bottleneck. Organisations are deploying more autonomous agents, but they’re hitting “scaling cliffs” as agents multiply faster than their supervision and governance systems can manage them. Unlike APIs or scripts, agents are semi-autonomous systems with memory, tool access, and significant sensitive data exposure. Without clear scopes, audit trails, or authority checks, most delegation turns into vast liability surfaces, and a system of patchwork permissions quickly becomes unmanageable.

As Huang et al. write, “Failure to address the unique identity challenges posed by AI agents operating in Multi-Agent Systems (MAS) could lead to catastrophic security breaches, loss of accountability, and erosion of trust in these powerful technologies”[2].

Most AI systems today are still rooted in prediction, but this is rapidly shifting toward delegated action. The agentic AI market has already reached $13.8 billion in 2025, and as agents start taking action, the question becomes: who is acting, on whose behalf, and under what authority?

‘Authenticated delegation’ enables third parties to verify that:

“(a) the interacting entity is an AI agent, (b) that the AI agent is acting on behalf of a specific human user, whether pseudonymised or identifiably known, and (c) that the AI agent has been granted the necessary permissions to perform specific actions”.[3]

This sounds simple, but delegated ‘trust’ doesn’t end with a single permission. Especially in asynchronous flows or agent-to-agent communication, it must be enforced dynamically over time.

Most systems today use token-based models, like OAuth, that assume trust within a known and bounded hierarchy is established when a token is issued and remains meaningful wherever the token is invoked. Once the token is delivered, however, there is no enforcement that the agent will continue to act within its authorised scope over time, or within the domain where the token is meaningful. A delegation-first model like ZTAuth, or other authorization languages based on object capabilities (e.g. ZCaps, UCAN, Hats Protocol), adds runtime checks, making sure agents are still trusted, acting on behalf of the right user, and following the right rules every time they take an action, rather than only until a token expires.
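A minimal Python sketch of that difference, deliberately protocol-agnostic (this is not ZCaps, UCAN, Hats Protocol, or ZTAuth): the capability below carries an explicit scope and is re-checked against a revocation set and audit-logged on every invocation, instead of being trusted until an expiry time.

    from dataclasses import dataclass

    # Toy object-capability model, for illustration only.
    @dataclass(frozen=True)
    class Capability:
        holder: str                 # the agent
        on_behalf_of: str           # the human principal
        allowed_actions: frozenset  # explicitly delegated scope

    revoked: set = set()            # consulted at runtime, not at issuance
    audit_log: list = []

    def invoke(cap: Capability, action: str) -> bool:
        # Runtime enforcement: every action is checked, logged, attributable.
        permitted = cap not in revoked and action in cap.allowed_actions
        audit_log.append((cap.holder, cap.on_behalf_of, action, permitted))
        return permitted

    cap = Capability("travel-agent-7", "alice", frozenset({"search_flights"}))
    print(invoke(cap, "search_flights"))  # True: within delegated scope
    print(invoke(cap, "book_flight"))     # False: outside delegated scope
    revoked.add(cap)
    print(invoke(cap, "search_flights"))  # False: revoked mid-session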

Similar ideas are emerging in the MIT Computer Science and AI Lab’s framework for ‘Identity-Verified Autonomous Agents’, which introduces cryptographic proofs of authority and full auditability into multi-agent workflows[2]. Meanwhile, projects like the A2A protocol are building agent registries to support discovery, entitlements, and secure agent-to-agent communication across trust boundaries and OAuth-style enterprise hierarchies[4].

At the standards level, DIF’s Trusted AI Agents Working Group is building open specifications to support these use cases. Their work spans data models, object capability frameworks, interoperability libraries and runtime trust enforcement patterns. This is about more than securing agent-to-agent interactions. It’s about enabling a full lifecycle of trust, from credentialed instantiation of agents to delegated (logged and fully-auditable) execution, all the way through to forensic audit and remediation in the worst case scenario.

Hyper-personalisation that works

AI-driven hyper-personalisation promises to unlock entirely new value in digital experiences. McKinsey reports show meaningful increases in customer engagement and spend when personalisation is done right[5]. But it can just as easily backfire. A 2019 Gartner study found that 38% of users will walk away from a brand if personalisation feels “creepy”[6], and recent research with Gen Z confirms the duality that personalisation is welcome up until it crosses the line[7].

That line is defined by context and consent. When AI systems infer personal data from web-scraped profiles, browser fingerprinting, adtech data, and other opaque signals, they significantly undermine trust and user agency. When they employ algorithmic transparency, ethical frameworks, and user-authorised data inputs, they mitigate the risks of conscious and unconscious mistrust and backlash[8].

In this context, verifiable credentials offer a solution: they give AI systems structured, consent-based attributes that users explicitly approve. This shifts personalisation away from prediction and toward permission. It reduces the risk of misfires and irrelevant outputs, and increases both system reliability and user trust.
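
As a concrete (and deliberately simplified) illustration, the Python sketch below shows selective disclosure: a personalisation engine receives only the attributes a user has explicitly consented to share, regardless of what it requests. The attribute names and consent scopes are invented for the example.

# User-approved consent scopes: only these attributes may ever be disclosed.
APPROVED_SCOPES = {"dietary_preference", "seat_preference"}

def disclose(credential_attrs, requested):
    # Selective disclosure: release the intersection of what was requested
    # and what the user explicitly approved; everything else stays private.
    return {k: v for k, v in credential_attrs.items()
            if k in requested and k in APPROVED_SCOPES}

traveller_credential = {
    "dietary_preference": "vegetarian",
    "seat_preference": "aisle",
    "date_of_birth": "1990-04-02",    # held in the credential, never approved for sharing
}

profile = disclose(traveller_credential, {"dietary_preference", "date_of_birth"})
print(profile)    # {'dietary_preference': 'vegetarian'} -- DOB withheld despite the request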

The travel industry is a clear example of the opportunity gap. Identity and preference checks occur at nearly every step, yet the ecosystem remains fragmented[9]. Travellers routinely overshare sensitive data multiple times, with little visibility into where it’s stored or how it’s used. Providers, in turn, struggle to deliver seamless or personalised services without duplicating traveller effort or violating privacy regulations.

That’s starting to change. Initiatives like IATA’s One ID aim to eliminate repetitive ID checks using biometric-backed credentials, creating a more secure, contactless experience. Live pilots by SITA and Indicio, in partnership with Delta Airlines and the Government of Aruba, have also shown how digital travel credentials can streamline identity verification at check-in, boarding, and border control.

These foundational shifts pave the way for more advanced personalisation use cases. With credential infrastructure in place, providers can begin supporting traveller-owned profiles that store personal preferences, enable selective data sharing, and allow AI agents to act on a traveller’s behalf. The DIF Hospitality & Travel Working Group is developing schemas to support this, with traveller profiles that are dynamic, revocable, and built for interoperability. As Nick Price notes, when preferences are embedded in credentials and shared on the traveller’s terms, personalisation becomes possible while still preserving privacy and trust[10].

Decision-Making and Sense-Making in Synthetic Noise

Identity fraud isn’t new, but AI has supercharged its scale, speed, and sophistication. In 2025, 1 in 20 ID verification failures is already linked directly to deepfakes, while synthetic audio and video forgeries have increased 20% and 12% respectively in fraud attempts year-over-year[11]. National security agencies of the US, UK, Canada, and Australia have warned that the quality and pace of AI-generated forgeries “have reached unprecedented levels and may not be caught by traditional verification methods”[12].

Ironically, fraud detection is one of AI’s strongest use cases. But its success depends on the quality of input data. Risk models tend to rely on patterns from historical data to flag anomalies. If the data is synthetic, spoofed, or unverifiable, the model can learn the wrong patterns or miss the threat altogether. It’s a clear case of “attacker’s advantage,” since automated attacks are almost free to launch at brute-force scale. Worse, AI adversaries are improving at impersonation, so hallucinated and forged content is proliferating across search engines, media outlets, and public discourse, contaminating LLMs at the lowest level of training data.

“As AI agents grow increasingly adept at mimicking human behavior - crafting text, creating personas, and even replicating nuanced human interactions - it becomes harder to maintain digital environments genuinely inhabited by real people.”[2]

Detecting ‘what’s real’ at scale now requires cryptographic certainty. Verifiable credentials offer a solution to the ‘garbage in, garbage out’ problem by allowing systems to verify data attributes without exposing raw personal data. Content credentials, as standardised by C2PA, provide tamper-evident metadata that can trace authorship, modification history, and usage rights across files, namespaces, and industry associations. This helps both prevent fraud in high-stakes transactions and reduce the risk of model “pollution” by synthetic content[13].
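
The tamper-evidence idea is easy to see in miniature. The Python sketch below binds provenance metadata to a hash of the asset and signs the bundle, so any change to the content or the metadata breaks verification. Real C2PA manifests are far richer and use public-key signatures; the bare HMAC and field names here are stand-ins for brevity.

import hashlib, hmac, json

ISSUER_KEY = b"demo-signing-key"    # stand-in for the issuer's signing key

def make_manifest(asset, author):
    # Bind provenance metadata to the asset via its hash, then sign the bundle.
    manifest = {"author": author, "asset_sha256": hashlib.sha256(asset).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(asset, manifest):
    # Recompute the asset hash and the signature; any edit breaks one of them.
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and body["asset_sha256"] == hashlib.sha256(asset).hexdigest())

photo = b"...image bytes..."
m = make_manifest(photo, author="did:example:newsroom")
assert verify(photo, m)
assert not verify(photo + b"edited", m)    # modified content no longer verifies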

These mechanisms are quickly moving from optional to operational. California’s SB 942, set to take effect in 2026, will require that all AI-generated or AI-altered content be disclosed and tied to an immutable record of provenance. As Erik Passoja writes, “Compliance is just the on-ramp… the real destination is an authenticated digital ecosystem.”[14]. Infrastructure built on signed manifests, cryptographic consent, and watermark durability won’t just prevent fraud, it will underpin new forms of value, from automated licensing to portable and just-in-time reputation.

In all of these cases, a reliable identity layer isn’t a ‘nice to have’, but a prerequisite for trust, adoption, and real-world value. Decentralised identity and verifiable credentials provide the infrastructural foundation that lets AI scale and deliver new opportunities. DIF’s working groups are tackling these challenges head-on, from authenticated AI agents to verifiable travel profiles and content authenticity.

The next article in this series will dive into the work of the Content Authenticity Initiative and DIF’s Creator Assertions Working Group, exploring how open standards are enabling AI to be used confidently in media, preserving trust, provenance, and creative integrity.

Find out more about DIF’s working groups here:

Creator Assertions
Hospitality and Travel
DIF Labs

If you’d like to stay updated on the launch of DIF’s Trusted AI Agents Working Group, reach out to contact@identity.foundation.

[1] M. Shetrit (2025). “Supervising the synthetic workforce: Observability for AI agents requires managers, not metrics”. Writer. ↩︎
[2] Huang et al. (2025). “A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control”. arXiv. ↩︎
[3] South et al. (2025). “Authenticated Delegation and Authorized AI Agents”. arXiv. ↩︎
[4] A2A Project. “Agent Registry - Proposal”. GitHub. ↩︎
[5] McKinsey & Company (2025). “Unlocking the next frontier of personalised marketing”. ↩︎
[6] Gartner (2019). “Gartner Survey Shows Brands Risk Losing 38 Percent of Customers Because of Poor Marketing Personalization Efforts”. ↩︎
[7] Peter et al. (2025). “Gen AI – Gen Z: understanding Gen Z’s emotional responses and brand experiences with Gen AI-driven, hyper-personalized advertising”. Frontiers in Communication. ↩︎
[8] Park, K. & Yoon, H. (2025). “AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI”. Nature. ↩︎
[9] DIF (2025). “DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group”. ↩︎
[10] Dock (2025). “How Digital ID is Reshaping the Travel Industry”. ↩︎
[11] Bondar, Ira (2025). “Real-time deepfake fraud in 2025: Fighting back against AI-driven scams”. Veriff. ↩︎
[12] NSA et al. (2025). “Content Credentials: Strengthening Multimedia Integrity in the Generative AI Era”. ↩︎
[13] Adobe (2025). “Content Credentials”. ↩︎
[14] Passoja, Erik (2025). “From Compliance to Prosperity”. LinkedIn. ↩︎

Wednesday, 20. August 2025

OpenID

OIDF receives security analysis of OpenID for Verifiable Presentations

The OpenID Foundation is pleased to announce the completion of a comprehensive security analysis of OpenID for Verifiable Presentations (OpenID4VP) when used over the Digital Credentials API (DC API). This represents the first security analysis of OpenID4VP and DC API together, which allowed potential security vulnerabilities to be detected and mitigated before the spec went to final in July 2025.

Conducted by researchers from the Institute of Information Security at the University of Stuttgart using the proven Web Infrastructure Model (WIM) methodology, this analysis builds on a track record of rigorous, mathematical security modelling of OIDF protocols and a two-way exchange between the researchers and the OIDF working groups to ensure the protocols deliver the expected security properties. The WIM methodology has also been used to analyse other OpenID Foundation standards, including OpenID Connect, FAPI 1.0, and FAPI 2.0, as well as OAuth 2.0.

This approach has previously uncovered potential attack vectors, allowing vulnerabilities to be mitigated proactively, as in a recent responsible disclosure that impacted several specification families.

As part of the scope of this study, the University of Stuttgart presented a formal model of the OpenID4VP specification in conjunction with the Digital Credentials API, identified and formalized relevant security properties, and completed formal proofs for those security properties successfully. 

These proofs confirm the security of the protocol within the bounds of the mathematical assumptions and formal modelling. Importantly, no new vulnerabilities were identified during the verification process.

Analysis scope and objective

The primary goal was to demonstrate that using OpenID4VP over the DC API delivers a fundamental security guarantee: ‘claims unforgeability’. Put simply, this means proving that attackers cannot trick honest verifiers into accepting false claims that appear to come from legitimate issuers for genuine users with credentials issued to honest wallets.

The analysis takes a focused approach to protocol level security, deliberately excluding attack vectors like Cross-Site Scripting or cryptographic implementation vulnerabilities. These fall outside the scope of protocol specifications and are typically addressed through other security measures.

Rigorous methodology

The WIM analysis follows a systematic three step process that ensures thorough coverage. First, researchers create a detailed mathematical model covering all possible protocol executions not explicitly prohibited by the specifications. This model accounts for arbitrary numbers of participants with varying trust relationships, running multiple protocol instances simultaneously across all possible interaction patterns.

Next, they formulate precise security properties based on goals stated in the specifications. Finally, they provide mathematical proofs showing these security properties hold true across every conceivable protocol execution scenario.

Continuing a commitment to security

This work follows the OpenID Foundation’s first comprehensive security analysis of OpenID for Verifiable Credentials completed in October 2023 with the goal of increasing confidence in these critical specifications. That previous study used the same WIM methodology. 

The Digital Credentials Protocols Working Group (DCP WG) has accepted this security report on OpenID4VP+DC API, continuing the collaborative approach between academic researchers and standards development. As demonstrated with previous analyses, the DCP WG incorporates relevant feedback into current specification versions, ensuring robust security foundations for implementers.

The complete report is available here on the DCP WG homepage for review by implementers and the broader community.

Expert perspectives

The team of academic researchers from the University of Stuttgart said: “We thank the OpenID Foundation for another fruitful collaboration and look forward to further joint efforts in analyzing high-impact standards.”

Kristina Yasuda, Co-Chair of the OpenID Foundation’s DCP WG, said: “Proactive security analysis is critical in identifying potential gaps before they impact implementers and End Users. Collaborating closely with academic researchers allows us to validate our specifications against rigorous formal models, strengthen the protocol’s security guarantees, and ensure that OpenID4VP and the DC API deliver the trust and reliability that the ecosystem depends on.”

Daniel Fett, founder of the OAuth Security Workshop, said: “It’s great to see the OpenID Foundation adopting formal analysis of web protocols as a standard tool. Beyond the usual expert review, formal analysis has repeatedly proven to be an effective means of uncovering hidden vulnerabilities and challenging underlying assumptions.”

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net

 

The post OIDF receives security analysis of OpenID for Verifiable Presentations first appeared on OpenID Foundation.


Errata Corrections to JWT Secured Authorization Response Mode for OAuth 2.0 (JARM) Approved

Errata to the following specification have been approved by a vote of the OpenID Foundation members:

JWT Secured Authorization Response Mode for OAuth 2.0 (JARM) – This specification was created to bring some of the security features defined as part of OpenID Connect to OAuth 2.0.

An Errata version of a specification incorporates corrections identified after the Final Specification was published. This specification is a product of the OpenID FAPI Working Group.

The voting results were:

Approve – 79 votes
Object – 0 votes
Abstain – 17 votes

Total votes: 96 (out of 420 members = 22.9% > 20% quorum requirement)

The specification incorporating the errata is available at the standard locations as well as:

https://openid.net/specs/oauth-v2-jarm-errata1.html 

See the Introduction section of the specification for the link to the previously approved version.

Marie Jordan – OpenID Foundation Secretary


About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net



The post Errata Corrections to JWT Secured Authorization Response Mode for OAuth 2.0 (JARM) Approved first appeared on OpenID Foundation.


Hyperledger Foundation

Bridging Privacy and Public Blockchains: Reflections from the Privacy for Financial Services Workshop

At the inaugural Privacy for Financial Services Workshop hosted by LF Decentralized Trust in New York, a clear signal emerged: privacy is no longer a theoretical aspiration in digital finance. It’s a necessity, and it’s becoming deeply practical.


Digital ID for Canadians

Spotlight on Giesecke+Devrient (G+D)

1. What is the mission and vision of Giesecke+Devrient (G+D)?

We shape trust in a digital world.

We create innovative security solutions for the reliable protection of highly critical sectors. We engineer customized security technologies with passion and precision.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Trustworthy digital identity is foundational to the secure functioning of both existing and emerging markets. As digital interactions become integral to daily life, ensuring the authenticity and security of these interactions is critical. Reliable digital identities enable secure access to services, protect against fraud, and foster user confidence. In emerging markets, they are instrumental in promoting financial inclusion and enabling access to essential services, thereby driving economic growth and social development.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital identity is poised to revolutionize the Canadian and global economy by streamlining access to services, enhancing security, and fostering innovation. It enables seamless interactions across sectors such as finance, healthcare, and government services, reducing friction and building trust in digital transactions.

Giesecke+Devrient (G+D) addresses the challenges of this transformation by offering secure, user-centric digital identity solutions. Our technologies ensure data privacy, comply with international standards, and are adaptable to various regulatory environments. By focusing on interoperability and scalability, we support the development of robust digital identity infrastructures that can evolve with the changing needs of societies and economies.

4. What role does Canada have to play as a leader in this space?

Canada is uniquely positioned to lead in digital identity and trust services due to its strong regulatory frameworks, commitment to privacy, and collaborative approach among public and private sectors. By investing in secure digital infrastructures and fostering innovation, Canada can be a leader in setting global standards for digital identity solutions that are inclusive, secure, and user-friendly. G+D supports these initiatives by providing technologies that align with Canada’s vision for a trustworthy digital ecosystem.

5. Why did your organization join the DIACC?

G+D joined the Digital ID & Authentication Council of Canada (DIACC) to collaborate with industry leaders in shaping the future of digital identity. We recognize the importance of a unified approach to developing secure, interoperable, and user-centric identity solutions. Through our membership, we aim to contribute our global expertise in security technologies to support DIACC’s mission of advancing digital identity innovation in Canada.

6. What else should we know about your organization?

Our organization, G+D, has a rich global history of being a pioneer in the industry, achieving numerous milestones that have set standards worldwide. In addition to our Canadian firsts, G+D has been a leader in innovation on a global scale, consistently demonstrating our commitment to advancing security technology and improving user convenience across various markets.


Globally, G+D was one of the first companies to introduce banknote processing equipment back in the early 20th century, revolutionizing how financial institutions handled currency. This innovation laid the groundwork for our later advancements in secure currency technologies. We were also pioneers in developing and implementing smart card technologies, which have become fundamental in today’s digital security applications, including telecommunications, banking, and government identity systems.

In the realm of telecommunications, G+D was instrumental in the development and widespread adoption of SIM cards, which transformed the mobile phone industry by enabling secure and personalized services. This innovation not only advanced mobile technology but also significantly enhanced security and functionality for mobile device users worldwide.
Moreover, G+D has been a leader in the development of secure identity solutions, providing governments and organizations with advanced technologies for passports, national IDs, and other secure documents. Our innovations in biometric and encryption technologies have helped shape global standards for identity verification and data protection.

Our commitment to sustainability is also evident on a global scale. We have been involved in developing eco-friendly technologies and materials for our products, significantly reducing the environmental impact of our manufacturing processes and end products.

By highlighting these global achievements alongside our Canadian firsts, it is clear that G+D’s legacy of innovation spans not only decades but also diverse industries and markets. Our forward-thinking approach and investments in new technologies continue to drive our mission of making people’s lives more secure and convenient, while also respecting our planet. As we move forward, G+D remains dedicated to being a leader in digital identity and authentication, setting new benchmarks for excellence and sustainability worldwide.


Spotlight on NOETRONIQ Strategic Initiatives

1. What is the mission and vision of NOETRONIQ Strategic Initiatives?

Mission:
To empower organizations with trusted, interoperable identity solutions by delivering expert architectural guidance rooted in privacy, security, and open standards.

Vision:
To shape a resilient digital future where individuals and institutions interact with confidence, enabled by transparent, decentralized identity ecosystems and intelligent trust frameworks.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Digital trust and identity verification are foundational to securing interactions in today’s global digital economy. In both mature and emerging markets, they underpin everything from financial access to cross-border compliance and fraud prevention. But as the threat landscape evolves, driven by increasingly sophisticated actors and accelerated by AI, traditional security models are no longer sufficient. Verifiable identity becomes the anchor for ensuring accountability, protecting privacy, and enabling secure, scalable systems. In a world of rapid digital and AI-driven transformation, strong identity infrastructure is not just about access; it’s about resilience, governance, and trust at every layer of interaction.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital trust and identity verification will redefine how value, services, and decisions flow across both the Canadian and global economies. As digital interactions become more decentralized, cross-border, and mediated by AI, trusted identity is the linchpin for enabling secure access, protecting user autonomy, and ensuring compliance at scale. In Canada, this transformation supports economic inclusion, public service modernization, and global interoperability through frameworks like the PCTF. Globally, it empowers new markets, mitigates fraud, and creates the foundation for verifiable, privacy-respecting ecosystems.

NOETRONIQ Strategic Initiatives addresses these challenges by offering expert architectural guidance to help organizations align with evolving standards, integrate verifiable credentials, and future-proof their identity infrastructure. Our focus is on building resilient, interoperable systems that embed trust by design. Bridging policy, technology, and security in a rapidly shifting digital and AI-enhanced landscape.

4. What role does Canada have to play as a leader in this space?

Canada is well-positioned to lead in digital trust and identity verification by advancing inclusive, privacy-respecting, and interoperable frameworks. With the Pan-Canadian Trust Framework and a strong tradition of public-private collaboration, Canada offers a model for securing digital ecosystems that scale across borders. As global trade and digital services expand, Canada’s leadership in trustworthy identity infrastructure strengthens its role as a reliable partner in international commerce. By prioritizing security, accountability, and user control, Canada can shape the standards and governance models that underpin a resilient, globally connected digital economy.

5. Why did your organization join the DIACC?

NOETRONIQ Strategic Initiatives joined the DIACC to contribute to and align with the collaborative development of Canada’s digital identity and trust ecosystem. As a firm specializing in identity architecture, verifiable credentials, and privacy-first design, we see DIACC as a critical forum for shaping interoperable, standards-based solutions that serve both national interests and global alignment. Participation in DIACC enables us to engage with leading experts, support the evolution of the Pan-Canadian Trust Framework, and ensure our clients’ solutions are future-proof and policy-aware. We believe that trusted identity is foundational to a resilient digital economy—and that collaboration is key to getting it right.

6. What else should we know about your organization?

NOETRONIQ Strategic Initiatives brings deep expertise in digital identity architecture, cybersecurity, and verifiable credentials, with a focus on building systems that are both technically robust and aligned with emerging governance models. We operate with the precision and foresight of a strategic partner, helping clients navigate complexity at the intersection of policy, privacy, and technology. We’re especially attuned to the shifting landscape brought on by AI, decentralized infrastructure, and evolving trust frameworks. Our mission is to help shape a digital future where identity is secure, user-controlled, and interoperable by design.

Monday, 18. August 2025

OpenID

Notice of Vote to Approve OpenID for Verifiable Credential Issuance 1.0 Final Specification

This is a notice of the upcoming vote to approve OpenID for Verifiable Credential Issuance 1.0 as a Final Specification.

The two-week voting period will be between Monday, September 1, 2025 and Monday, September 15, 2025, once the 60 day review of the specification has been completed.

The Digital Credentials Protocols (DCP) working group page:  https://openid.net/wg/digital-credentials-protocols/. If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/benefits-members/.

The vote will be conducted at https://openid.net/foundation/members/polls/376.

Marie Jordan – OpenID Foundation Secretary

About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Notice of Vote to Approve OpenID for Verifiable Credential Issuance 1.0 Final Specification first appeared on OpenID Foundation.

Friday, 15. August 2025

FIDO Alliance

Dark Reading: NIST Digital Identity Guidelines Evolve with Threat Landscape

In a bid to improve overall security of the identity ecosystem, the National Institute of Standards and Technology updated its Digital Identity Guidelines earlier this month. Although this is the first revision since 2017, many organizations should be able to implement the updated guidelines without much difficulty as part of their identity strategy.

Attackers are always sharpening their skills to bypass organizations’ identity and access management (IAM) protocols – the key to gaining critical access – and artificial intelligence (AI) is making phishing attacks even more effective, while deepfakes are tricking even the most security-savvy minds. New authentication measures, such as passwordless technologies, exist, but implementation challenges have hindered adoption.


Research Snipers: Microsoft Authenticator Deletes All Stored Passwords, Pushes Users Toward Passkeys

As announced, Microsoft is today deleting all stored passwords from its Authenticator app. Users must back up their access data before it is lost permanently. Other functions of the app are preserved.

Users must secure passwords

For users of Microsoft Authenticator, a critical deadline has arrived: as announced at the beginning of May, all stored passwords will be irrevocably deleted from the app on August 1, 2025. Anyone who does not back up or transfer the data permanently loses access to the stored passwords.

The changeover has been taking place gradually over several months. Since June 2025, it has no longer been possible to save new passwords in the app or to import them from external sources. This affects both manual entries and synchronization with other services. In July 2025, the autofill function was deactivated, so the app no longer automatically fills in login fields on websites or in apps.


Security.World: HID Unveils Next-Generation FIDO Hardware And Centralized Management At Scale

HID, a worldwide leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials—now powered by the new Enterprise Passkey Management (EPM) solution— designed to help organizations deploy and manage passkeys at the enterprise scale. 

New research from FIDO Alliance shows that while 87% of enterprises are adopting passkeys, nearly half of those that are yet to deploy cite complexity and cost concerns as primary barriers. HID’s solution streamlines the shift to passwordless authentication.


Oasis Open

OASIS Staff Honors Gershon Janssen with the Distinguished Contributor Award

Recognizing 18 years of exceptional leadership, global advocacy, and dedication to OASIS open standards

It is our privilege to announce that Gershon Janssen has been awarded the OASIS Distinguished Contributor Award — an honor he has richly deserved for many years. 

With his recent announcement of transitioning off the OASIS Board of Directors, it’s a fitting time to reflect on Gershon’s extraordinary contributions. Since joining OASIS in 2007, he has given more than his time — he has offered steady leadership, sharp insight, and unwavering commitment to our mission. Over the past 18 years, he has served in numerous roles, including:

OASIS Member since 2007, elevated to Sponsor Member status in 2022
Board Member since 2012, serving as Secretary (2012–2016) and as President and Chairman (2016–2025)
Secretary and Contributor for the PMRM and WS Calendar Technical Committees
Contributor to the IDtrust and WS-I Member Sections, as well as the PKCS #11, KMIP, MQTT, VIRTIO, BPEL4People, OData, and many other Technical Committees
Member of the OECD Internet Technology Advisory Council and Chair of its Security and Privacy Working Group
Board of Managers member for the OASIS Open Development Foundation
Treasurer for the OASIS Open Europe Foundation
Active member of the Process, Finance, Staffing, Governance, and Technology Committees

Much of Gershon’s work has been in roles that rarely receive the recognition they deserve — yet are vital. His steady hand has been a stabilizing force during turbulent times, including leadership transitions in which he shouldered the responsibilities of both Chair and Interim Executive Director on multiple occasions, some lasting from several months to nearly a year.

Gershon’s dedication has gone far beyond formal responsibilities. On his own time — and at his own expense — he has served as an ambassador for OASIS around the globe, personally sponsoring activities when resources were stretched, representing our work as a TC member at key events, and acting as a liaison to numerous industry and government bodies. 

Through it all, Gershon led with quiet strength, patience, and a deep respect for the collaborative process, OASIS staff, and the members. His influence has helped guide OASIS through challenges and successes alike, ensuring our community remained strong and forward-looking. 

Words cannot truly capture our appreciation for Gershon’s years of service, nor can they measure the impact he has made. We thank him not only for his leadership, but for his enduring dedication to OASIS and the global open standards community. 

Please join us in congratulating Gershon on this well-deserved recognition. 

— OASIS Staff

The post OASIS Staff Honors Gershon Janssen with the Distinguished Contributor Award appeared first on OASIS Open.


Velocity Network

Velocity Network Foundation Joins Global Leaders at Inaugural Geneva Conference on Digital Trust 

The post Velocity Network Foundation Joins Global Leaders at Inaugural Geneva Conference on Digital Trust  appeared first on Velocity.

Velocity Charitable Foundation Elevates Statewide Approach to Verifiable Credentials and Open Ecosystems

The post Velocity Charitable Foundation Elevates Statewide Approach to Verifiable Credentials and Open Ecosystems appeared first on Velocity.

Velocity Network Technology Becomes “Verii” Under the Linux Foundation’s LF Decentralized Trust Initiative 

The post Velocity Network Technology Becomes “Verii” Under the Linux Foundation’s LF Decentralized Trust Initiative  appeared first on Velocity.

What is a Trust Framework? 

The post What is a Trust Framework?  appeared first on Velocity.


Thursday, 14. August 2025

Energy Web

From Off-Chain Execution to On-Chain Trust: Inside Energy Web’s Consensus Overhaul

Energy Web has rolled out a major upgrade to the consensus mechanism governing Worker Nodes on the Energy Web X (EWX) network. This enhancement aligns Energy Web X and the Worker Node Networks with a core vision: using secure on-chain consensus and rewards to validate off-chain computations while incentivizing the highest level of node performance.

Why This Matters

A growing number of applications in the energy sector and beyond are leveraging decentralized Worker Node networks on Energy Web X. For example, Green Proofs for Bitcoin (GP4BTC) uses EWX to verify green Bitcoin mining, and the recently launched Carbon-Aware Nomination system orchestrates compute workloads to maximize the use of clean energy. As these and other DePIN (Decentralized Physical Infrastructure Networks) use-cases expand, it becomes ever more critical to enforce accurate and verifiable computation in a secure, scalable manner.

This consensus upgrade directly addresses that need. It introduces several improvements to how EWX validators reach agreement on Worker Node outputs and distribute rewards: changes that align incentives with performance and ensure the integrity of off-chain execution. This upgrade empowers enterprises to pair their off-chain computational systems with highly configurable on-chain reward mechanisms, creating strong business and financial incentives for both enterprises and community members operating Worker Nodes to actively contribute to these decentralised systems.

Setting The Bar on Performance

Only the most consistent, high-performing worker nodes will now be rewarded for their contributions. The upgrade introduces an SLA performance threshold, a minimum standard for correct vote submissions that a node must meet to qualify for any rewards. In other words, a worker’s voting accuracy over each reward period has to exceed a predefined percentage (set on a per-Solution Group basis) for that node to earn a share of the rewards. A “correct” vote means the worker’s submitted result from their off-chain execution aligns with the majority consensus for a given round (as determined by EWX validators). If a node’s correct vote rate falls below the threshold, it won’t receive rewards for that period, no matter how many votes it cast.

This change pushes every Worker Node operator to perform above a clearly defined bar. Energy Web X validators now track each worker’s voting performance across rounds (via on-chain metadata) and calculate the percentage of that worker’s votes that matched the accepted consensus. Only those exceeding the SLA threshold are deemed eligible. Among those that qualify, rewards are weighted by accuracy and stake, meaning those who contribute more correct results and have more stake on the table earn proportionally more. See the reward formula below:

worker_reward = (worker_correct_votes × user_stake) / total_weighted_correct_votes × voting_reward_per_block × active_blocks

Where:

worker_reward: the amount of EWT distributed to the worker node operator as their active reward for participation in eligible voting rounds within the concluded reward period.
worker_correct_votes: the number of correct (consensus-aligned) votes submitted by the worker node in the eligible voting rounds within the concluded reward period.
user_stake: the amount of tokens locked by the operator upon registration.
total_weighted_correct_votes: the sum of correct votes weighted by stake across all worker nodes ( Σ(correct_votes_i * stake_i) ).
voting_reward_per_block: the amount of tokens allocated to voting rewards per block (configured by the solution registrar).
active_blocks: the number of blocks spanning the reward period.

Worked example:

A Solution Group contains 150 operators. At the end of a reward period, 100 operators submitted sufficient votes to exceed the SLA threshold and are eligible for rewards.

Among the 100 eligible operators, the average number of correct votes during the reward period is 100 and the average user stake is 1000.

Therefore: total_weighted_correct_votes = 100 * 100 * 1000 = 10,000,000

Worker Node A had 110 correct votes with a stake of 1000. There are 7200 active blocks in the reward period, and the voting reward per block is set to 1 token.

Therefore: worker_node_A_reward = (110 * 1000) / 10,000,000 * 1 * 7200 = 79.2 ≈ 79 tokens
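
The arithmetic above can be sanity-checked with a few lines of Python. This is only a sketch that reproduces the documented formula, not the on-chain implementation.

def worker_reward(correct_votes, stake, total_weighted_correct_votes,
                  voting_reward_per_block, active_blocks):
    # Active reward for one worker node over a reward period, per the formula above.
    weight = correct_votes * stake
    return weight / total_weighted_correct_votes * voting_reward_per_block * active_blocks

# Worked example: 100 eligible operators, average 100 correct votes, average stake 1000.
total_weighted = 100 * 100 * 1000                          # 10,000,000
print(worker_reward(110, 1000, total_weighted, 1, 7200))   # 79.2, i.e. roughly 79 tokens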

Consensus Validated Via Quorum & Majority Threshold

The consensus mechanism employs a two-tier validation process to guarantee both sufficient participation and accuracy of submissions before finalising the result on-chain. Energy Web X requires two conditions for a voting round to produce a valid result:

Quorum: A minimum percentage of eligible worker nodes must participate by submitting their votes.
Majority Threshold: Within the specified quorum of participants, a majority of workers, over the defined threshold, must agree (i.e. submit matching results) for the result to be accepted as the round’s consensus.

If either condition isn’t met, the round is marked Unresolved. These thresholds optimise for security and trust without sacrificing scalability. Quorum ensures that a sufficiently broad sample of the network contributes to each consensus decision, while the majority threshold ensures accuracy and trust in the result. Only when both conditions are satisfied will the Energy Web X validator set record the final result on-chain.
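
A minimal sketch of the two-tier check might look as follows in Python; the thresholds and vote payloads are invented for the example, and the real validator logic on Energy Web X is of course more involved.

from collections import Counter

def finalize_round(votes, eligible_count, quorum_pct, majority_pct):
    # votes maps worker id -> submitted result hash.
    # Tier 1 (quorum): enough of the eligible workers participated.
    if eligible_count == 0 or len(votes) / eligible_count < quorum_pct:
        return None                                   # round marked Unresolved
    # Tier 2 (majority): enough of the participants agree on one result.
    result, count = Counter(votes.values()).most_common(1)[0]
    if count / len(votes) < majority_pct:
        return None                                   # round marked Unresolved
    return result                                     # consensus result recorded on-chain

votes = {"node1": "0xabc", "node2": "0xabc", "node3": "0xdef", "node4": "0xabc"}
print(finalize_round(votes, eligible_count=5, quorum_pct=0.6, majority_pct=0.66))  # 0xabc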

Improving Withdrawal Delay

The upgrade also refines how and when worker node operators can withdraw their stake from the network. The withdrawal delay is the period a user must wait between submitting an unsubscription request and receiving their tokens. The delay ensures that every action (vote) has a consequence: correct votes will always be rewarded, while malicious actors cannot simply submit false votes and then withdraw to evade penalties.

With the upgrade, withdrawal delays are measured in reward periods instead of blocks. In practice, after an operator initiates a withdrawal, they must wait a defined (by the registrar) number of additional reward periods before withdrawing their collateral. During this delay period, the node can still participate in voting rounds and continue to earn rewards, except for the final reward period, in which the stake is released and voting eligibility ends. This ensures all pending rounds are properly settled and any rewards or penalties processed before a node can exit the network.

For example, suppose a solution group has a withdrawal delay of 2 reward periods. A subscribed worker node operator votes in Reward Period 1, then submits an unsubscribe request in Reward Period 2. The operator would need to wait for Reward Periods 3 and 4 to conclude before receiving their funds back during a block (specific timing depends on system load) in Reward Period 5. The operator can participate (vote) in Reward Periods 3 and 4 but not in 5.
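
The timing in this example can be expressed as a small helper; this is an illustrative sketch of the schedule described above, not registrar or pallet code.

def withdrawal_schedule(request_period, delay):
    # After an unsubscribe request in `request_period`, the node keeps voting
    # through `delay` further reward periods, then exits in the one after that.
    last_voting_period = request_period + delay    # still voting during the delay
    release_period = last_voting_period + 1        # funds return; voting eligibility ends
    return {"last_voting_period": last_voting_period, "release_period": release_period}

# Matches the example above: unsubscribe in period 2 with a delay of 2 periods.
print(withdrawal_schedule(request_period=2, delay=2))
# {'last_voting_period': 4, 'release_period': 5}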

The Outcome: Accurate, Scalable and Secure Off-Chain Compute

Together, these enhancements bring Energy Web X’s consensus mechanism in line with the vision of incentivizing top-performing worker nodes to deliver accurate, verifiable outputs from off-chain computation.

What does this mean for Energy Web ecosystem participants? Solution owners and their users can have complete trust in the outputs of decentralised compute. Energy Web X’s blockchain will securely handle the heavy lifting of coordinating nodes, validating results, and distributing rewards, all in the background. This allows developers and enterprises to focus on what they do best: building high-value applications, confident that a robust, trusted decentralized compute layer is reliably powering their workloads behind the scenes.

About Energy Web

Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

How to Get Involved

Review the docs
Join the conversation

From Off-Chain Execution to On-Chain Trust: Inside Energy Web’s Consensus Overhaul was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 13. August 2025

Velocity Network

Velocity Network Foundation – MidYear Updates

The post Velocity Network Foundation – MidYear Updates appeared first on Velocity.

Tuesday, 12. August 2025

FIDO Alliance

GB News: Microsoft will start DELETING your passwords from today, and there’s only one way to save them

Microsoft has started to delete all passwords saved in its Authenticator app — and if you want to make sure you’re not locked-out of your favourite websites and apps, there’s one way to save your previous logins.

It might seem a little extreme, but the decision to start wiping passwords from its users has been months in the making. Microsoft has been slowly winding down Microsoft Authenticator, a free mobile app developed by Microsoft for Android and iOS that lets you securely sign in to online accounts using two-factor authentication (2FA) or passwordless logins.


ZDNet: Syncable vs. non-syncable passkeys: Are roaming authenticators the best of both worlds?

Like it or not, a replacement for passwords — known as passkeys — is coming your way, if it hasn’t already. The three big ideas behind passkeys are that they cannot be guessed in the way passwords often can (and are), the same passkey cannot be re-used across different websites and apps (the way passwords can), and you cannot be tricked into divulging your passkeys to malicious actors, often through techniques such as phishing, smishing, quishing, and malvertising.


Intelligent CISO: HID unveils next-generation FIDO hardware and centralised management at scale

HID, a leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials – now powered by the new Enterprise Passkey Management (EPM) solution – designed to help organizations deploy and manage passkeys at the enterprise scale. 

New research from FIDO Alliance shows that while 87% of enterprises are adopting passkeys, nearly half of those that are yet to deploy cite complexity and cost concerns as primary barriers. HID’s solution streamlines the shift to passwordless authentication.

The next phase of HID’s passwordless authentication roadmap gives enterprises choice, flexibility and speed to deploy FIDO without compromising user experience or security posture. The expanded portfolio delivers phishing-resistant authentication with enterprise-grade lifecycle management, making scalable passwordless security accessible to organisations of all sizes. The solution works seamlessly across diverse work environments while reducing IT support requirements through centralised visibility and control.


ZDNet: What if your passkey device is stolen? How to manage risk in our passwordless future

Part of the “passkeys are more secure than passwords” story is derived from the fact that passkeys are non-human-readable secrets — stored somewhere on your device — that even you have very limited access to. 

OK, so what happens to those passkeys if your device is stolen?


Friday, 08. August 2025

DIF Blog

Scaling AI DIFferently: The Missing Growth Lever

In the first of a series of guest posts by DIF Ambassador Misha Deville, Misha sets the stage for an underappreciated business problem one layer deeper than traditional User Experience analyses.

The Trust Ceiling

The paradox is that we want AI systems that understand us, but only on our own terms. We’re uneasy when AI appears to use personal data without permission, yet we’re frustrated when it doesn’t ‘get us’ on the first try (even though getting us right would require exactly that personal data to fill in the context). This might seem like a philosophical problem, but I am pointing to an underappreciated technical one. Without embedding trust negotiation into the system design itself, through features such as transparent data flows and consent mechanisms, these systems cannot negotiate subtext and context; without this capability, we will fail to see AI adoption (and utility) scale as promised.

Friction in trust, and therefore adoption, stems from the lack of transparency and control, not from ‘poor model intelligence’. 66% of U.S. shoppers, for example, say they would not allow AI to make purchases for them, even if it meant securing better deals, because “consumers suspect AI is working for the retailer, not them. Until that trust gap closes, AI will remain a product discovery tool”[1].

Public patience with AI product rollouts is already wearing thin. Global brands like Duolingo are facing significant backlash after announcing ‘AI-first’ strategies[2], and the perception gap between those building AI systems and the addressable markets expected to adopt them is ever widening. 51% of adults interviewed in a recent Pew Research study said they were more concerned than excited about AI, which contrasts sharply with the mere 15% of ‘AI experts’ that held this view[3].

The generative-AI race to market that made AI more powerful and more personal also made systems more opaque, leaving users in the dark about how and why decisions are made. In the absence of transparency, even well-intentioned systems lose public trust. The WEF frames this as a missed opportunity to build new markets on a healthy footing: “Without transparency, AI systems might be used that are not value-aligned at a level that is acceptable to users, or users might distrust AI systems that actually are sufficiently value-aligned because they have no way of knowing that.”[4]

To embed trust into the system itself, people need to be able to verify:

who the AI is working for (context),
what it’s allowed to do (consent), and
whether the output can be trusted (credibility).

The solution therefore isn’t more intelligent AI models; it’s a complementary, verifiable identity layer. An identity layer doesn’t just enable trust at the level of individual users trusting individual interfaces; it also supports a healthier marketplace overall by making AI systems traceable, comparable to one another, and accountable to the users and services they interact with. It helps users in aggregate trust AI more.

Verifiable credentials, built on a backbone of decentralised digital identifiers, enable cryptographic proofs of user and object attributes. Context, consent, and credibility become programmable, and the user experience transforms from coercive to empowering.

The Market Opportunity

Digital identity and AI are fundamentally interdependent, but the current investment landscape and dominant business strategies do not reflect this. Today, AI is seen as a growth engine and identity infrastructure is seen as compliance overhead. This mental model is not just outdated, it’s economically limiting.

In 2024, global VC investment into AI-related companies exceeded $100 billion, marking an 80% increase from 2023[5]. Meanwhile, investment in digital identity declined. In the UK, one of the world’s leading digital identity markets, the sector saw only $58 million in VC funding in 2024, a 69% decline from the year before[6]. This stark investment gap reveals a misunderstanding of the technology stack required for trustworthy, scalable AI.

The convergence of these technologies is both ethically necessary and commercially advantageous. An identity layer that’s fit for this new era will enable AI breakthroughs to scale with direction, grounding, and accountability. If AI is the engine, then digital identity is the navigation system. It doesn’t slow the rocket down. It ensures it lands where we need it to. 

The companies that align AI with verifiable digital identity will capture disproportionate market share where others hit trust ceilings. Strategies that capitalise on both technologies will unlock the promised value in use cases such as:  

- In fintech, verified delegation allows AI agents to execute trades securely, with cryptographic proof of authority and clear audit trails.
- In healthcare, patient-controlled access to verified medical records enables truly personalised care without compromising consent or privacy.
- In global supply chains, AI systems can confirm the authenticity of every product and actor, preventing counterfeits, improving traceability, and automating trust at scale.

Digital identity is not a constraint on AI. It’s the infrastructure that allows AI to scale responsibly and profitably. Standards like the W3C’s Verifiable Credentials Data Model provide a vital foundation for AI systems to verify context, consent, and credibility without compromising privacy. The companies that embrace this interdependence will define the next wave of digital infrastructure. Those that don’t will risk building impressive technology that nobody trusts enough to use.

In the next article, we’ll explore how decentralised identity unlocks the real-world value of AI, starting with three core functions behind its promises: personalisation, delegation, and decision-making.

If you’d like to stay updated on the launch of DIF’s AI-focused working group, reach out to contact@identity.foundation.

1 Charleston, SC (2025). “Two-Thirds of Shoppers Say ‘No’ to AI Shopping Assistants – Trust Issues Could Slow Retail’s AI Revolution”. Omnisend.
2 Braun, S (2025). “Duolingo’s CEO outlined his plan to become an ‘AI-first’ company. He didn’t expect the human backlash that followed.” Fortune.
3 McClain et al. (2025). “How the U.S. Public and AI Experts View Artificial Intelligence”. Pew Research Center.
4 Dignum et al. (2024). “AI Value Alignment: Guiding Artificial Intelligence Towards Shared Human Goals”. World Economic Forum.
5 Fairview Capital. (2024). “Preparing for the Agentic Era in Venture Capital”.
6 Wyman, O. (2025). “Digital Identity Sectoral Analysis 2025”. Gov.UK


OpenID

Adoption now and ahead: mDL Day ‘Voices of the Future’ panel


On July 14, 2025, OpenID Foundation’s Executive Director Gail Hodges was delighted to moderate the ‘Voices from the Future’ panel at the Federal Mobile Driver’s License (mDL) Industry Day | GSA, which included leaders from five US States and the American Association of Motor Vehicle Administrators (AAMVA) on the use cases driving mDL adoption and ‘happiness’ in their states.

State officials delivering on the promise of digital identity in the USA

The panelists included state officials on the leading edge of mDL issuance and adoption:  

- Christine Nizer – Administrator, Maryland Department of Transportation Motor Vehicle Administration and Governor’s Highway Safety Representative
- Ajay Gupta – Chief Digital Transformation Officer, California Department of Motor Vehicles
- Brett Young – Assistant Deputy Commissioner of Innovation and Technology, Georgia Department of Driver Services
- Lori Daigle – Program Manager, Outreach and Education Identity Management, American Association of Motor Vehicle Administrators
- Ashley Hall – Senior Project Manager, Virginia Department of Motor Vehicles
- David Knigge – Modernization Director, Arizona Department of Transportation, Motor Vehicle Division

In the US, states play a pivotal role in the issuance of digital identity credentials. These states are on the leading edge of creating great user experiences for their state residents, and they appreciate the power of robust digital identity infrastructure.

Compelling use cases and future roadmaps 

The state and AAMVA panelists brought their experience from the ‘front row seat’ as issuers of mDL/mID credentials. They shared a wide range of use cases that are already resonating within their communities, well beyond presenting an mDL at a TSA checkpoint to travel:

- Login and step-up authentication for access to state websites and benefits
- Law enforcement acceptance to save officers and residents time (a 15-20 minute roadside stop reduced to 5 minutes; the prospect of saving lives by getting people and officers back on the road swiftly)
- Event venue acceptance (e.g. Merriweather Post Pavilion in Maryland, the first few LA arenas accepting mDLs, the prospect of the 2028 Olympics in Los Angeles)
- Financial services use cases, such as opening a bank account with an mDL, as demonstrated by the NIST NCCoE Mobile Driving Licenses project and their first delivery phase.

The panelists were collectively optimistic that mDL acceptance will accelerate across a wide range of use cases as merchants, government departments, and the public become progressively comfortable with the technology and experience it in their everyday lives.

As one speaker said: “We see ‘happiness’ – happy residents delighted with the prospect of using their mDL.” Some of that ‘happiness’ can be measured in app-store ratings: the CA DMV Wallet app, for example, has 4.8/5 stars (85k users) in the Apple App Store.

Tangible adoption and issuance underway in US states  

The panelists highlighted that there is real momentum behind mDLs; the technology is ‘already here’ and moving rapidly up the adoption curve:

- 5 million+ mDLs have been issued to date (AAMVA)
- 18 US states are issuing standards-compliant mobile driving licenses as of July 2025, with many more expected to launch later this year
- 40% of US residents live in a state that offers its residents an mDL
- TSA has enabled over 250 priority airports for mDL acceptance, with more airports scheduled to come

The OpenID Foundation’s Executive Director, Gail Hodges, said: “These expert panelists are at the coal face working on the safe issuance of mobile driving licenses in their states, and ensuring that this new technology works for their residents and all Americans. The OpenID Foundation is delighted to work with market leading issuers and thought leaders like these panelists, and to ensure our global open standards meet their exacting requirements for security, interoperability, and both national and global scale.”

Market context: The urgency of the work

Gail’s opening remarks as moderator set the stage for the stakes facing the global community.

- According to Vasu Jakkal, Microsoft Global Head of Security, $9.2 trillion is the value of the global cybercrime industry. “How much is it costing our world?” This is the equivalent of the third largest country by GDP, and it is growing faster than any country.
- The US GAO estimates US Federal Government annual losses to fraud at $233-$521 billion, based on 2018-2022 data. “No area of the federal government is immune to fraud… given the scope of this problem, a government-wide approach is required to address it.”
- US financial institutions reported suspicious activity worth $212 billion, per US FinCEN analysis of identity-related suspicious activity, or financial crime related to weaknesses in our identity infrastructure.

The hope is that digital identity infrastructure, including mobile driving licenses, can play a material role to mitigate these persistent and growing attacks, and better protect US residents and businesses. 


About the panelists

Christine Nizer was appointed Administrator of the Maryland Department of Transportation Motor Vehicle Administration (MVA) and Governor’s Highway Safety Representative in August 2015. Prior to that appointment, Ms. Nizer served as the MVA’s Chief Deputy Administrator and Deputy Administrator for Central Operations and Safety Programs for over eight years. She also held management positions at the Maryland Public Service Commission, the Maryland General Assembly and the Office of Homeland Security. 

Ashley Hall is a Senior Project Manager at the Virginia Department of Motor Vehicles (DMV), where she leads the state’s groundbreaking Mobile ID initiative. With more than two decades of project management experience at the DMV and a PMP certification, Ashley has overseen a wide range of innovative efforts—from the development of Mobile Service Units to the implementation of Photoless Identification Cards.

Ashley holds a Bachelor of Arts from Virginia Tech and a Master of Public Administration from Virginia Commonwealth University. Since 2022, she has also served as a member of the AAMVA Joint Mobile Driver’s License Subcommittee, helping to shape national standards and best practices for digital identity.

Virginia’s Mobile ID is currently in a soft launch phase, with a full public rollout anticipated later this summer. The launch will include five key use cases: TSA checkpoints, Virginia DMV services, Virginia ABC stores, Virginia State Police and local law enforcement, and select casinos—bringing secure, convenient digital identification to Virginians across the state.

Ajay Gupta was appointed Chief Digital Transformation Officer in February 2020. Gupta leads business and technology transformation efforts for the DMV to become a modern enterprise. Gupta has served as a special advisor to the DMV Director since 2019.

Before joining state service, Gupta worked as a managing director at KPMG, where he led the delivery of legacy transformation, technology innovation, and managed services for State departments nationally. Gupta brings more than 27 years of public sector experience to the DMV. He served state departments in California, Texas and Hawaii while working for CGI Inc., Visionary Integration Professional Inc., Deloitte LLP, and Tata Consultancy Services.

Gupta has a B.E. in Electrical Engineering from Delhi College of Engineering and an MBA in Marketing and Information Technology from UC Davis. Gupta is also certified as a PMP, CSPO, Cloud Practitioner, and Enterprise Architect.

Brett Young has been with the Georgia Department of Driver Services since 2001 and has served in several roles. He is currently the Assistant Deputy Commissioner of Innovation & Technology, leading the Information Technology, Innovation & Strategy, and Program Management work units.

He has previously served as the Director for the Program Management Office. His experience includes managing regulated programs, defining the processes and procedures for operating the agency’s programs, establishing driver program vision, and leading efforts to continuously improve the process of the Department’s Driver License Issuance System. He has also led the Department through a Modernization Project and a Card Production Procurement, and placed Georgia’s License in three Digital Wallets.

Brett is currently a member of the AAMVA Card Design Standard Subcommittee and previously served on the AAMVA Autonomous Vehicle Best Practices Working Group.

Mr. Knigge has more than 40 years of experience in the technology field, in roles ranging from executive to consultant to IT project delivery. For over two decades, he has worked primarily in the Motor Vehicle Industry with an exclusive focus on the DMV business.

Currently, Mr. Knigge is the Motor Vehicle Modernization Director leading the IT organization focused on technology support for the Arizona Department of Transportation, Motor Vehicle Division (AZ MVD). AZ MVD has fully deployed an in-house, contemporary, cloud-based modernized solution including comprehensive core internal systems, portals, identity solutions and related technologies.

AZ MVD developed technology includes mDL/digital identity capabilities and many features leveraging artificial intelligence functionality.

Lori Daigle joined the AAMVA (American Association of Motor Vehicle Administrators) Identity Management Team in November of 2023 as a Program Specialist. Her current role is Program Manager, Outreach & Education, focusing on the mDL ecosystem and working to enhance Relying Party outreach and engagement. Lori has presented at the Identiverse Conference (2024), the International Association of Police Chiefs Mid-Year Conference (2024), the UL Payment Summit (2024), the Identity and Access Forum (2024), the Mortgage Brokers Association Conference (2024), the Identity and Payments Summit (2025), and the Fime Innovation Days (2025), as well as several AAMVA Regional Conferences. She is also an active member of numerous Jumpstart mDL working groups with the Secure Technology Alliance.

Prior to her current role, she spent nearly nine years with the Colorado Department of Motor Vehicles (DMV) and served as the Director of the Driver License section. She has also led two nonprofits, the Northern Colorado AIDS Project and the Alliance for Suicide Prevention, and taught high school Marketing, Management, and Psychology at Pinewood Preparatory School in Summerville, South Carolina, where she was named Teacher of the Year.

The post Adoption now and ahead: mDL Day ‘Voices of the Future’ panel first appeared on OpenID Foundation.

Thursday, 07. August 2025

DIF Blog

DIF celebrates v1.0 release of did:webvh, with 3 complete implementations

After years of steady and detailed work, DIF celebrates the v1.0 release of did:webvh, with three complete implementations and even a dockerized installation option.

Including the recently-donated Rust implementation by Affinidi. More below!

Wait, is that a typo? Do you mean did:web? 

I did not, but I also did not not mean did:web. did:webvh is a new DID method, but it is also specifically designed to be a more production-worthy, “grown up” version of did:web, everyone’s favorite “training-wheel” DID method and growth hack for getting VCs issued from highly-trusted issuers. Crucially, did:webvh is a “backwards-compatible upgrade” to did:web, meaning it can be consumed in "legacy mode" by any software that already resolves did:web, but upgrades the featureset and trust-model when consumers upgrade their resolution logic to take advantage of the new syntax.

It was incubated at the Identifiers and Discovery Working Group at DIF, meaning it got extensive review and input from many other DID connoisseurs deep in the trenches of DID method design and “user research” as to what use-cases developers really want DIDs for in the first place. Having previously incubated other DID methods in the Sidetree family, early versions of the KERI system, and the did:peer specifications foundational to DIDComm, DIF is happy to see another DIF-incubated DID method graduate to v1!

If you’re hearing about did:webvh for the very first time here and now, let’s start at the beginning with a few high-level differences between did:web and did:webvh:

- The VH stands for Verifiable History. Each DID version links cryptographically (“chains”) back to its predecessor and ultimately to the verifiable self-certifying identifier (SCID) that is embedded in the DID identifier. The SCID is derived from the DID’s initial state. Further, each update is signed by a key authorized to update the DID.
- Each valid did:webvh (the URL) can be easily and deterministically translated to a did:web. Just delete the “vh” in the method segment and the subsequent segment (the “SCID” — self-certifying identifier), and you have a valid did:web identifier. Resolve that the way you would any other did:web, and you get a valid did:web DID Document. Easy-peasy, 100% interoperability with the most widely-deployed DID method other than did:key.
- Verifiability is not dependent on DNS. While DNS is used for discovery and retrieval of the DID Log file that contains a did:webvh history, the cryptographic verifiability of that history is not dependent on DNS. The DID Log can be retrieved from other locations/caches and fully verifies the history of its corresponding DID to date.
- Verifiability is independent of its DID Documents. A did:webvh DID Doc can contain anything the DID Controller wants it to have — different keys, key types, services and so on. The verifiability of the DID is secured by a more complex resolution mechanism, not the contents of the document. That allows the specification to be very opinionated on how the DID is secured (making interoperability easy and reliable), without limiting the purpose of the DID.
- Secure DID generation can now happen off-server. This allows DID doc hosting to be separate from key management, de-risking malicious hosts and allowing for “dumber” (e.g. key-oblivious) hosting pipelines, e.g. without access to /.well-known.
- “Common sense” DID URL Handling. The simple DID URL syntax <did>/path/to/{file} maps to just what one would expect: the "/path/to/" subdirectory where the <did> document is stored.
- Cross-Host Portability. A DID on one host, if configured from inception to allow it, can migrate to a host on another domain and still keep its long-lived and globally-unique “SCID” and verifiable history. This further decouples web hosting from the long-term value of the SCID as a stable, host-independent UUID, which (webvh-aware) consumers can use to link and deduplicate DIDs that migrate across multiple hosts.
- Optional extensions allow stronger guarantees for historical resolution and duplicity/forgery detection on the part of malicious hosts. Certificate Transparency-style “witnessing networks” (a web-of-trust approach to gossiping DID document histories, also used in the KERI key management/ID system) and/or aggressively cache-busting trusted resolvers can detect host malfeasance of various kinds.
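As a rough sketch of the backwards-compatibility rule described above (the SCID value and path segments here are made up), the did:webvh-to-did:web translation can be done with a few lines of string handling:

```python
# Minimal sketch: drop the "vh" from the method segment and remove the
# SCID segment that follows it. Real resolvers do much more (fetching
# and cryptographically verifying the DID Log), but the identifier
# translation itself is this simple.
def webvh_to_web(did: str) -> str:
    parts = did.split(":")
    if parts[:2] != ["did", "webvh"]:
        raise ValueError("not a did:webvh identifier")
    # parts[2] is the SCID; everything after it is the did:web location.
    return ":".join(["did", "web"] + parts[3:])

print(webvh_to_web("did:webvh:QmExampleScid:example.com:users:alice"))
# -> did:web:example.com:users:alice
```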

The origins of did:webvh

From the start, did:web had some production implementations that were too load-bearing to break… and many unhappy implementers. When a given website/domain only needed one collective DID (common for issuers already highly-trusted and with highly-secured websites, e.g. institutional issuers of credentials), or even in cases where the entirety of a DID’s lifecycle could be assumed to live on the same unchanging domain (e.g., employees or appointees or departments of an institutional issuer), did:web was great, and no one needed to consider upgrades.

Corner-cases and unhappy implementers kept popping up over time, though: 

- What about historical resolution of old signatures after rotations and de-activations?
- What if a merger, acquisition, or rebrand needs to update a domain name, but keep all its old DIDs resolving after the change?
- What if students want to keep their DIDs (or at least keep old signatures validating) after they graduate and are incrementally locked out of their university’s IT systems?
- What if anti-trust regulations forced portability (the way they do for phone numbers) and users wanted to keep connections after switching identity providers (or “hosts” in webvh terminology)?
- What if hosts are perfectly willing to issue did:webs to hundreds of users, but cannot directly automate /.well-known/ documents for each, whether due to their JS framework, choice of contractor/service-provider, security policies, etc.?
- What if consumers can’t fully trust all did:web hosts to faithfully produce the same document for all callers? What duplicity-detection mechanisms or historical guarantees could be offered to those consumers?
- Similarly, some use-cases are too high-stakes to trust all hosts equally; some consumers were demanding baseline guarantees about how securely DID documents get hosted over DNS-addressed HTTP, beyond what is required for did:web.

A backwards-compatible “version 2” of did:web was imagined but effectively deferred for years, before work started in earnest on an heir to did:web that focused primarily on improving its trust model. As the idea kept iterating and different requirements and design goals were debated by willing co-designers, the portability guarantees came to the fore as another key goal.

This portability goal dovetailed nicely with the requirements of the Open Wallet Foundation’s ACA-Py framework, which requires DID “agents” (a kind of general-purpose “backend” for many DID and VC operations) to be smoothly swappable over time by simply updating the services property in a DID document. As ACA-Py had originally been developed for blockchain-based did:indy DIDs, there was some amount of portability, key-management, and trust-model parity that needed to be achieved, which brought a lot of other difficult design problems in tow since websites are not stateful or append-only in the way blockchains are! 

Another parity goal was achieving many of the host-independent “microledger” properties central to the KERI identifier system (and Sidetree before it), whereby the entire history of every DID can be walked to detect duplicity or other malice in any part of the system. Another KERI feature the designers wanted to achieve was KERI’s “witnessing” property, inspired by Certificate Transparency, which keeps hosts honest with tamper-detection and caching. This survives in optional extensions, i.e. as a distinct mechanism that consumers and hosts can opt into for stronger guarantees in resolution.

The did:webvh method’s requirements informed the design and its original name, “Trust did:web”. Namely, the eponymous goal was to achieve an HTTP-published but ultimately host-independent trust layer for DID-based interactions, without a common blockchain and with more conventional cryptography and tooling than KERI requires. Trust nerds can think of this as a move from the “CA-grounded” web trust model (the only one in which did:web is usably trustworthy and secure) to the more modern “Let’s Encrypt” trust model that the web has largely adopted in recent decades.

Playing Nice in a Multi-DID World

As mentioned above, a formative goal in the design of did:webvh was smooth interoperability with did:web and did:key, as well as properties that would normally require an append-only VDR to achieve. In contrast to many DID methods, which strive to stand out from the crowd by offering unique features enabled by their particular VDR or low-level mechanisms, this method sought instead to achieve a superset of features already in production today, offering existing DID-based and DID-like systems a better option to migrate or translate to. Anything permitted in the DID Core specification is available in did:webvh — no restrictions! If the early phase of DID design was one of innovation and experimentation, did:webvh strives to consolidate and unite in this phase marked by convergence and productionization.

A good example of this is how did:webvh appends a digest (which can be used as a checksum) of the DID log entry to the id property of each DID document. In the HTTP context, this makes each DID document “tamper-evident” thus reducing many of the opportunities for a malicious did:web host to produce inaccurate or adulterated DID documents on behalf of a DID controller. This also makes all did:webvh DIDs “self-certifying,” in the sense that their documents’ integrity can always be checked against this checksum. Similarly, the provenance back to inception of updated DID documents can be integrity-protected and proven by the same “SCID” (Self-Certifying Identifier) mechanism.
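As an illustration of the self-certifying pattern (this is not the actual did:webvh hashing algorithm, which specifies its own canonicalization and encoding), a consumer can re-derive the digest of a log entry and reject anything that doesn’t match:

```python
import hashlib
import json

# Sketch only: identify a document by the digest of its canonicalized
# contents, so any tampering by a host changes the digest and is detectable.
def digest_of(document: dict) -> str:
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

log_entry = {"versionId": 1, "state": {"id": "did:example:123", "verificationMethod": []}}
published_digest = digest_of(log_entry)

# A consumer re-derives the digest and rejects the entry on mismatch.
assert digest_of(log_entry) == published_digest
log_entry["state"]["verificationMethod"].append({"id": "attacker-key"})
assert digest_of(log_entry) != published_digest  # tampering is evident
```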

SCIDs are a powerful design pattern, which undergird early IPFS-based DID methods like did:ipid and the Sidetree methods like did:ion and did:element, both of which were incubated in the DID Sidetree WG and used the IPFS identifier scheme to address each DID document by its hash.  Older mechanisms of hash-identification include RFC 6920, which underwrites the recent did:ni method; newer examples include did:iscc and the growing body of SubResource Integrity specs at W3C that allow location-independent, self-certifying identifiers for slow-moving, cacheable web resources. KERI pushed SCIDs to the fore and influenced a lot of design work at ToIP, but zooming out a little they can be seen across modern web development as a powerful counterbalance to the “same-domain principle” of modern web security.

SCIDs are also crucial to lots of ongoing design work at ToIP and elsewhere, such as the First Person Project which mandates the portability and self-certifying properties described above for ANY DID used across a wide patchwork of infrastructures and tooling. Does that mean any SCIDs will do? Can a did:webplus, used in the OCI ecosystem, be translated to a did:webvh, or a did:scid to let a controller of a did:webplus be part of such a bridged network? 

Similarly, the optional witness and watcher capabilities defined in the did:webvh specification were defined to be open-ended building-blocks as well. The specification defines a clean and simple technical mechanism for each capability to ensure interoperability between different applications of these, while leaving use-cases and ecosystem-specific governance questions outside the specification, where they belong. One application of these has been to backfill the did:indy concept of "endorsers" in a web-based VDR, but many other trust mechanisms or policy engines could be built combining DID logs and witnesses and/or watchers. The did:webvh Work Item at DIF welcomes implementors with divergent use-cases and policy assumptions to collaborate on profiling these capabilities and specifying them in more detail in ongoing work.

These broader questions of interoperability beyond the boundaries of any one DID method are increasingly being tackled in other workstreams at DIF. One of these is the DID Traits evaluative specification (which also hit v1.0 recently!), which defines shared or equivalent/translatable properties across methods to facilitate bridging and converging DID systems. Similarly, the DID Methods WG is trying to draw attention to how well DIDs can be combined or translated, as well as how “production-grade” their implementations can be tested (or even audited) to be. WebVH is the first method being evaluated for production-readiness and documentation completeness, literally setting the bar for others!

Where to from here?

Now that v1.0 has been reached, the focus turns to refinements and adoption, working with implementers and deployers to gather feedback and grow their numbers.

- Innovators are encouraged to review the didwebvh.info website and walk through the basic step-by-step "Understanding did:webvh".
- Implementors should check out the Implementations page to find the did:webvh tooling you need for your initiatives, including three mature and complete implementations: Python, TypeScript, and Rust.
- Standards developers can jump directly to the did:webvh spec.

The Python and TypeScript implementations were both incubated by the Government of British Columbia, Canada.

The Rust implementation was recently contributed to DIF by Affinidi. We're excited by this contribution as it brings the performance, memory safety, and systems-level capabilities that make Rust increasingly popular for performance-critical applications.

These three implementations serve as mutual validation tools, continuously testing interoperability and compliance against each other to ensure robust spec maturity and cross-platform reliability.

Whether you prefer leveraging existing tooling or building from scratch, did:webvh makes both paths straightforward. The clean specification and comprehensive interoperability test suite provide clear guidance for new implementations, while the mature tooling ecosystem offers battle-tested components you can integrate immediately.

We welcome contributions from the community - whether that's enhancing existing implementations, creating new language bindings, improving documentation, or developing deployment tools that make it even easier to adopt did:webvh DIDs in production environments!

Want to learn more and get involved? Join DIF or contact us at contact@identity.foundation.

Wednesday, 06. August 2025

Internet Safety Labs (Me2B)

2022 K-12 Edtech Benchmark Revisited: Unvetted Off-the-Shelf Apps Outnumber Licensed Apps 2-to-1


A new school year is upon us and ISL wants to remind educators and school technologists that they need to take as much care scrutinizing safety risks in off-the-shelf (OTS) technologies recommended to students as they do for technologies licensed by the schools. 

We recently went back over the data from our 2022 benchmark and confirmed that most technologies pushed to K-12 students in the US are recommended (not required), unvetted, off-the-shelf technologies that students use completely independently of school privacy controls or oversight.

Schools Recommend Too Many Technologies to Students 

One of the most striking findings from our 2022 US K12 EdTech Benchmark was seeing just how many technologies schools were pushing students to use.  

We ended up counting apps used in each school in three different ways. We initially looked for a single list of all the technologies that a school was either recommending or requiring. Ideally, we were looking for discrete lists of each type—recommended or required. However, most schools did not have clean, singular lists of apps, which meant we needed to hand count the number of all apps mentioned by the school and/or district websites as being used by students (“manual app count”). We hand-counted all 663 schools. Some schools, or more frequently school districts, had full app lists maintained by their IT departments, and many districts maintained lists of technologies in the Student Data Privacy Consortium (SDPC) database. We called these lists “simple aggregated lists”. Finally, many of the schools that had simple aggregated lists also indicated whether an app was “approved” or not. We called these lists of approved apps “approved technology lists”. To summarize the types of app list counts:

(1) Manual app count: Researchers hand counted the number of apps found across school and district websites1. This was performed for all 663 schools.
(2) Simple aggregated list: For 222 schools, we found simple aggregated lists of apps that were larger than the manual count.
(3) Approved technology list: Of the 222 schools that had a simple aggregated list [larger than the manual count], 153 distinguished approved from unapproved apps on those lists.


As can be seen in Table 1, the manual count yielded an average of 19 apps per school, but schools with simple aggregated lists averaged 191 apps, and strangely, the subset of schools with approved apps averaged 214 apps. Yikes.  

Table 1

| List type | Average number of apps |
|---|---|
| Manual app count (n=663) | 19 |
| Simple aggregated list (n=222) | 191 |
| Approved technology list (n=153) | 214 |


Do schools really need to recommend 200 different apps and websites for student use? 

Recommended Versus Required Apps 

As mentioned earlier, in our manual counting of technologies we designated an app as either “required” or “recommended”. Apps were deemed “required” due to prominent presence on school websites, often with a login.2 Similarly, custom apps that were clearly branded for the school or district were also designated as required. Thus, required apps in our research were always licensed by schools. As such, these technologies were held to greater scrutiny and vetting, and student accounts were generally provisioned and managed by the schools3. In this way, the school had “joint data controller” responsibilities alongside the app developer.4

“Recommended” technologies, however, were always off-the-shelf (OTS) technologies, which students would access or download at their own discretion, creating their own accounts independent of the school. 

We knew that the percentage of required apps was small compared to the recommended apps. But what was the breakdown of “required” versus “recommended” apps? We thought the distribution might follow the 80-20 Pareto principle: that 20% of the apps were required, and 80% were recommended. We decided to go back and run the numbers.  

Table 2 below shows the numbers for the different types of app count lists. The manual app count method failed to account for the sometimes massive, aggregated lists. Similarly, the aggregated list numbers distorted the overall data set. The bottom-line row in the table, “Manual + Approved list”, combines the manual counts for schools without simple aggregated lists with the approved technology counts [for schools that had them] to best provide a number for the national results.  

As can be seen, it was closer to 70-30 than 80-20. On average, schools were pushing nearly 58 technologies to students, with 28.9% of them being required, and 71.1% being recommended.5 The vast majority of apps schools are pushing on students are merely recommended, unvetted, off-the-shelf apps. Despite the apps being “approved” by the schools in the approved technology lists, we know that in many cases, the only vetting is whether or not a Privacy Policy exists. This is not a sufficient form of vetting to ensure student data privacy. Schools are subjecting students to unvetted and ungoverned technologies—sometimes more than 200 such technologies. Recall also that nearly 30% of the recommended technologies for students are neither strictly educational apps nor apps designed for children, and in the latter case, they are not covered by COPPA compliance.

Table 2

| Type of App List | # Schools | Avg Total # of Technologies | Avg # of Required / Licensed Technologies | Required / Licensed Techs Avg % of All Tech | Avg # of Recommended / OTS Technologies | Recommended / OTS Techs Avg % of All Tech | Max # of Technologies |
|---|---|---|---|---|---|---|---|
| Manual App Count | 663 | 19.0 | 5.1 | 33.4% | 13.9 | 66.6% | 106 |
| Simple Aggregated List | 222 | 190.7 | 5.5 | 9.7% | 185.2 | 90.3% | 1411 |
| Approved Technology List | 153 | 214.3 | 5.3 | 6.3% | 209.0 | 93.7% | 1411 |
| Manual + Approved List | 663 | 57.7 | 5.1 | 28.9% | 57.8 | 71.1% | 1411 |
Conclusions  

When we consider “edtech” as the combination of licensed and OTS technologies, as in our 2022 benchmark, a primary risk for students is the high—sometimes exceedingly high—number of unvetted off-the-shelf technologies that schools are recommending they use.6 Until technology is reasonably safe for children, ISL recommends schools undertake the following:

1. Minimize the number of technologies recommended to students.
   a. Especially minimize the number of apps that are not specifically for children.
      i. ISL doesn’t propose (or support a paradigm of) age-gated versions of commonly recommended mixed-audience apps like news, museum, zoo, and reference apps. These apps must be made safer for children of all ages (i.e. for all of us).
2. Screen all OTS technologies recommended for student use. Ideally, these should be vetted as carefully as licensed technologies, though we know that’s not practical for all schools.
   a. Use ISL’s App Microscope to learn more about privacy risks in commonly recommended apps. Can’t find your app? Contact us at schools@internetsafetylabs.org.
   b. Recommend only apps that are COPPA certified. This won’t stop all commercial surveillance and data sharing, but it at least minimizes some data sharing.
   c. Put in place Data Privacy Agreements (DPAs) for all technologies recommended to students, i.e. for both licensed and OTS technologies. This requires some dedicated personnel to administer, but Access4Learning’s Student Data Privacy Consortium has agreement templates readily available here: https://privacy.a4l.org/.
   d. Annually audit DPAs against the actual technology behavior. This is a service that ISL has provided for one US state school board and is more than happy to provide at reasonable rates. Contact us at schools@internetsafetylabs.org.

 

Footnotes:

1. Note that the apps found via the manual count were the apps that were audited in the research. Due to the volume of listed apps, ISL did not audit all of the apps found in the simple aggregated lists.
2. More discussion can be found in the first findings report: “2022 K-12 EdTech Safety Benchmark National Findings – Part 1”, Internet Safety Labs, December 13, 2022, Section 7.2.1, p. 89, https://internetsafetylabs.org/wp-content/uploads/2022/12/2022-k12-edtech-safety-benchmark-national-findings-part-1.pdf
3. These required apps also more narrowly align with traditional “EdTech” categories, whereas the recommended technologies included a large percentage of apps not intended for children.
4. We wrote about this in a blog post from 2023 called “Data Controller Confusion in EdTech”: https://internetsafetylabs.org/blog/insights/data-controller-confusion-in-edtech/
5. Note that the average percentages shown in Table 2 reflect an average of each school’s percentage of required/recommended apps.
6. Licensed technologies are also risky, especially the Community Engagement Platforms, which shared data [on purpose] with the most third-party entities and data brokers, like this app, no longer available on the app store: https://appmicroscope.org/app/1579/

The post 2022 K-12 Edtech Benchmark Revisited: Unvetted Off-the-Shelf Apps Outnumber Licensed Apps 2-to-1 appeared first on Internet Safety Labs.


Next Level Supply Chain Podcast with GS1

How Armored Transport is Going Digital


The way cash moves through the supply chain is evolving. 

What was once a paper-heavy process is now embracing digital transformation.

In this episode, Robert Skitt, Senior Manager at Axiom Armored Transport, joins hosts Reid Jackson and Liz Sertl to discuss the shift from traditional, manual methods to more secure, efficient, and accountable digital solutions. 

Financial institutions and the Federal Reserve are demanding greater transparency and control over cash movement, and armored transport teams are under increasing pressure to adapt.

Robert walks us through how Axiom is modernizing armored transport, replacing handwritten logs with barcode scans and eManifests. In this episode, you’ll learn:

How GS1 standards are helping digitize cash logistics

What “cash visibility” looks like in practice

Why early adoption gave Axiom a seat at the table and a competitive edge

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(04:52) What cash visibility looks like

(07:23) How GS1 standards changed the workflow

(11:32) Turning paper-heavy processes into data

(13:32) Barcode tech in armored car logistics

(15:14) How digitization improves accuracy and trust

(19:25) Advice for starting your visibility journey

(20:44) Robert’s favorite technology today

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Robert Skitt on LinkedIn

Check out Axiom Armored Transport


Blockchain Commons

2025 Q2 Blockchain Commons Report


Not only did Blockchain Commons close out its major Zcash project in Q2 and start work on a big, new FROST push, but we also did continued work on many other initiatives:

- Working with Partners: Zcash, Zingo Labs, FROST, Ethereum
- Articles & Presentations: Interoperability, Provenance Marks, Post-Quantum URs, Permits in Gordian Envelope
- Thinking about Identity: HackerNoon, No Phone Home, XID Core Concepts, Fair Witness, Bitcoin Policy Summit
- Web Updates: Updated Projects Page, New Envelope Seeds Page
- CLI Updates: dCBOR-CLI, Envelope-CLI
- Library Updates: Mass Crate Update, Argon2id, envelope-pattern, dcbor-parse

Working with Partners

Blockchain Commons is supported by patronage and by grants. (If you want to become a major patron and partner with us on a project, let us know, and if there are grants that you think would allow us to fulfill our Gordian Principles, that we may not be applying to, again drop us a line.) In Q2, some of our major projects were closely related to grants that we’d applied for last year.

Zcash. The Zcash ZeWIF project took up the majority of our Q1. The goal was to create an interchangeable wallet format that would make Zcash wallets more interoperable and so give users more freedom to move their funds to an app of their choice. We continued that work in April and then closed it out in May.

Our work product for Q2 of the ZeWIF project included the final drafts of our best practices for importing & exporting wallet data and our doc on using Envelope attachments for ZeWIF. We also held our fourth and final (to date) ZeWIF meeting, which included a demo of our zmigrate-cli tool. Our final two reports from May give all the details on the apps, crates, and docs that we published as we closed out the project.

Zingo Labs. We were proud to do our Zcash work with Zingo Labs, who provided us with some of the Zcash-chain knowledge we needed to extend our interoperability expertise into the Zcash community. (We also got lots of support from other experts in the community through meetings, which is the same way we advance standards in all the ecosystems we work with.) We hope to continue that partnership in the future, and to support that, we offered a presentation to Zingo Labs in Q2 highlighting our technologies, how they work, and why they’re useful. We focused on some of the low-hanging fruit, such as SSKR, which allows for the secure backup of secrets, and OIB, which makes it easier for users to see what they’re doing. We’ll let you know if anything comes of this!

FROST. As soon as we closed out work on our Zcash grant, we began work on a new FROST grant that we received from HRF. This grant’s work will come in three parts: creating new FROST signing tools; writing “Learning FROST from the Command Line”; and holding FROST meetings. We’ve been pushing hard on this work in July and August, so we’ll be writing more about it in the Q3 report.

Ethereum. Though most of Blockchain Commons’ work has traditionally been on the Bitcoin blockchain, our principles of independence, resilience, privacy, and openness apply to all blockchains. Our recent work with Zcash proved that, and so in Q2 we also had some talks with a variety of parties in the Ethereum ecosystem about possibly doing work with them on securing secrets at the level zero of their stack. We’re still waiting to see if anything gels, but generally: if you know of a blockchain that might be looking for interoperability or resilience support, let us know!

Articles & Presentations

Blockchain Commons’ major articles and presentations demonstrate our fundamentals and highlight our newest work. Here’s what that included in Q2.

Interoperability. We talk a lot about ecosystem “openness” and user “freedom”, or more generally “interoperability.” This is a pretty important foundation of Blockchain Commons’ work, so in Q2 we explored it more with the article “Interop, What Is It Good For?” and slides and video at our May Gordian Meeting. We encourage you to take a look at the article or the meeting presentation to gain some more insight into one of the core principles of Blockchain Commons’ work.

Provenance Marks. One of our newest innovations is “provenance marks,” which allow for the creation of a cryptographically-secured chain of marks. We gave a presentation at our June Gordian meeting and also have a research paper on the technology. We additionally presented provenance marks to the W3C Credentials Community Group, which is forming a working group on provenance technology of this type: Blockchain Commons’ provenance marks are one of three possibilities under consideration.
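For readers new to the idea, a chain of marks can be pictured as a hash chain. The sketch below shows only the general pattern and is not Blockchain Commons’ actual provenance-mark format:

```python
import hashlib

# Each mark commits to its predecessor, so deleting, reordering, or
# forging marks breaks the chain and is detectable on verification.
def make_mark(prev_hash: str, payload: str) -> dict:
    mark_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": mark_hash}

genesis = make_mark("0" * 64, "work #1")
second = make_mark(genesis["hash"], "work #2")

# Verification walks the chain and recomputes each commitment.
assert second["prev"] == genesis["hash"]
```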

Post-Quantum URs. Post-Quantum Cryptography support was one of our most exciting expansions in Q1. We’ve now published a new research paper on integrating PQC with URs.

Permits in Gordian Envelope. Gordian Envelope is a more mature technology at Blockchain Commons, but we’re still exploring its full capabilities. Part of that capability is the “permit,” which is a way to lock your Gordian Envelope. The great thing about permits is that you can apply multiple permits to an Envelope, so that it can be opened by different people in different ways! We wrote a research paper on “Permits in Gordian Envelope” to offer more insights into the possibilities.
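The underlying idea resembles multi-recipient encryption: the payload is encrypted once under a random content key, and the content key is then wrapped separately per permit, so any one permit can open the envelope. The sketch below uses the Python cryptography package to illustrate the concept; it is not the Gordian Envelope wire format:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt the payload once under a random content key.
content_key = Fernet.generate_key()
ciphertext = Fernet(content_key).encrypt(b"the sealed payload")

# Wrap the content key once per permit (e.g., owner and heir each hold a key).
owner_key, heir_key = Fernet.generate_key(), Fernet.generate_key()
permits = {
    "owner": Fernet(owner_key).encrypt(content_key),
    "heir": Fernet(heir_key).encrypt(content_key),
}

# Any single permit is enough to open the envelope.
recovered = Fernet(heir_key).decrypt(permits["heir"])
assert Fernet(recovered).decrypt(ciphertext) == b"the sealed payload"
```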

Thinking about Identity

Christopher Allen has long been associated with digital identity, dating back at least to his authorship of “The Path to Self-Sovereign Identity” and his founding of the Rebooting the Web of Trust workshops. Blockchain Commons did a variety of identity work in Q2.

HackerNoon. Christopher talked to HackerNoon about how “We’ve Lost the Privacy Plot”, a wide-ranging discussion of privacy and the internet.

No Phone Home. Digital identity is closely associated with digital credentials, which detail who and what an identity represents. Unfortunately, credential design is growing problematic because much of it is phoning home: alerting issuers when and where credentials are used. That’s why Blockchain Commons recently signed on to the No Phone Home initiative, to try and bring attention to this fundamental problem in digital identity.

XID Core Concepts. Blockchain Commons has its own answer for self-sovereign identity: the XID, or extensible identifier. We’ve been working on a tutorial course to show everything about how XIDs work. So far, we’ve developed a set of core concepts docs, which not only give an overview of how XIDs work, but also show how they link into Blockchain Commons’ larger architecture. This is still a work in progress. (We’re about to begin work finalizing the linked tutorials.) But, if you want to take an early look, the core concepts files have all been closed out as revised drafts.

Fair Witness. Some of Blockchain Commons’ work is advocacy (like our discussion with HackerNoon and our signing on to No Phone Home) and some is pragmatic (like our XID work). But we also try to be future-looking. That’s what Christopher’s “Fair Witnessing” Musings was about. It’s a new look at Verifiable Credentials that focuses on the limitations of what we can actually perceive.

Bitcoin Policy Summit. We’re thrilled to see some of our thinking about identity starting to have an effect on the larger world. A few of Christopher’s identity articles were referenced in the Bitcoin Policy Institute’s recent white paper on “Building a Trustworthy Digital Future” and as a result, Christopher was asked to talk at the Bitcoin Policy Summit in Washington D.C. this June. (More on the results of that in the coming quarters!)

Web Updates

Our web pages are intended as a resource for developers so that they can understand and implement our technologies. Here are some of the updates we made this quarter:

Updated Projects Page. Our projects page has always been a central index to our most important work, but that type of thing gets out of date as priorities change, so we’ve done a big update to align it with our most recent iteration of our developer pages and to otherwise highlight important recent work like our meetings for FROST. Take a look at what we consider our most relevant work as of early 2025!

New Envelope Seeds Page. We also released a new page on Seeds in Gordian Envelope: how and why you’d want to store seeds in envelopes, complete with examples of how to use envelope-cli for experimentation. (This was also the heart of the demo we made to Zingo Labs, so you can take a look at several of our easiest-to-implement technologies here.)

CLI Updates

As usual, we’ve been making updates to our apps and libraries. In Q2 that included two CLI releases:

dCBOR-CLI. We have a new CLI for dCBOR! It validates dCBOR input (using CBOR diagnostic as its default input) and produces output in several formats (using hex as its default output).

Envelope-CLI. Our Envelope CLI now has new pattern matching to make it easier to find specific leaves.

Library Updates

There were also lots of updates to our libraries, focusing on our Rust stack.

Mass Crate Update. The vast majority of our Rust crates have been updated to support new developments that occurred while we were working on ZeWIF. This includes:

- bc-rand 0.4.0
- bc-crypto 0.9.0
- bc-shamir 0.8.0
- dcbor 0.19.0
- bc-tags 0.2.0
- bc-ur 0.9.0
- sskr 0.8.0
- bc-components 0.21.0
- known-values 0.4.0
- bc-envelope 0.28.0
- provenance-mark 0.8.0
- bc-xid 0.8.0
- bc-envelope-cli 0.14.0
- gstp 0.8.0

Argon2id. Argon2id support has been added to bc-crypto and bc-components, as well as the EncryptedKey type.
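For context, Argon2id is a memory-hard, password-based key derivation function. Here is a generic sketch using the Python argon2-cffi package (not bc-crypto’s Rust API; the parameters below are illustrative):

```python
import os
from argon2.low_level import hash_secret_raw, Type

# Derive a 32-byte symmetric key from a password. Memory-hardness makes
# brute-forcing the password expensive even on GPUs/ASICs.
salt = os.urandom(16)
key = hash_secret_raw(
    secret=b"correct horse battery staple",
    salt=salt,
    time_cost=3,
    memory_cost=64 * 1024,  # in KiB
    parallelism=4,
    hash_len=32,
    type=Type.ID,  # Argon2id
)
```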

envelope-pattern. The new envelope-pattern crate is a pattern matcher and text syntax pattern parser for Gordian Envelope, allowing you to match specific structures within envelopes.

dcbor-parse. The new dcbor-parse crate parses and composes the CBOR diagnostic notation into dCBOR (deterministic CBOR) data items.

Here are our major dcbor crate updates:

- dcbor-parse 0.1.1 (NEW)
- dcbor-cli 0.7.1 (heavy update)
- dcbor 0.19.1 (minor changes)

Coming up, we have work on FROST, XID, and more. Sign up for the Gordian Developer Meeting announcement-list to be informed of our upcoming presentations, and please consider becoming a patron of Blockchain Commons or talking with us about partnering on a specific project.

Tuesday, 05. August 2025

Digital ID for Canadians

The Digital ID and Authentication Council of Canada (DIACC) Written Submission for the Fall 2025 Pre-Budget Consultations


Submitted by: Joni Brennan, President

List of recommendations:

- Recommendation 1: That the government fund and deploy an interoperable, reusable digital credentials login solution for federal services, modeled after widely-used single sign-on tools in the private sector.
- Recommendation 2: That the government invest in Canadian-based trust infrastructure, including domestic cloud and data centres, to support AI-readiness, digital sovereignty, and economic resilience.
- Recommendation 3: That the government advance interoperability to unleash digital trade and labour mobility.

Introduction

Thank you for the opportunity to provide input in advance of Budget 2025. In a time of economic, technological, and geopolitical uncertainty, Canada must act with urgency to reinforce the foundation of a strong, secure, and competitive digital economy: trust.

Whether enabling interprovincial labour mobility, reducing fraud in real estate and finance, or ensuring AI tools are used responsibly, verifiable trust infrastructure is central to our countryʼs economic stability and resilience. Trust is not just a principle, it is the experience citizens have when interacting with government services that are as seamless, secure, and intuitive as private-sector platforms. Without secure and scalable identity verification, Canadian businesses face rising fraud costs, compliance burdens, and lost consumer confidence. Citizens and professionals are delayed in accessing critical services or moving where they are needed most. And governments are challenged to keep pace with accelerating threats in an AI-driven world.

Now is the time to deliver trust through experience by investing in practical tools and Canadian infrastructure that protect citizens, unlock innovation, and future-proof our economy.

About DIACC

The Digital ID and Authentication Council of Canada (DIACC) is a non-profit public–private coalition created following the federal Task Force for the Payments System Review. DIACCʼs mission is to accelerate digital trust adoption by enabling privacy-respecting, secure, and interoperable identity systems.

DIACC is the steward of the Pan-Canadian Trust Framework (PCTF) — an industry-developed, standards-based, technology-neutral framework designed to enable scalable, certifiable digital trust infrastructure that meets the needs of governments, businesses, and individuals.

The PCTF has been developed in collaboration with experts from federal, provincial, and territorial governments as well as industry and civil society. It supports verifiable credentials, authentication services, fraud prevention, and information integrity across the Canadian digital economy.

Canadaʼs Urgent Trust Deficit

Canada faces a growing trust deficit that threatens economic growth, competitiveness, and national resilience. Three converging challenges demand action:

- AI-accelerated misinformation and identity theft – Generative AI tools are enabling the rapid creation and dissemination of fake identities, fraudulent documentation, and disinformation. Without robust authentication systems and verifiable credentials, the authenticity of people, data, and services becomes harder to determine—eroding consumer confidence and legal certainty.
- Rising fraud and its impact on the economy – Many of Canadaʼs key sectors are increasingly subject to fraud. The real estate sector, for example, is increasingly experiencing impersonation and illicit financial flows, especially in transactions involving unrepresented parties. Mortgage fraud, title theft, and manipulated documentation are rising, yet identity verification practices remain outdated and fragmented.
- Barriers to labour mobility and seamless trade – Many professionals face long delays and duplicative processes when seeking to work across provinces or internationally. Businesses struggle to comply with evolving regulatory requirements and to compete globally without recognized, verifiable credentials.

Recommendations

DIACC offers three core recommendations to address these threats and seize the opportunity to lead globally in trusted digital innovation.

Recommendation 1: That the government fund and deploy an interoperable, reusable digital credentials login solution for federal services modeled after widely-used single sign-on tools in the private sector.

The Government of Canada should develop and implement a digital credentials login solution that enables citizens to access federal services with one secure, consistent experience — similar to how they use Google or Apple sign-in options across the internet. For example, with a trusted credential, Canadians could log into a real estate registry, file taxes, or access health records using one verified identity, reducing friction and fraud risk while improving convenience and access.

These credentials should be certified against open standards such as the PCTF, enabling individuals to verify their identity once and reuse it securely across services. The government is also encouraged to take a longer-term view by building compatibility across federal, provincial, and municipal digital credentials systems.

Recommendation 2: That the government invest in Canadian-based trust infrastructure, including domestic cloud and data centres, to support AI-readiness, digital sovereignty, and economic resilience.

Verification and authentication tools are essential infrastructure in an AI-driven economy. As AI-generated content, synthetic identities, and manipulated documents become increasingly sophisticated, the ability to verify the provenance and traceability of information and data becomes even more vital.

DIACC recommends that the government:

Recognize authentication and verification tools as critical components of Canadaʼs AI strategy and cybersecurity agenda.
Fund the adoption and certification of privacy-respecting, standards-based solutions, such as the PCTF.
Prioritize collaborative development of tools that verify identity, documentation, and information authenticity while preserving user privacy.
Ensure data residency through investment in Canadian-based private cloud and hardware services.

A proactive, standards-aligned approach will support:

Responsible AI deployment across sectors.
Secure digital service delivery.
Reduced liability for businesses and professionals relying on verified information.
Greater resilience against misinformation and fraud in elections, commerce, and public discourse.

Recommendation 3: That the government advance interoperability to unleash digital trade and labour mobility.

Interoperability is key to reducing friction, unlocking economic opportunity, and ensuring Canada remains globally competitive. DIACC recommends that the government:

Support cross-government and cross-border interoperability by recognizing frameworks such as the PCTF in legislation, procurement, and policy.
Advance mutual recognition of trust frameworks with international partners (e.g., between PCTF and the EUʼs eIDAS 2.0 framework).
Enable the use of verified credentials for regulatory compliance, licensure, and interprovincial labour mobility.
Accelerate digital transformation across public services using certifiable trust services.

This approach will help:

Enable professionals and skilled workers to move between provinces without redoing verification processes.
Simplify cross-border regulatory compliance for Canadian exporters and importers.
Allow micro, small and medium enterprises — including in rural, Indigenous and remote communities — to offer services and products across Canada and beyond without prohibitive onboarding costs.
Ensure that public sector modernization efforts are secure, accessible, and efficient.

The Road Ahead

Canada is at a turning point. The foundation of trust that underpins our digital and economic systems is under strain, but the tools and standards to reinforce it already exist. Frameworks like the PCTF offer governments and businesses practical, scalable solutions that:

Meet privacy, security, and accessibility requirements.
Support inclusive digital access for underserved communities.
Complement formal standards and enable rapid deployment.
Preserve Canadaʼs digital sovereignty.

Budget 2025 offers a strategic moment to invest in these tools, not just as a technical fix but as a long-term economic, national security, and democratic priority.

Conclusion

Trust is Canadaʼs most valuable economic asset in the digital age. Whether enabling a small business to sell across borders, a citizen to access services securely, or a hospital to verify a clinicianʼs credentials during a crisis, trust infrastructure is the connective tissue of our digital society. DIACC welcomes further collaboration with federal partners to ensure Canadians can interact, transact, and innovate with confidence in a digital-first world. Thank you once again for the opportunity to provide our input in advance of Budget 2025 and as we collectively move forward on the path to a digitally and economically prosperous Canada.

Monday, 04. August 2025

We Are Open co-op

Just talk to people: Why we love community calls

Community calls have been a cornerstone of my engagement and community building practice for well over a decade. I started a couple community calls at Greenpeace International, one of which continues a good 8 or 9 years later. Regular readers have probably heard of the Open Recognition is for Everybody (ORE) call. Now, we’re spinning one up with Amnesty International UK.

In this blog post, I reflect on why community calls hold such significance to me. For me, community calls are vital hubs where connections, ideas, and growth converge.

Being open and inclusive cc-by-nd Bryan Mathers for WAO

We all stand on the shoulders of giants. I learned about community calls way back when I was working at Mozilla. At Mozilla, we had community calls for communities, sub-communities, projects and procedures. The majority of our meetings were simply open – if you knew about it, you could join. Sometimes we promoted calls and asked for participation, sometimes we just waited to see who showed up. If you showed up to a call, you were included, whether anyone had specifically invited you or not.

Working openly is a transformative way to bring people together and create safe spaces where people can share challenges, celebrate successes, and co-create solutions. The regularity of designated community calls helps build trust and camaraderie, turning strangers into collaborators. By creating space for diverse voices, we enrich our problem-solving approaches and ensure that decision-making reflects a wide range of perspectives. These inclusive practices have strengthened communities, making them more resilient and adaptable.

Strengthening the community cc-by-nd Bryan Mathers for WAO

If I think about all the different community calls I’ve been a part of, and all the people I’ve met because of open community calls, I’m reminded of the power those calls have to shape things. These calls aren’t presentations, webinars or regular meetings; they are spaces where people have the power to shape the agenda and talk about issues that matter to them.

Part of building successful community calls is letting go of trying to control conversations. While we might put together a loose agenda to guide a community call, we strive to make sure that everyone in attendance feels like their being there matters. Community calls are not transactional; they are spaces that help us be part of a community, find ways to amplify more voices and work together.

Just talk to people cc-by-nd Bryan Mathers for WAO

Part of what makes a community thrive is helping people find a place of belonging. Community calls help by providing a flexible space where people can just talk to each other. They are not just about discussing details or progress in a project, but also about celebrating one another and figuring out what a collective future for a project or an idea might be. They’re great spaces to figure out problems, while also getting to know the people you’re collaborating with.

Community calls have been instrumental in shaping open projects and building meaningful connections. Their role in driving innovation, inclusivity, and adaptability is immeasurable. As I look forward to building open calls for the AIUK community, I am reminded that these calls are not just meetings—they're milestones in a journey towards a more connected and collaborative world.

If you are looking for a way to encourage a group of people to co-create and collaborate, check out 11 steps to running an online community meeting. It’s a resource that I wrote almost a decade ago and continue to return to over and over as I work to connect people working to make the world a better place.

Friday, 01. August 2025

FIDO Alliance

MobileIDWorld: Google Chrome Enhances Security with Mandatory Biometric Authentication for Password Autofill

Google has implemented significant enhancements to biometric authentication and security features in Chrome and Google Workspace, marking the latest step in the company’s broader push toward stronger authentication methods. These updates build upon previous Chrome security improvements while addressing critical vulnerabilities in desktop password management.

Wednesday, 30. July 2025

Hyperledger Foundation

Staff Corner: Celebrating Ethereum’s 10-Year Anniversary

This summer marks ten years since the Ethereum network launched, forever transforming the blockchain space and introducing the world to smart contracts, decentralized applications, and programmable trust. At LF Decentralized Trust (LFDT), we’re proud to join the global community in celebrating this milestone and spotlighting the critical role our projects and contributors play in Ethereum’s growth and evolution.

Tuesday, 29. July 2025

Hyperledger Foundation

Developer Showcase Series: Antonio Mota, Vice President, Applications Developer and Lead Analyst, DLT Centre of Excellence, Citi Innovation Labs

Back to our Developer Showcase Series to learn what developers in the real world are doing with LF Decentralized Trust technologies. Next up is Antonio Mota, Vice President, Applications Developer and Lead Analyst, DLT Centre of Excellence, Citi Innovation Labs.


The Engine Room

PARTNERING FOR IMPACT: BUILDING SAFE DIGITAL SPACES WITH TECHHER NIGERIA

How can technology serve as a tool for justice, not harm? Our recent six-month Matchbox partnership with TechHer Nigeria helped us explore exactly that. 



Digital Identity NZ

Open spaces, open hearts, open minds…time to add Open data to the mix!

The time has come to put open data at the heart of the New Zealand story.  By this I mean the deployment of digital public infrastructure to secure our data so it can flow smoothly and safely with the correct permissions. Change is hard. System change is even harder. But that doesn’t mean we shouldn’t be aspirational.

It is encouraging to see Hon Judith Collins’ “we’ll have a Government app by Christmas” programme take a significant step forward with the appointment of two exceptional New Zealand tech companies.

Congratulations to both Dave Clark NZ and MATTR teams. This is exactly the kind of partnership between government and local tech that strengthens our digital economy and showcases New Zealand innovation on the world stage – DIA announcement here.

The structural separation of digital identity from the traditional tech stack represents a seismic shift for the industry and for how systems are designed and developed. Despite extensive consultation as the technology has evolved over the past few years, the long-awaited reference architecture and design for government implementation remain outstanding, and delivering them is now a priority.

As we build our profile, we are grateful that so many accomplished changemakers are stepping up to speak at our highly anticipated Digital Trust Hui on 12 August:

Matthew Evetts – Partner – Digital & Cyber, KPMG
Tim Ransom – Product Manager – Public, Community and Consumer Health, Te Whatu Ora
Joel Foster – Chief Commercial Officer, Lumin
Helen Littlewood – Senior Product Manager, Worldline Contactless
Silona Bonewald – President, LeadingBit Solutions | Open Source Evangelist & Standards Expert
Kristy Phillips – Chair, Hospitality New Zealand
Anna Curzon – Chair, B416
Don Christie – Managing Director, Catalyst IT
Myles Ward – Deputy Government Chief Digital Officer, DIA
Maria Robertson – Chair, Digital Identity New Zealand

It will be beneficial at the Hui to see DIA outline the work underway to address policy and regulatory obstacles for accredited providers, specifically regarding the issuance of credentials such as driving, education, travel, and age assurance, as well as DIA-issued credentials under DISTF accreditation.

Have you secured your spot at the Digital Trust Hui Taumata?

Join us on 12 August in Wellington for a full day of keynotes, panels, roundtables, and exhibits, expertly guided by MC Ngapera Riley. Hear from leaders including Ministers Judith Collins and Scott Simpson, James Monaghan (MISSION), Myles Ward (DIA), Liz MacPherson (Privacy Commissioner), Helen Littlewood (Worldline), Christopher Goh (Austroads), and Andrew Weaver (Payments NZ). With 30+ speakers and support from sponsors including Payments NZ, Worldline, IMB, KPMG, Lumin, Ping Identity, NEC, DIA, Westpac, MATTR, Middleware Group, JNCTN, and MinterEllisonRuddWatts, this is a must-attend event shaping Aotearoa’s digital trust future. View the Programme and Register now!

DINZ Strategy Refresh

New Zealand stands at the frontier of a digital future where open, trusted data unlocks better services, richer insights, and empowered citizens.

There is an increasing consensus among government, industry, Iwi, communities, and citizens to collaboratively undertake a bold, values-led transformation of our data systems. This transformation will be underpinned by transparent and auditable public infrastructure.

The upcoming digital identity strategy refresh will be looking at, amongst other things, how to accelerate credential uptake, how to unlock productivity with digital identity and the role of identity solutions in the world of AI and agentic systems.

I’m looking forward to this session with our Executive Council in the coming weeks as we work to focus our energy on making real change to support our vision of a country where people can express their identity using validated and trusted digital means in order to fully participate in a digital economy and society.

In Other News

Dave Clark NZ and MATTR secure major government contract
Dave Clark NZ and digital identity firm MATTR have won a significant New Zealand government contract to develop a new app focused on secure digital identity services. Read more.

Digital ID developments won’t replace wallets just yet
Despite progress in digital identity systems, experts say physical IDs and wallets will remain necessary for the foreseeable future due to infrastructure and adoption challenges. Read more.

New Zealand launches world-first deepfake experiment to build public trust
New Zealand is leading a groundbreaking deepfake detection trial aimed at boosting public awareness and resilience against synthetic media manipulation. Read more.

Department of Internal Affairs (DIA) announces updated rules for the DISTF
The feedback has been thoroughly analysed and presented to both the Trust Framework Board and the Minister for Digitising Government. As a result, the updated rules went into force on 24 July 2025. For a comprehensive overview of the updated rules and a summary of the feedback received, please visit the Trust Framework for Digital Identity Legislation page.

With a realistic government timeline now established, we must collectively prepare for launch. There is a significant amount of work ahead, and no time to lose.

Fortunately, our Digital Identity NZ membership possesses world-class capabilities, ideally suited to assist with consultation and bridge delivery gaps as we move forward.

Ngā mihi nui,

Andy Higgs
Executive Director, Digital Identity NZ

Banner image credit: Rocket Lab


Monday, 28. July 2025

Digital ID for Canadians

Building Trust at the Frontlines – Client ID Verification for Canada’s Auto Finance Sector

As fraud, regulatory scrutiny, and consumer expectations evolve, Canadian Automotive Retail and Finance Sectors are under pressure to modernize identity verification (IDV) practices. This report summarizes insights from DIACC’s recent industry forum, where finance leaders, dealership executives, and regulators explored the impact of digital IDV solutions on reducing fraud, improving operational efficiency, and fostering consumer trust.

Download the report here.


MyData

Saving Journalism with Sectoral Data Bargaining

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. High-quality journalism is critical […]

DIF Blog

DIF Newsletter #52

August 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Partners with MIT's Project NANDA for Packed Event

DIF partnered with MIT's Project NANDA for a well-attended event at Circuit Launch on July 18, 2025, where Ramesh Raskar shared his vision for architecting an Internet of AI Agents. DIF Technical Steering Committee chair Andor Kesselman served as co-host, with robust discussion focusing on the relevance of decentralized identity for the agentic web.

During and after the event, discussion covered how DIF's decentralized identity standards can provide the cryptographic foundation for Project NANDA's agentic web, enabling persistent, verifiable identifiers for AI agents operating autonomously across organizational boundaries. This collaboration addresses the critical challenge of maintaining human oversight and trust as AI systems gain greater agency, ensuring that robust identity infrastructure underpins the future of AI collaboration.

DIF co-hosted event with Project NANDA in Mountain View, CA on July 18, 2025

Global Digital Collaboration Conference in Geneva

DIF staff and community had a strong presence at the Global Digital Collaboration (GDC) conference in Geneva on July 1-2, 2025, with our community delivering key presentations across multiple tracks. Highlights included our opening session on Agentic AI and Digital ID, addressing critical trust challenges as AI agents become more autonomous. Additional DIF-led sessions covered identity foundations for Industry 4.0 transformation, decentralized identifiers for global interoperability, and digital human rights.

DIF community at GDC

The conference demonstrated the maturity of decentralized identity solutions, with presentations spanning from privacy-enhancing technologies and trust management for wallets to enterprise blockchain implementations and regulatory compliance frameworks. The diverse roster of sessions highlighted how decentralized identity is being deployed across sectors—from government digital infrastructure projects to enterprise security applications—while fostering meaningful dialogue between technologists, policymakers, and standards bodies globally.

Visit the GDC website for session videos and stay tuned for the upcoming Book of Proceedings featuring detailed insights from all presentations.

DIF Labs Beta Cohort 2 Projects Launch

Following the selection process completed in June, DIF Labs Beta Cohort 2 is now in full swing with three innovative projects that push the boundaries of decentralized identity technology. The selected projects focus on:

Privacy-preserving verifiable credentials using advanced cryptographic techniques
Anonymous multi-signature protocols for group credentials
Novel approaches to privacy-preserving revocation mechanisms

Each project team is receiving dedicated mentorship from DIF's expert community and is committed to delivering open-source implementations that will benefit the entire ecosystem. The projects are scheduled to run through September 2025, with regular check-ins and a final showcase event planned.

See the DIF Labs web site for more information.

DIF Showcases Decentralized Identity for Seamless Travel Experience at HITEC Conference

DIF presented at HITEC 2025 in Indianapolis, where hospitality technology veterans mapped out how self-sovereign identity can align data privacy with friction-free travel experiences. The session featured Douglas Rice (Managing Director, Hospitality Technology Network), Nick Price (former CIO, Mandarin Oriental Hotel Group & citizenM), Kim Hamilton Duffy (DIF Executive Director), and Bill Carroll (CEO, Marketing Economics; retired Cornell professor). The speakers demonstrated how traveler-controlled digital wallets can enable seamless journeys—from AI-powered trip planning with auto-completing visa forms to face-scan boarding, NFC room keys, and verified guest reviews.

The presentation highlighted real-world momentum with examples including EU "Seamless Travel" pilots across 27 member states, Cathay Pacific's Hong Kong-Tokyo trial packing seven credential types into passenger wallets, and Bhutan's nationwide digital identity program built on decentralized identifiers. The speakers emphasized that centralized data silos are incompatible with the hyper-personalization guests demand, while self-sovereign identity delivers immediate benefits including reduced breach risks, lower cyber-insurance premiums, and improved revenue through verified, up-to-date traveler data.

See Alex Bainbridge's article for a detailed recap, as well as the recording and presentation deck. Additional coverage available at PhocusWire.

DIF H&T team at HITEC conference

🛠️ Working Group Updates

Browse our active working groups here

Creator Assertions Working Group

The Creator Assertions Working Group made significant progress on terminology standardization and integration with broader credential frameworks. A key development was the group's decision to transition from "identity assertions" to "attribution assertions," recognizing that "attribution" better captures the essence of content creation claims while being less controversial in industry discussions. The team continued advancing their integration with the C2PA ecosystem and made substantial progress on metadata assertion standards. Work also progressed on media identifier systems and the development of flexible metadata frameworks that can accommodate various content types and use cases.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ team achieved significant milestones in pseudonym generation and post-quantum security considerations. Key developments included the finalization of polynomial evaluation methods for pseudonym generation, addressing potential security vulnerabilities from adversarial users through more robust cryptographic approaches. The team made substantial progress on test vector development for the 0.9 draft release and continued coordination with IETF standardization efforts. Discussions also covered the efficiency implications of different cryptographic commitment schemes and their practical applications in large-scale deployments.
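For intuition only, here is a toy Python sketch of the general idea behind deriving per-context pseudonyms by evaluating a polynomial in a secret at a context-dependent point. It is a didactic stand-in under loose assumptions, not the BBS pseudonym construction the group is standardizing, which operates over elliptic-curve groups with accompanying zero-knowledge proofs.

```python
import hashlib

# Toy parameters only: a Mersenne prime field, nowhere near production-grade.
P = 2**127 - 1

def h(data: str) -> int:
    """Hash a string to a field element."""
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big") % P

def pseudonym(secret: int, context: str) -> int:
    """Evaluate f(x) = secret + secret^2 * x at x = H(context).
    The same user and context always yield the same pseudonym, while
    different contexts yield values that are hard to link without the
    secret. Illustrative only; not the actual BBS scheme."""
    x = h(context)
    return (secret + pow(secret, 2, P) * x) % P

sk = h("user-master-secret")        # stand-in for a user's secret key
print(pseudonym(sk, "verifier-A"))  # stable for this verifier
print(pseudonym(sk, "verifier-B"))  # different value for another verifier
```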

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused intensively on W3C standardization efforts and refining the evaluation process for DIF-recommended DID methods. Significant progress was made on the proposed W3C DID Methods Working Group charter, with the team addressing concerns about blockchain inclusion and standardization scope. The group refined evaluation criteria for DID method proposals, emphasizing the need for multiple implementations, significant deployments, and clear compliance with DID traits. Work continued on balancing objective criteria with expert evaluation to ensure high-quality recommendations while maintaining transparency in the assessment process.

👉 Learn more and get involved

Identifiers and Discovery Working Group

Multiple work streams advanced significantly this month. The DID:webvh team made substantial progress toward their 1.0 specification release, with multiple implementations now passing comprehensive test suites and performance analysis demonstrating efficient handling of DID updates. The DID Traits team prepared for their 1.0 release, focusing on key validation capabilities and long-term availability requirements. The group also explored applications in software supply chain contexts and examined compliance with emerging regulations like the EU's Cyber Resilience Act, demonstrating the practical relevance of decentralized identifiers in enterprise environments.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced work on binary encoding support through the CBOR implementation, positioning it as an optional feature for version 2.2 with potential to become the default in future major releases. The team addressed important technical challenges around message encoding detection, MIME type handling, and implementation compatibility. Significant discussions covered privacy considerations and "phone home" concerns in credential verification systems, with the group exploring how verifiable credentials can be presented without requiring direct communication with issuers. The group also examined DIDComm applications in AI agent-to-agent communications.
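For a concrete sense of what binary encoding buys, the sketch below (assuming the third-party cbor2 package) round-trips a DIDComm v2-style plaintext message through CBOR and compares sizes with JSON. The field names follow the DIDComm v2 message shape, but this is not the working group's normative CBOR mapping.

```python
import json
import cbor2  # pip install cbor2

# A DIDComm v2-style plaintext message (headers per the v2 envelope shape).
msg = {
    "id": "1234567890",
    "type": "https://didcomm.org/basicmessage/2.0/message",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "created_time": 1721700000,
    "body": {"content": "Hello from a DIDComm sketch"},
}

as_json = json.dumps(msg).encode("utf-8")
as_cbor = cbor2.dumps(msg)

print(len(as_json), "bytes as JSON")
print(len(as_cbor), "bytes as CBOR")  # typically smaller
assert cbor2.loads(as_cbor) == msg    # round-trips losslessly
```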

👉 Learn more and get involved

Claims & Credentials Working Group

The Credential Schemas team launched their community schemas initiative, creating a framework for organizations to contribute verifiable credential schemas to a shared repository for potential standardization. Significant progress was made on aligning their basic person schema with schema.org standards while maintaining compatibility with existing frameworks like OIDC and UK ID assurance. Key developments included extending postal address schemas for banking KYC requirements, refining terminology around personhood verification credentials, and establishing processes for schema synchronization between repositories. The team also began exploring employment credentials and anti-money laundering certifications as future development priorities.
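As a rough illustration of what a schema.org-aligned basic person subject could look like, here is a hedged sketch; the property names come from schema.org, while the overall shape is illustrative rather than the team's published schema.

```python
# Sketch of a basic person credentialSubject aligned with schema.org terms.
# Illustrative only; the DIF Credential Schemas repository is authoritative.
person_subject = {
    "@context": {"schema": "https://schema.org/"},
    "schema:givenName": "Ada",
    "schema:familyName": "Lovelace",
    "schema:birthDate": "1815-12-10",
    "schema:address": {  # postal address, as extended for banking KYC needs
        "schema:streetAddress": "1 Example Street",
        "schema:addressLocality": "London",
        "schema:postalCode": "EC1A 1AA",
        "schema:addressCountry": "GB",
    },
}
```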

👉 Learn more and get involved

Hospitality & Travel Working Group

The newly launched Hospitality & Travel Working Group hit the ground running with substantial progress on the HAT Pro specification. The team developed comprehensive schemas for food preferences, dietary restrictions, and accessibility requirements, utilizing graph-based models to avoid data duplication and improve cross-referencing capabilities. Key developments included the creation of UML models and JSON schemas for complex preference structures, exploration of AI-assisted data input to simplify user experiences, and the establishment of engagement processes for subject matter experts across various travel sectors. The group is preparing for major presentations at industry events and has launched a dedicated website to showcase their work.

👉 Learn more and get involved

DIF Labs Working Group

DIF Labs Beta Cohort 2 projects are now in active development phase, with three selected projects working on cutting-edge privacy-preserving technologies. The projects focus on legally binding verifiable credentials using Qualified Electronic Signatures (QES), comparative analysis of privacy-preserving revocation mechanisms, and anonymous multi-signature verifiable credentials. Each project team is receiving dedicated mentorship from DIF's expert community and is committed to delivering open-source implementations that will advance the broader decentralized identity ecosystem. The program continues to demonstrate the value of focused, mentored development in advancing the state of the art.

👉 Learn more and get involved

🌎 DIF Special Interest Group Updates

Browse our special interest groups here

DIF Africa SIG

The Africa SIG featured an impressive deep-dive presentation on Ethiopia's national digital identity system, Faida, and its associated digital credential platform FaidaPass. Representatives from Ethiopia's national ID system provided detailed insights into the architecture, features, and implementation of this groundbreaking system, which serves as one of the first full-scale standards-compliant deployments globally. The presentation highlighted the use of decentralized verification models, biometric authentication capabilities, and self-sovereign identity principles, while addressing innovative solutions for non-smartphone users and future monetization strategies. The session demonstrated Africa's leadership in practical digital identity implementation.

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted comprehensive presentations on digital identity solutions in Australia, featuring True Vault's approach to creating a decentralized identity ecosystem. Discussions covered recent regulatory changes including the Digital Identity Act, international standards alignment, and the challenges of achieving interoperability across different jurisdictions. The group explored the evolution from manual to digital identity verification methods and examined the potential for global expansion of digital identity solutions while addressing privacy concerns and user control requirements. The session highlighted Australia's voluntary approach to digital identity and its implications for regional adoption.

👉 Learn more and get involved

DIF Japan SIG

The Japan SIG focused on recent developments in DID and AI agent authentication, with participants sharing updates on their organizations' initiatives and emerging challenges. The group explored the intersection of decentralized identity with AI systems and examined potential applications across various sectors. Key discussions included the unique requirements for AI agent identity management and the challenges of implementing decentralized identity principles in automated systems. The group also considered future meeting formats and potential offline events to enhance community engagement and collaboration.

👉 Learn more and get involved

DIF Hospitality & Travel SIG

The Hospitality & Travel SIG hosted presentations highlighting decentralized identity adoption in the travel industry. Key sessions included discussions with Microsoft on AI-driven cybersecurity solutions for hospitality, analysis of Apple's digital identity announcements and their implications for travel, and exploration of AI agents' potential to revolutionize customer interactions in travel and hospitality. The group examined both opportunities and challenges in implementing decentralized identity solutions across various travel scenarios, from border crossing to personalized service delivery.

👉 Learn more and get involved

📖 DIF User Group Updates

DIDComm User Group

The DIDComm User Group explored practical implementations and emerging applications of the DIDComm protocol, with particular focus on AI agent communications. Key discussions included demonstrations of new systems and their communication protocols, exploration of generative AI communication frameworks and their similarities to DIDComm approaches, and examination of security considerations for AI agent interactions.

👉 Learn more and get involved

📢 Announcements

H&T Working Group Launches Blog

As Hospitality & Travel activity increases at DIF, Autoura CEO Alex Bainbridge has launched a special H&T focused blog. We encourage you to visit and subscribe for updates.

🆔 Get involved! Join DIF

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

🐦 Follow us on Twitter/X
💻 Join us on GitHub
📺 Subscribe on YouTube
🔍 Read the DIF blog

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation if you need more information.

Thursday, 24. July 2025

Oasis Open

LegalXML – Electronic Court Filing V5.01 Errata Public Review


The LegalXML – Electronic Court Filing TC is pleased to announce that ECF V5.01 Errata 01 is now available for public review and comment. The public review is now open and ends August 9, 2025 at 23:59 UTC.

Electronic Court Filing Version 5.01 Errata 01
Committee Specification Draft 01
21 June 2025

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.docx

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.html

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.pdf

How to Provide Feedback

OASIS and the LegalXML – Electronic Court Filing TC value your feedback. We solicit input from developers, users, and others, whether OASIS members or not, for the sake of improving the interoperability and quality of this technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. TC members should send in comments via the TC mailing list. All others should submit to the comment mailing list after following instructions listed here.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. 

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the LegalXML – Electronic Court Filing TC can be found at the TC public home page here.

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] http://www.oasis-open.org/committees/legalxml-courtfiling/ipr.php

Intellectual Property Rights (IPR) Policy

RF on Limited Terms Mode



Anthropic, EY, and Johns Hopkins APL Executives Elected to OASIS Board to Drive Open Development and Global Collaboration

Jay White of Microsoft and Pablo Breuer Secure Additional Terms, Reinforcing Continuity and Strategic Momentum at OASIS

Boston, MA, 24 July 2025 – OASIS Open, the international standards and open source consortium, announced the results of its 2025 Board of Directors Elections. Three newly elected members and two re-elected members joined the Board in providing strategic governance and leadership to advance OASIS’s mission of solving global challenges through open development and collaboration. 

OASIS is pleased to welcome new board directors Jason Clinton of Anthropic, Charles Frick of Johns Hopkins Applied Physics Laboratory (APL), and Sarah Liang of EY. OASIS also congratulates Jay White of Microsoft and Pablo Breuer on their re-election to additional terms. The continuing members of the Board are Jim Cabral, Gershon Janssen, Bret Jordan, Vasileios Mavroeidis, Daniel Rohrer, and Daniella Taveau.

“I’m delighted to welcome Jason, Charles, and Sarah to the Board, and to congratulate Jay and Pablo on their re-election,” said Gershon Janssen of Reideate, OASIS Board President and Chair. “Their expertise will be instrumental as we continue shaping open standards that address global challenges. I also want to thank our departing directors Jason, Daniel, and Omar for their outstanding service and lasting impact they’ve made on the OASIS community.”

New Board Members

Jason Clinton serves as Chief Information Security Officer (CISO) at Anthropic, where he guides security strategy, including detection and response, compliance, physical security, security engineering, and IT. He brings over a decade of experience in infrastructure security, having previously led efforts in defense against advanced persistent threats and contributed to major operating system and payment platform development. Jason also serves on the Coalition for Secure AI’s (CoSAI) Project Governing Board.

“The rapid advancement of AI makes robust, open standards more crucial than ever,” said Clinton. “Through initiatives like CoSAI, OASIS brings together industry leaders to develop frameworks that protect users while enabling innovation. I’m honored to join the OASIS board and work alongside leaders who share this commitment to responsible AI development.”

Charles Frick, a Chief Scientist in the cyber capabilities development group at Johns Hopkins Applied Physics Laboratory (APL), leads multiple research and pilot efforts focused on cybersecurity automation, machine-speed threat information sharing and operational resilience. He chairs the Indicators of Behavior (IoB) sub-project within the Open Cybersecurity Alliance (OCA), guiding development and adoption of new standards for behavior-based threat intelligence.

“As cybersecurity threats continue to evolve, transparency, interoperability and collaboration are essential,” said Frick. “I’m honored to join the OASIS board and contribute to its vital mission at the intersection of open standards and open source. I look forward to advancing standards that support automation, resilience and trust—especially in areas like behavior-based threat intelligence and cyber-physical system security—so that we can better protect critical infrastructure and global digital ecosystems.”

Sarah Liang is a Partner at EY and serves as the Global Responsible AI Leader, where she drives comprehensive AI governance initiatives throughout the firm and for client solutions worldwide. Her expertise encompasses monitoring the regulatory landscape, aligning with legal and compliance standards, designing AI risk management solutions, and developing responsible AI frameworks that operationalize governance without hindering innovation. Sarah actively participates in key standards organizations, including CoSAI, bringing cross-industry insights and collaborative approaches to standards development.

Liang noted, “I’m honored to join the OASIS Board of Directors. OASIS and the EY organization share a vision of driving the beneficial long-term impact of AI use through transparency, security, trust, and scalability. We have a responsibility to act now and define global standards for AI development and deployment. I look forward to working alongside my fellow board members to help transform businesses through sustainable growth and innovation, while contributing to long-lasting positive change.”

Outgoing Board Members

OASIS expressed sincere appreciation to outgoing Board members Jason Keirstead, Daniel Riedel, and Omar Santos for their valuable service during their tenure as directors. Everyone at OASIS extends our heartfelt thanks for their dedicated leadership and lasting contributions to the organization’s mission. To view the current Board of Directors, please visit our website.

Media inquiries: communications@oasis-open.org



Internet Safety Labs (Me2B)

PEPR ’25 – Safetypedia: Crowdsourcing Privacy Inspections

ISL had the opportunity to present at USENIX Association’s PEPR 2025 conference with a presentation entitled, “Safetypedia: Crowdsourcing Privacy Inspections”. The full video of the presentation can be viewed below:



FIDO Alliance

9to5Mac: Apple @ Work: Passkey portability is finally here in iOS 26 and macOS Tahoe 26

With iOS 26 and macOS Tahoe 26, Apple is solving a key problem. For the first time, Apple is adding support for true passkey portability. This means you can move your credentials from Apple Passwords to a dedicated password manager like 1Password, Dashlane, or Bitwarden, and even move them back. The system handles the transfer securely and locally, so you don’t have to worry about exporting plain-text CSV files and crossing your fingers that nothing gets exposed.

Passkey portability is built on a new standard from the FIDO Alliance that lets apps exchange credentials in a private and encrypted way. It uses Face ID, Touch ID, or your device passcode to approve the transfer. From the user’s perspective, it just works. And that’s exactly how it should be.


Reddit Implements Mandatory ID Verification for UK Users Under Online Safety Act

Reddit has implemented mandatory age verification for UK users to comply with the country’s Online Safety Act, which took effect in July 2025. The legislation requires digital platforms to prevent minors from accessing unsafe content, particularly mature or adult material, following Ofcom’s broader push for stricter online age verification across digital platforms.

The platform’s verification system requires UK users to submit either a government-issued ID, such as a passport, or a selfie through Persona, a third-party identity verification company. The approach follows successful implementations by other platforms, including Discord’s recent rollout of facial scan and ID verification in the UK. Persona handles the sensitive data to maintain user privacy, storing uploaded photos or IDs for a maximum of seven days without sharing the information with Reddit.

Reddit retains only the user’s verification status and birthdate, eliminating the need for repeated verification when accessing restricted content. Persona has confirmed it does not access Reddit user data, including subreddit activity. The privacy-focused approach matches emerging standards in digital identity verification, consistent with the principles established by the FIDO Alliance’s certification program for face-based remote identity verification.


The Engine Room

DANCES, KEYS, AND GARDENS: NURTURING COLLECTIVE DIGITAL CARE

Over the past three months, The Engine Room and Puentes have been exploring how to nurture connection, online and beyond the screen, in a world where division often seems to be the norm.


Wednesday, 23. July 2025

Project VRM

How CMPs Can Make Hay With Real VRM

By now you’ve seen one of these:

Never mind that you’re not running an ad blocker, but merely blocking tracking. Instead, note the small print in the lower right: “VRM by Admiral.”

By “VRM,” Admiral means this:

What we’re looking at here is the $0.5 billion Consent Management Platform business, currently dominated worldwide by OneTrust, with a 40% market share. In the US, Admiral is the leading provider to publishers, giving it a high profile there. In Europe, the leaders are OneTrust, Usercentrics, and CookieYes.

So here is a challenge for Admiral, OneTrust, and the rest of them: make VRM mean Vendor Relationship Management (like it says in Wikipedia).

Our case: real relationships are based on mutual trust, which can only happen if personal privacy is fully respected as a starting point. Consent management by cookie notice can’t cut it. For real trust, we need people to bring their own terms to every website’s table, and to have those terms agreed to. This is why we, the ProjectVRM community, through Customer Commons (our nonprofit spinoff) and the IEEE P7012 (aka MyTerms) working group, created the draft standard (on track to become official early next year) for machine-readable personal privacy terms. Three years ago, I called MyTerms The Most Important Standard in Development Today. The CMP business can help make it so, by getting on the Cluetrain.

Here are some opportunities:

1. CMPs can provide sites & services with easy ways to respond to MyTerms choices brought to the table by visitors. Let’s call this a Terms Matching Engine. The current roster of terms we’re working with at Customer Commons (abbreviated CuCo, hence the cuco.org shortcut) starts with CC-BASE, which is “service provision only.” It says to a website, “just give me your service, and nothing more.” In other words, no tracking. Yet. Negotiation toward additional provisions comes after that. Those can be anything, but they should be in the spirit of “We’re starting with personal privacy here, and the visitor sets the terms for that.” (A sketch of what such a machine-readable record might look like follows this list.)

2. There is a whole new business (which, like the VPN, grammar-help, and password management businesses, people would pay for) in helping people present, manage, remember, and monitor compliance with their terms, and what additional agreements have been arrived at. This can involve browser add-ons such as the one pictured on the ProjectVRM r-button page. CMP companies can make money there too, adding a C2B business to their B2B ones.

3. Go beyond #2 to provide real VRM. Back in the last millennium, Iain Henderson pointed out that B2B relationships tend to have hundreds or thousands of variables over which both parties need to agree. Nitin Badjatia, another CRM veteran (and a Customer Commons board member like Iain and myself), has also pointed out that companies like Oracle have long provided AI-assisted ways for B2B relationships to arrive at contractual agreements. The same can work for C2B, once the base privacy agreement is established. There can be a business here that expands on what gets started with that first agreement.

4. Verticals. There can be strong value-adds for regulated industries or companies wanting to acquire and signal accountability, or looking for firmer ways to establish a privacy regime better than so-called consent, which doesn’t work (except as thin ass-covering for companies fearing the GDPR and the CCPA). For example: banks, insurers, publishers, health care providers.

5. For people (not just corporate clients), CMPs could offer browser plugins or apps (mobile and/or computer) that help people choose and present their privacy terms, track who honors them, notify them of violations, and have r-buttons mean something. Or multiple things.
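To make the first opportunity concrete, here is a hedged sketch of what a machine-readable terms record and a trivial matching check might look like; the field names are illustrative inventions, with only the CC-BASE identifier taken from the roster above (IEEE P7012 defines the real schema).

```python
# Hypothetical sketch of a machine-readable MyTerms record.
# Field names are illustrative; the IEEE P7012 draft defines the real schema.
visitor_terms = {
    "standard": "IEEE-P7012-draft",  # assumed label, not normative
    "term": "CC-BASE",               # "service provision only" (Customer Commons)
    "meaning": "Provide the service requested; no tracking.",
    "negotiable": True,              # further provisions can be agreed later
    "agreed_log": [],                # where records of site agreements accumulate
}

def site_accepts(site_policy: set[str], term: str = "CC-BASE") -> bool:
    """Toy matching engine: the site agrees if the visitor's term is in its accepted set."""
    return term in site_policy

print(site_accepts({"CC-BASE", "CC-BASE-ATTRIBUTION"}))  # True
```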

Here is what a VRM-friendly person in the UK came up with as a prototypical first step by a CMP away from cookie notices:

That was after this post went up. (Which is great.)

Obviously, we want cookie notices (and other forms of friction) to go away, but we also want CMPs to have a nice way to participate in a customer-led world in which intention-based economies can grow.

And here is an example of r-buttons in a browser:

Real relationships, including records of agreements, can be unpacked when a person (not a mere “user”) clicks on either the ⊂ or the ⊃ symbols. There are golden opportunities here for both VRM and CRM vendors. And, of course, companies such as Admiral and OneTrust working both sides—and being truly trusted.

Give us more. (Like that cookie notice above.)


The Rubric

Verifying a Web of Truth (did:webvh, Part 2)

did:webvh adds historical verifiability to did:web, using cryptographic provenance to establish that the current DID document is the result of legitimate updates by the DID controller. Today on the show we talk with Stephen Curran and John Jordan, co-creators and implementers of the did:webvh specification.

References:
ACA-py Plug In https://aca-py.org/latest/features/PlugIns/
AnonCreds Specification https://hyperledger.github.io/anoncreds-spec/
DID...

Verifying a Web of Truth (did:webvh, Part 1)

did:webvh adds historical verifiability to did:web, using cryptographic provenance to establish that the current DID document is the result of legitimate updates by the DID controller. Today on the show we talk with Stephen Curran and John Jordan, co-creators and implementers of the did:webvh specification.

References:
ACA-py Plug In https://aca-py.org/latest/features/PlugIns/
AnonCreds Specification https://hyperledger.github.io/anoncreds-spec/
DID...
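Conceptually, each entry in the DID's version log commits to the hash of its predecessor, so a verifier can replay the whole history. The sketch below shows only that hash-chain idea under simplified assumptions; the actual did:webvh entry format, canonicalization, and signature checks are defined by the specification.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Canonical-ish hash of a log entry (illustrative; the spec defines real canonicalization)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def verify_chain(log: list[dict]) -> bool:
    """Check that every entry commits to the hash of the previous one."""
    for prev, cur in zip(log, log[1:]):
        if cur["previous_hash"] != entry_hash(prev):
            return False
    return True

log = [
    {"version": 1, "did_document": {"id": "did:webvh:example"}, "previous_hash": None},
]
log.append({
    "version": 2,
    "did_document": {"id": "did:webvh:example", "service": []},
    "previous_hash": entry_hash(log[0]),
})

print(verify_chain(log))  # True; tampering with entry 1 would break verification
```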

Next Level Supply Chain Podcast with GS1

Beyond the Box: Navigating EPR in Modern Supply Chains

The packaging around your product now matters just as much as the product itself.

For companies navigating Extended Producer Responsibility (EPR) laws, that's quickly becoming the reality. And it’s reshaping how teams think about data, packaging, and compliance.

In this episode, Lindsay Savage, Senior Director of Data Governance and Business Platforms at Georgia-Pacific LLC, joins hosts Reid Jackson and Liz Sertl to demystify what EPR means for manufacturers and retailers. 

With legislation ramping up across states, Lindsay explains how brands are preparing for complex reporting requirements, coordinating across departments, and turning sustainability regulations into opportunities for smarter product innovation.

In this episode, you’ll learn:

Why EPR is more than a packaging issue and why it matters now

How Georgia-Pacific is building scalable systems to manage regulatory data

Tips for companies just getting started, from legal teams to logistics

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:21) Lindsay Savage’s background

(03:16) What EPR means for your business

(04:50) Every state has its own rules

(06:15) Data overload and the push for standards

(07:36) Breaking down product, packaging, and pallet

(09:27) Two ways to report EPR data

(11:20) How to get started with EPR compliance

(12:40) Building cross-functional teams for success

(17:55) Embracing AI tools to stay ahead

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Lindsay Savage on LinkedIn
Check out Georgia-Pacific

Tuesday, 22. July 2025

MyData

A Human-Centric Roadmap for Europe

MyData Global has submitted A Human-Centric Roadmap for Europe to the European Commission’s public consultation on Data Union Strategy. The Roadmap has incorporated input from the MyData community, and the […]

Thursday, 17. July 2025

Origin Trail

Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot!

Microsoft Copilot is becoming the interface for how users work with AI across the Microsoft ecosystem. But what happens when you enhance Copilot with the ability to understand and remember structured, verifiable knowledge?

With the integration of the OriginTrail Decentralized Knowledge Graph (DKG) and the Model Context Protocol (MCP), you can build AI agents that reason over live data, contribute to shared memory, and deliver trusted outputs backed by cryptographic proofs.

By extending Microsoft’s AI infrastructure with OriginTrail, you equip Copilot agents with powerful capabilities for knowledge discovery, memory, and collaboration.

What is MCP?

The Model Context Protocol (MCP) is an open standard that defines how language models access and utilize tools and external data sources.

MCP uses a client-server architecture where:

MCP Servers expose tools and data, both local and remote.
MCP Clients, such as agents built in Microsoft Copilot Studio, call these tools using a standard protocol.

This architecture makes it easy to build AI systems that are modular, composable, and interoperable across different environments.
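To make the client-server split concrete, here is a minimal MCP server sketch in TypeScript. It assumes the @modelcontextprotocol/sdk package and zod for input validation; the tool name and its placeholder behavior are invented for illustration and are not taken from the OriginTrail integration.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal MCP server exposing one tool that an MCP client
// (for example, an agent built in Microsoft Copilot Studio) can call.
const server = new McpServer({ name: "demo-dkg-tools", version: "0.1.0" });

// Hypothetical tool: a real integration would forward this query to a DKG node.
server.tool(
  "query-knowledge",
  { query: z.string().describe("A question to run against the knowledge graph") },
  async ({ query }) => ({
    content: [{ type: "text", text: `(placeholder) results for: ${query}` }],
  })
);

// Expose the server over stdio so any MCP client can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```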

What role does the DKG play?

The OriginTrail DKG provides a decentralized layer for structured, verifiable knowledge that AI agents can query, write to, and collaborate over. When connected to an MCP server equipped with DKG tools, agents are empowered to retrieve and build upon interconnected, verifiable knowledge.

AI agents can:

Retrieve semantically rich knowledge
Generate and publish new Knowledge Assets
Collaborate on a shared, verifiable knowledge base

Each interaction is built with data provenance, version control, and ownership in mind. Knowledge is shared, structured, and trustworthy!
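As a rough sketch of what “generate and publish new Knowledge Assets” can look like from code, the snippet below uses the dkg.js client library. The constructor options, the asset.create call, and the endpoint values follow the library’s documented patterns, but exact names vary across versions, so treat them as assumptions rather than a definitive API.

```typescript
import DKG from "dkg.js";

// Connect to an OriginTrail DKG node; endpoint, port, and blockchain
// settings are illustrative and depend on your node and network.
const dkg = new DKG({
  endpoint: "http://localhost",
  port: 8900,
  blockchain: { name: "otp:2043" }, // assumption: network identifier varies
});

// Publish a Knowledge Asset described with schema.org terms.
const asset = await dkg.asset.create(
  {
    public: {
      "@context": "https://schema.org",
      "@type": "Product",
      name: "Demo product",
      sku: "SKU-0001",
    },
  },
  { epochsNum: 2 } // assumption: how many epochs the asset stays on the network
);

// The Universal Asset Locator (UAL) identifies the new Knowledge Asset.
console.log(asset.UAL);
```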

Supercharging Microsoft Copilot with verifiable memory!

Through this integration, builders can now connect OriginTrail DKG with custom agents built in Microsoft Copilot Studio.

Here’s what that enables:

The DKG MCP server runs alongside an OriginTrail DKG Node.
Custom actions are registered in Microsoft Copilot Studio to access DKG tools.
These actions can be triggered by agents within environments like Microsoft Teams.

This setup allows Copilot-based agents to access interconnected, verifiable knowledge in real time, and contribute new structured information back into the DKG.

Agents can then:

Ask precise questions over a structured knowledge graph
Write their own memory as reusable Knowledge Assets
Store results, update context, and collaborate with other agents

This integration brings reasoning, verifiability, and memory collaboration directly into Copilot-powered workflows.

See it in action!

In the live demo, Jurij Škornik, General Manager at Trace Labs, core developers of OriginTrail, walks us through:

Running the DKG MCP server with an OriginTrail Edge Node
Building a custom agent in Microsoft Copilot Studio
Adding custom actions to enable interaction via Microsoft Teams

The result is a working Copilot agent with full access to decentralized, verifiable memory. Check it out!

As AI becomes central to enterprise workflows, adding verifiability and structure to its memory is essential. Combining OriginTrail DKG and MCP means your agents are working with knowledge that is:

Structured using open standards (like RDF and schema.org)
Interconnected across multiple data sources
Verifiable thanks to cryptographic anchoring
Portable across applications, agents, and ecosystems, such as Microsoft

This opens the door to new applications in supply chains, research, content management, enterprise collaboration, and more!

Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot! was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Oasis Open

Coalition for Secure AI Marks First Anniversary with New Principles for Agentic Systems and Defender Frameworks


Global Participation Expands as the Coalition Releases Essential AI Guidance

Boston, MA – 17 July 2025 – The Coalition for Secure AI (CoSAI), an OASIS Open Project, celebrates its first anniversary since launching at the Aspen Security Forum in 2024. Over the past year, CoSAI has grown into the industry’s leading collaborative ecosystem for AI security, expanding from its initial founding sponsors to more than 45 partner organizations worldwide. Its mission to enhance trust and security in AI development and deployment has resonated widely, attracting premier sponsors EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Protect AI, Snyk, Trend Micro, and Zscaler. Through multiple workstreams, the coalition has produced practical frameworks and research addressing real-world challenges in securing AI systems. Central to CoSAI’s impact this year are the most recent releases of the “Principles for Secure-by-Design Agentic Systems,” which establishes three core principles for autonomous AI, and the “Preparing Defenders of AI Systems” whitepaper. 

Security Principles Help Safeguard Agentic AI Systems

CoSAI’s Technical Steering Committee (TSC) has released the “Principles for Secure-by-Design Agentic Systems,” a foundational document aimed at helping technical practitioners address the unique security challenges posed by autonomous AI. 

The principles offer practical guidance on balancing operational agility with robust security controls, establishing that secure agentic systems should be Human-governed and Accountable, architected for meaningful control with clear accountability, constrained by well-defined authority boundaries aligned with risk tolerance, and subject to risk-based controls ensuring alignment with expected business outcomes. They must be Bounded and Resilient, with strict purpose-specific entitlements, robust defensive measures including AI-specific protections, and continuous validation with predictable failure modes. Finally, they should be Transparent and Verifiable, supported by secure AI supply chain controls, comprehensive telemetry of all system activities, and real-time monitoring capabilities for oversight and incident response. 

This blog post provides additional context on the principles and how they can be applied in real-world environments.

“As agentic AI systems become more embedded in organizations’ operations, we need frameworks to secure them,” said David LaBianca, Project Governing Board co-chair at CoSAI. “These principles provide a technical foundation for organizations to adopt AI responsibly and securely.”

New Defender Frameworks Help Organizations Operationalize AI Security

CoSAI has published another landscape paper, “Preparing Defenders of AI Systems,” developed through our workstream on Preparing Defenders for a Changing Cybersecurity Landscape. The paper provides practical, defender-focused guidance on applying AI security frameworks, prioritizing investments, and enhancing protection strategies for AI systems in real-world environments.

A companion blog post offers additional insights on how this evolving resource bridges high-level frameworks with practical implementation and will continue adapting as AI threats and technologies advance.

“This paper provides defenders with specific guidance on how security frameworks must be adapted to mitigate risks in the AI transformation, pinpointing gaps in current approaches and prioritizing critical investments,” said Josiah Hagen of Trend Micro and Vinay Bansal of Cisco, CoSAI’s Workstream 2 Leads. “As security practices are aligned with AI adoption realities, organizations are empowered to make informed decisions and protect their assets while ensuring innovation doesn’t outpace defenders. This exemplifies CoSAI’s commitment to connecting emerging threats to AI systems with practical security solutions.”

These foundational outputs from CoSAI’s first year set the stage for even greater impact ahead.

Looking Ahead: Building a Secure AI Future

As CoSAI enters its second year, the coalition is positioned to further accelerate AI security innovation through expanded research initiatives, practical tool development, and increased global engagement. With active workstreams producing actionable guidance and a growing community of practitioners, CoSAI continues to drive adoption of secure-by-design AI systems across industries. Its commitment to open source collaboration and standardization remains central to establishing trust in AI technologies. Realizing this vision requires continued collaboration across the AI security community.

Get Involved

Technical contributors, researchers, and organizations are invited to join CoSAI’s open source community and help shape the future of secure AI. To learn more about how to get involved, contact join@oasis-open.org.

One year in: What CoSAI members are saying about our impact

Premier Sponsors: 

EY:
“At the EY organization, we believe it is our responsibility to shape the future and not leave it to chance, so that the next generation inherits a world improved by AI, not made worse by it. It has been a privilege to serve as a founding member of CoSAI, a powerful platform for EY teams to collaborate with global technology leaders in shaping secure and responsible AI. As we enter the exponential age, we remain committed to leading with clarity, confidence and purpose.”
– Sarah Liang, EY Global Responsible AI Leader
Google:
“It’s been great to see CoSAI grow with so many new partners and instrumental frameworks since we first introduced it last year. Google is proud to have been a co-founder for this initiative and we look forward to seeing more work from CoSAI’s workstreams, specifically across critical areas like agentic security.”
– Heather Adkins, VP of security engineering, Google
IBM: 
“From establishing critical work streams to launching innovative initiatives around Security Principles of Agentic AI, AI model signing and attestation, and MCP Security, CoSAI has built real momentum in securing AI at scale—all in just one year. It’s been rewarding to co-chair the Technical Steering Committee and collaborate with this talented, cross-industry community to tackle the evolving challenges of AI security and help shape industry standards.”
– J.R. Rao, IBM Fellow and CTO, Security Research, IBM
NVIDIA: 
“As AI becomes increasingly integral to critical infrastructure and enterprise operations, security must be foundational at every stage of development and deployment. As an industry enabler of AI for both hardware and software, NVIDIA is proud to support CoSAI’s collaborative efforts to advance practical, open standards across industries to democratize and scale AI for the entire market.”
— Daniel Rohrer, Vice President of Software Product Security, NVIDIA
Palo Alto Networks:
“As public and private organizations increasingly integrate advanced and agentic AI models into critical networks, the development of industry-driven AI security frameworks, such as CoSAI’s ‘Principles for Secure-By-Design Agentic Systems,’ will be vital for the security of our digital ecosystem. CoSAI’s initiatives over the past year are commendable, and we eagerly anticipate continuing our contributions to their mission.”
– Munish Khetrapal, VP of Cloud Management, Palo Alto Networks
Trend Micro: 
“As AI continues to reshape how businesses operate, we see tremendous value in collaboration that drives open standards and innovation across the industry. Over the past year, our work with CoSAI has reflected a shared commitment to raising the bar for security. We’re proud to stand alongside CoSAI in helping lead the way to a more secure and resilient digital future.”
– Kevin Simzer, COO at Trend Micro

General Sponsors: 

Adversa AI:
“At Adversa AI – an Agentic AI Security startup – we are proud to be a COSAI sponsor and a co-lead of the Agentic AI Security workstream. As pioneers of AI security and continuous AI red teaming, we believe Agentic AI demands a new security paradigm—one that goes beyond traditional guardrails to test cognition, autonomy, and context. COSAI’s Agentic AI Security Principles mark a pivotal step forward, and we’re committed to shaping the future of secure Agentic AI systems.”
— Alex Polyakov, Co-Founder of Adversa AI
Aim Security:
“CoSAI is building what the industry urgently needs: clarity and collaboration in securing AI systems. As pioneers in AI security, we at Aim are excited to work alongside this diverse community to help define the future of AI defense –  for agentic systems and beyond.”
– Matan Getz, CEO and Co-Founder, Aim Security
Amazon:
“The first year of CoSAI highlights how industry collaboration can advance AI security. As a founding member, Amazon supports the coalition’s mission to develop open standards and frameworks that benefit the entire AI ecosystem. Together, we look forward to strengthening the foundation of secure AI.”
– Matt Saner, Sr. Manager, Security Specialist Solution Architecture; CoSAI Governing Board and Executive Steering Committee member
Anthropic: 
“Safe and secure AI development has been core to our mission from the start. As AI models become more autonomous, CoSAI’s work is increasingly vital for ensuring that AI systems remain secure, trustworthy, and beneficial for humanity. We’re proud to continue this important work alongside other industry leaders.”
– Jason Clinton, Chief Information Security Officer, Anthropic
Cisco:
“As AI systems become more agentic and interconnected, securing them is now more important than ever. During the last year, CoSAI’s workstreams helped empower defenders and innovators alike to advance AI with integrity, trust, and resilience. We’re proud to help shape industry frameworks with this global coalition; uniting leaders across disciplines to safeguard the future of AI. Together, we’re ensuring that security is foundational to every phase of AI’s evolution.”
– Omar Santos, Distinguished Engineer, Advanced AI Research and Development, Security and Trust, Cisco
Cohere: 
“We’re proud to support CoSAI and collaborate with industry peers to ensure AI systems are developed and deployed securely. Over the last year, these collective efforts have built an important foundation that helps drive innovation while protecting against emerging threats. Our shared commitment to secure-by-design principles is increasingly important as AI adoption accelerates.”
– Prutha Parikh, Head of Security, Cohere
Fr0ntierx:
“CoSAI has united a global community around one of the most critical opportunities of our era: advancing safe, responsible, and innovative AI. At Fr0ntierX, we’re proud to contribute to this mission by helping build an infrastructure foundation rooted in trust, interoperability, and privacy. As AI continues to evolve, we remain committed to ensuring that innovation goes hand in hand with alignment and meaningful user control.”
– Jonathan Begg, CEO, Fr0ntierX
GenLab: 
“Over the past year, CoSAI has brought clarity to securing AI across the AI supply chain. The Six Critical Controls give leaders something concrete to work from, and GenLab has been proud to support that work from the start. As AI adoption accelerates, these frameworks are going to be essential—not just for safety, but for trust across sovereign systems.”
– Daniel Riedel, Founder & CEO, GenLab Venture Studios
HiddenLayer:
“As one of the earliest members of CoSAI, HiddenLayer recognized the urgency of securing AI from the outset. CoSAI’s work over the past year has provided much-needed clarity in a rapidly evolving space, offering actionable frameworks that empower organizations to operationalize AI security and governance. Its mission has reinforced our belief that trust must be embedded into AI systems by design. As threats become more advanced and the AI attack surface expands, our continued collaboration with CoSAI remains essential to ensuring that AI innovations are safe and secure.”
— Malcolm Harkins, Chief Security & Trust Officer, HiddenLayer
Intel:
“CoSAI’s first year has been marked by strong momentum—from the release of landscape papers of technical workstreams to the timely initiation of the Agentic AI Systems workstream. These milestones reflect the coalition’s ability to anticipate and act on emerging security needs. At Intel, we’re proud to partner with CoSAI members to ensure that secure-by-design principles are embedded early in the AI system design and deployment.”
– Dhinesh Manoharan, VP Product Security & GM of INT31, Intel
Lasso Security:
“At Lasso, we believe secure-by-design must be the foundation of AI innovation. CoSAI has played a critical role in turning complex AI security challenges into practical, actionable guidance—from agentic systems to defender frameworks. As proud contributors to this effort, we’ve seen firsthand how CoSAI is helping shape a more trustworthy AI future and laying the groundwork for secure, enterprise-grade solutions.”
– Elad Schulman, CEO & Co-Founder, Lasso
Opal Security:
“Opal is a proud early supporter of CoSAI—because Opal helps customers track agents and other NHIs in our platform, we’re deeply invested in securing AI’s future. CoSAI assembles leading minds to set standards for a world in which every employee calls on multiple agents. Opal is honored to contribute to this organization and learn from luminaries at trailblazing member organizations. We look forward to future consensus-building, standards setting, and insights.”
– Umaimah Khan, CEO, Opal Security
Operant:
“In a world where AI is rapidly reshaping everything from infrastructure to decision-making, collaboration is our best defense. I’m proud to have joined the board of Coalition for Secure AI as it brought together industry leaders, researchers, and policymakers under one roof, filling a major gap in the evolution of Responsible AI that is now more urgent than ever. CoSAI represents the kind of cross-industry partnership that will shape how we build a more secure and trustworthy AI ecosystem for everyone. A secure AI future is only possible if we build it together.”
– Priyanka Tembey, CTO, Operant AI
Red Hat:
“Security is the foundation of trustworthy AI, not an afterthought. At Red Hat, we believe security-first principles and processes must be woven into the fabric of every platform from day one, including AI, which the recently released CoSAI Principles for Secure-by-Design Agentic Systems helps to address. We are proud to have been part of CoSAI for the past year and look forward to helping further advance the foundational components of the community.”
– Garth Mollett, Product Security Lead Architect, Red Hat
TrojAI: 
“CoSAI’s collaborative and transparent approach to make AI safer and more secure for everyone closely reflects TrojAI’s own mission. We’re proud to support this important initiative and celebrate its first year of progress. As AI adoption increases, we believe that security will be integral to the sustainable growth of AI. CoSAI’s efforts to develop best practices and unified methodologies are invaluable for secure AI development and deployment.”
– Lee Weiner, CEO, TrojAI
VE3:
“We joined CoSAI right at the beginning because its mission aligned with our belief that AI must be built securely, responsibly, & transparently. CoSAI insights and frameworks like critical controls, have deeply influenced how we approach AI security and governance at VE3. From shaping internal practices to launching our own AI safety, security and governance whitepaper, CoSAI’s work has been instrumental for us. As AI systems grow more complex and autonomous, this partnership becomes more vital and we’re honored to be part of CoSAI’s journey.”
— Manish Garg, Managing Director, VE3
Wiz:
“AI’s growth echoes the early cloud era, when innovation outpaced security and the industry had to close the gap together. At Wiz, we believe that securing AI takes more than technology — it requires collaboration among industry leaders. Over the past year, CoSAI has driven these critical conversations, and Wiz is proud to stand with this coalition as new AI security challenges emerge, from autonomous AI agents to MCP.” 
– Alon Schindel, VP of AI & Threat Research, Wiz

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide.

CoSAI operates under OASIS Open, the international standards and open source consortium. www.coalitionforsecureai.org

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries:
communications@oasis-open.org

The post Coalition for Secure AI Marks First Anniversary with New Principles for Agentic Systems and Defender Frameworks appeared first on OASIS Open.

Wednesday, 16. July 2025

Trust over IP

TOIP EGWG 2025-07-10: Kyle Robinson, Digital Trust Ecosystems. Why they don’t make sense.


Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/TOIP-EGWG-2025-07-10_-Kyle-Robinson-Digital-Trust-Ecosystems_-Why-they-dont-make-sense_.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Excerpt

Learn why Kyle’s practical experience with the Canadian Province of British Columbia’s digital trust initiative has led him to focus on specific, high-impact digital credentials over broad “ecosystems.” Documenting these well fosters trust and enables organic growth and unpredictable efficiencies, naturally building interoperable digital trust networks.

Briefing Document: A Smarter Approach to Digital Credentials and Ecosystems

Date: July 10, 2025

Sources:

“Digital Credentials Presentation” (Presentation Excerpts)
“GMT20250710-145520_Recording.cc.vtt.txt” (Meeting Transcript – VTT)
“GMT20250710-145520_Recording.m4a” (Meeting Audio – M4A)
“GMT20250710-145520_Recording.transcript.vtt.txt” (Meeting Transcript – VTT)
“GMT20250710-145520_RecordingnewChat.txt” (Meeting Chat Log)

Executive Summary

The prevailing approach of focusing on broad “ecosystems” for digital credential development is inefficient and limits opportunities. Instead, a more effective strategy involves starting with specific, high-impact credentials, rigorously documenting them in a trusted and public manner, and then allowing organic growth and adoption to naturally form interoperable networks. The Province of British Columbia (BC) is a leading example of this approach, leveraging foundational identity credentials and promoting their open use, which has led to unpredicted and valuable use cases and significant administrative efficiencies. The discussions highlight the “fractal” nature of ecosystems and the critical role of strong governance and transparency in building trust in digital credentials.

Key Themes and Most Important Ideas/Facts

1. The Flaws of an Ecosystem-First Approach

Too Many Credentials to Tackle: Attempting to develop digital credentials for an entire industry or “ecosystem” simultaneously (e.g., healthcare or finance) is “a massive undertaking, resource-intensive, hard to coordinate, and risks spreading us too thin, leading to weak, untrusted credentials.” (Digital Credentials Presentation). As Kyle Robinson notes, “there’s just too many different types of credentials and different authorities for those credentials to really get a good handle on.” (Meeting Transcript).

Constrains Opportunities: Pre-planning an entire ecosystem creates a rigid scope, preventing the discovery of “unexpected opportunities that could arise organically.” (Digital Credentials Presentation). Eric Drury echoes this, stating that “building a use case for an ecosystem is much more complicated than building a use case for a single credential.” (Meeting Transcript).

2. Recommendation: Start with Specific, High-Impact Credentials

Build Trust and Quality: The core recommendation is to “focus on a few high-impact credentials, like a digital CPA certification. By putting all our effort into making them robust and trustworthy, we create a gold standard that people rely on. Trust drives adoption.” (Digital Credentials Presentation).

Foundational Credentials as Catalysts: BC’s strategy focuses on “foundational credentials,” such as “identity of a person, identity of a business.” These are “foundational credentials which… kind of start right at the core of everything and then other credentials are built on top of those.” (Meeting Transcript). This layering allows for credentials like a “licensed doctor” to build upon a verified personal identity.

Open More Doors Through Ripple Effects: Strong, well-executed credentials “create ripple effects.” (Digital Credentials Presentation). BC has observed this with their “lawyer credential,” where “all these what we call verifiers, or relying parties, started popping up, saying, oh, we could use that too, we could use that too, we could use that too.” (Meeting Transcript). This organic growth leads to “unpredicted” opportunities.

3. Credential Documentation: The Foundation of Trust

Trusted Location: “Documentation must reside in a secure, reputable platform to ensure credibility.” (Digital Credentials Presentation). The BC Gov’s Digital Trust Toolkit is highlighted as a model for “transparent, trusted documentation that stakeholders can rely on.” (Digital Credentials Presentation). This toolkit serves as a “source of truth of governance documentation for credentials that are in production.” (Meeting Transcript).

Active Promotion and Public Visibility: Documentation is insufficient without visibility. It “needs to be public, publicly exposed. So that a verifier can look at that document and have enough information to read it to be able to trust it.” (Meeting Transcript). This active promotion through various channels (webinars, industry forums, social media) “builds awareness and encourage adoption among users and organizations.” (Digital Credentials Presentation).

Transparency of Issuance and Revocation: Trust extends beyond the technical aspects of a credential; it requires confidence in the “issuance process that the issuing authority goes through to be able to issue that to the right person, with the right attribute information.” (Meeting Transcript). Furthermore, visibility into “revocation status” is critical. If a ledger were to disappear, the ability to check revocation status would be lost, underscoring the need for robust infrastructure.

4. The Nature of Ecosystems and Organic Growth

Fractal Nature of Ecosystems: The discussion introduces the concept of ecosystems as “fractal.” As Carly Huitema explains, “You can zoom in all the way into grains of dirt, and there’s an ecosystem there. And then you can zoom all the way out to the planet scale ecosystem.” (Meeting Transcript). This implies that “person is a microcosm” that can be “observed in other ecosystems,” with “boundaries are always fuzzy.” (Meeting Transcript).

Market-Driven Adoption: The “overall drive of all of this is driven by those relying parties and verifiers.” When they “see value in doing something with credentials they will implement it, and they will tell their friends. And that’s sort of how you can see that growth and that adoption happening.” (Meeting Transcript). This organic growth is preferred over top-down, pre-defined ecosystem planning.

Savings and Efficiency as Drivers: Digital credentials offer tangible benefits, such as “a ton of administrative time” savings for verifiers. For example, the City of Vancouver is realizing benefits where “nothing needs to be reviewed because the technology’s already trusting the stuff that the province is producing. So they don’t have to, like, have somebody manually reveal a form.” (Meeting Transcript).

5. Trust Beyond BC: Scaling and Interoperability Challenges

Establishing “Legitimacy”: A key challenge is distinguishing “legitimate” from “non-legitimate” credentials beyond the issuing authority. BC addresses this by publishing the issuer DID, schema ID, and credential definition ID on the Candy ledger. This allows relying parties to cryptographically verify the origin.

Cross-Jurisdictional Interoperability: The question of how this scales beyond BC is raised, particularly when different jurisdictions (e.g., Alberta, Rhode Island, Utah) might make different technical choices for their credentials. This mirrors the non-digital world where regulations define accepted IDs.

Role of Trust Registries and Standards Bodies: The idea of “trust registries” is introduced as a potential solution for looking up legitimate issuers and their governance frameworks across jurisdictions. This could be driven by “standards bodies,” which could publish “trust registries published on their websites, or in some type of technology to say, hey, this standards body here, these are all of the organizations that we have audited and are following.” (Meeting Transcript).

Government’s Evolving Role: While government’s primary role remains issuing foundational IDs and enforcing laws, its new role in “building and supplying software to citizens” (e.g., the BC Wallet app) is seen as a way to “help adoption.” (Meeting Transcript). The future may see OS manufacturers playing a larger role in built-in wallets.

Conclusion

The discussion reinforces that while the concept of a broad “ecosystem” might be a useful descriptive tool, the practical and successful implementation of digital credentials should pivot towards a credential-first approach. By focusing on building trust and quality in individual, high-impact credentials, making their governance transparent and public, and fostering organic adoption through demonstrated value, digital trust networks can emerge and grow naturally, leading to widespread benefits. The BC government’s experience serves as a compelling case study for this evolving strategy.

For more details, including the slides, meeting recording, and transcript, please see our wiki 2025-07-10 Kyle Robinson & Digital Trust Ecosystems. Why they don’t make sense.

https://www.linkedin.com/in/kylegrobinson/

The post TOIP EGWG 2025-07-10: Kyle Robinson, Digital Trust Ecosystems. Why they don’t make sense. appeared first on Trust Over IP.


Project VRM

The Cluetrain Will Run from Customers to Companies


For the good of both.

Customers need privacy, respect, and the ability to provide good and helpful information to the companies they deal with. The good clues customers bring can include far more than what companies get today from their CRM systems and from surveillance of customer activities. For example, market intelligence that flows both ways can happen on a massive scale.

But only if customers set the terms.

Now they can, using a new standard from the IEEE called P7012, aka MyTerms. It governs machine readability of personal privacy terms. These are terms that customers proffer as first parties, and companies agree to as second parties. Lots of business can be built on top of those terms, which at the ground level start with service provision without surveillance or unwanted data sharing by the company with other parties. New agreements can be made on top of that, but MyTerms are where genuine and trusting (rather than today’s coerced and one-sided) relationships can be built.

When companies are open to MyTerms agreements, they don’t need cookie notices. Nor do they need 10,000-word terms and conditions or privacy policies because they’ll have contractual agreements with customers that work for both sides.

On top of that foundation, real relationships can be built by VRM systems on the customers’ side and CRM systems on the corporate side. Both can also use AI agents: personal AI for customers and corporate AI for companies. Massive businesses can grow to supply tools and services on both sides of those new relationships. These are businesses that can only grow atop agreements that customers bring to the table, and at scale across all the companies they engage.

This is the kind of thing that four guys (me included)† had in mind when they posted The Cluetrain Manifesto* on the Web in April 1999. A book version of the manifesto came out in early 2000 and became a business bestseller that still sells in nine languages. Above the manifesto’s 95 theses is this master clue**, written by Christopher Locke:

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

MyTerms is the only way we (who are not seats or eyeballs or end users or consumers) finally have reach that exceeds corporate grasp, so companies can finally deal with the kind of personal agency that the Internet promised in the first place.

The MyTerms standard requires that a roster of possible agreements be posted at a disinterested nonprofit.  The individual chooses one, the company agrees to it (or not). Both sides keep an identical record of the agreement.

The first roster will be at Customer Commons, which is ProjectVRM’s 501(c)3 nonprofit spinoff. It was created to do for personal privacy terms what Creative Commons does for personal copyright licenses. (It was Customer Commons, aka CuCo, that the IEEE approached with the idea of creating the MyTerms standard.)

Work on MyTerms started in 2017 and is in the final stages of the IEEE approval process. While it is due to be published early next year, what it specifies is simple:

Individuals can choose a term posted at Customer Commons or the equivalent
Companies can agree to the individual’s choice or not
The decision can be recorded identically by both sides
Data about the decision can be recorded by both sides and kept for further reference, auditing, or dispute resolution
Both sides can know and display the state of agreement or absence of agreement (for example, the state of a relationship, should one come to exist)

MyTerms is not a technical spec, so implementations are open to whatever. Development on any of those can start now. So can work in any of the six areas listed above.
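Because the spec leaves implementation open, here is one purely hypothetical sketch, in TypeScript, of the identical record both sides might keep. Every field name here is invented for illustration; none of them come from the P7012 draft.

```typescript
// Hypothetical shape of a MyTerms agreement record; all field names are
// invented for illustration and are not taken from the P7012 specification.
interface MyTermsAgreement {
  termId: string;      // identifier of a term on a public roster
  rosterUrl: string;   // where the chosen term is posted
  firstParty: string;  // the individual proffering the term
  secondParty: string; // the company that agrees (or declines)
  accepted: boolean;   // the company's decision
  recordedAt: string;  // ISO 8601 timestamp, identical on both sides
}

const agreement: MyTermsAgreement = {
  termId: "example-term-1", // invented example identifier
  rosterUrl: "https://customercommons.org/", // illustrative URL
  firstParty: "customer-7f3a",
  secondParty: "company.example.com",
  accepted: true,
  recordedAt: new Date().toISOString(),
};
```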

The biggest thing MyTerms does for customers—and people just using free services—is getting rid of cookie notices, which are massively annoying and not worth the pixels they are printed on.  If a company really does care about personal privacy, it’ll respect personal privacy requirements. This is how things work in the natural world, where tracking people like marked animals has been morally wrong for millennia. In the digital world, however, agreements need to be explicit, so programming and services can be based on them. MyTerms does that.

For business, MyTerms has lots of advantages:

Reduced or eliminated compliance risk Competitive differentiation Lower customer churn Grounds for real rather than coerced relationships (CRM+VRM) Grounds for better signaling (clues!) going in both directions Reduced or eliminated guesswork about what customers want, how they use products and services, and  how both might be improved

Lawyers get a new market for services on both the buy and sell sides of the marketplace. Companies in the CMP (consent management platform) business (e.g. Admiral and OneTrust) have something new and better to sell.

Lawmakers and Regulators can start looking at the Net and the Web as places where freedom of contract prevails, and contracts of adhesion (such as what you “agree” to with cookie notices) are obsolesced.

Developers can have a field day (or decade). Look for these categories to emerge:

Agreement Management Platforms – Migrate from today’s much-hated consent management platforms (hello OneTrust, Admiral, and the rest).
Vendor Relationship Management (VRM) Tools and services – Fill the vacuum that’s been there since the Web got real in 1995.
Customer Relationship Management (CRM) – Make its middle name finally mean something.
Customer Data Return (CDR) – Give, sell back, or share with customers the data you’ve been gathering without their permission since forever. Talking here to car companies, TV makers, app makers, and every other technology product with spyware onboard for reporting personal activity to parties unknown.
Platform Relief – Free customers from the walled gardens of Apple, Microsoft, Amazon, and every other maker of hardware and software that currently bears the full burden of providing personal privacy to customers and users. Those companies can also embrace and help implement MyTerms for both sides of the marketplace.
Personal AI (pAI) – Till and plant a vast new greenfield for countless companies, old and new. This includes Apple (which can make Apple Intelligence truly “AI for the rest of us” rather than Siri in AI drag), Mozilla (with its Business Accelerator for personal AI), Kwaai (for open source personal AI), and everyone else who wants to jump on the train.
Big meshes of agents, such as what these developers are all working on.

In the marketplace, we can start to see all these things:

Predictions made by The Intention Economy: When Customers Take Charge finally come true.
New dances between customers and companies, demand and supply. (“The Dance” is a closing chapter of The Intention Economy.)
New commercial ecosystems can grow around a richer flow of clues in both directions, based on shared interest and trust between demand and supply.
Surveillance capitalism will be obsolesced — and replaced by an economy aligned with personal agency and respect from customers’ corporate partners.
A new distributed P2P fabric of personally secure and shared data processing and storage — See what KwaaiNet + Verida, for example, might do together.

All aboard!

†Speaking for myself in this post. I invite the other two surviving co-authors to weigh in if they like.

*At this writing, the Cluetrain website, along with many others at its host, is offline while being cured of an infection.  To be clear, however, it will be back on the Web. Meanwhile, I’m linking to a snapshot of the site in the Internet Archive—a service for which the world should be massively grateful.

**The thesis that did the most to popularize Cluetrain was “Markets are conversations,” which was at the top of Cluetrain’s ninety-five theses. Imagining that this thesis was just for them, marketers everywhere saw marketing, rather than markets, as “conversations.” Besides misunderstanding what Cluetrain meant by conversation (that customers and companies should both have equal and reciprocal agency, and engage in human ways), marketing gave us “conversational” versions of itself that were mostly annoying.  And now (thank you, marketing), every damn topic is now also a fucking “conversation”—the “climate conversation,” the “gender conversation,” the “conversation about data ownership.” I suspect that making “conversation” a synonym for “topic” was also a step toward making every piece of propaganda into a “narrative.” But I digress. Stop reading here and scroll back to read the case for MyTerms. And please, hope that it also doesn’t become woefully misunderstood.

Tuesday, 15. July 2025

FIDO Alliance

White Paper: Addressing Cybersecurity Challenges in the Automotive Industry


Abstract

As the automotive industry transitions toward software-defined vehicles, autonomous technologies, and connected services, cybersecurity has become a critical concern. This white paper from the FIDO Alliance outlines key challenges and emerging solutions for securing next-generation vehicles. It examines global regulatory frameworks such as UN R155, UN R156, and ISO/SAE 21434 and presents the FIDO Alliance’s standards for passwordless authentication, secure device onboarding, and biometric certification.

Audience

This paper addresses the automotive industry. The audience includes automotive system engineers, automotive IVI product and development managers, automotive networking and in-vehicle cyber security engineers, product managers for in-vehicle services for applications such as purchasing, IT system cyber security managers, engineers seeking to support global regulatory frameworks such as UN R155/R156 and ISO/SAE 21434, manufacturing system engineers, and car-to-cloud connectivity engineers.

Download the White Paper

1. Introduction

The automotive industry is undergoing transformative changes, including the shift to software-defined and autonomous vehicles, advanced IT-like architectures, over-the-air (OTA) updates, and the rise of in-vehicle commerce. While these changes offer new revenue opportunities, they also bring significant cybersecurity threats.

Global cybersecurity legislation, such as UN Regulation 155, UN Regulation 156, and ISO/SAE 21434, aim to protect vehicles from emerging threats. The FIDO Alliance plays a crucial role by providing standards for secure authentication, device onboarding, and biometrics certification.

Utilizing standards helps automotive companies ensure consistent security, leverage collective expertise, and avoid proprietary solutions that have the potential to stymie new markets and revenue. FIDO standards apply to various automotive applications, including consumer services, in-vehicle solutions, workforce authentication, and manufacturing, ensuring robust cybersecurity across the industry.

This paper provides companies within the automotive ecosystem an insight into the standards and services the FIDO Alliance offers together with a review of current and future use cases.

The FIDO Alliance is seeking feedback and partnership with industry experts to help ensure that FIDO’s programs are fit for purpose and successfully help companies meet cybersecurity needs, improve driver experiences, and tap into new opportunities.

2. Evolution of the automotive industry

The automotive industry has 140 years of history and is currently going through changes that affect all aspects of the industry:

Electrification and sustainability
Software-defined vehicles and connectivity
Autonomous and assisted driving
Shifting business models: Mobility-as-a-Service (MaaS) and direct sales
Supply chain disruptions and geopolitical risks
New revenue streams: data monetization and services
Rollout of EV charging infrastructure and its energy grid impacts
Changing consumer expectations and digital experiences

These changes bring potential upside to manufacturers in terms of new revenue opportunities and improved vehicles, but they also introduce considerable cyber threats.

Vehicles have evolved from isolated mechanical systems into interconnected cyber-physical platforms (often created by various entities) that integrate complex software, hardware, and communication networks. Manufacturers implement these systems to provide end users with a better vehicle and an enhanced driving experience, but they also bring an increased risk of cyber threats associated with new “attack surfaces”. These potential threats come in many forms, from malicious hackers to state-funded actors. To minimize these threats, it is now a fundamental priority for manufacturers, their suppliers, regulators, and other industry stakeholders to focus on cybersecurity.

3. Meet the challenges and seize the opportunity

Automotive cybersecurity professionals have a massive challenge in front of them. On one side they need to react to the rise in threats and account for the associated legislation that has been developed to protect consumers. On the other side they need to be open to supporting new business models such as in-vehicle commerce, value added vehicle features such as subscription services, as well as additional cybersecurity for factories and offices. While there is no one simple solution to meet all of these needs, utilizing standards and certification programs from organizations such as the FIDO Alliance can help greatly.

4. Automotive cybersecurity and global legislation

National governments and international organizations have enacted regulations that require stringent cybersecurity measures throughout the automotive lifecycle, including design, operation, and even end-of-life. These frameworks aim to shield vehicles from emerging threats and establish a baseline for safety and trust across the automotive ecosystem. Major worldwide examples include:

United Nations Regulation 155 and United Nations Regulation 156: mandate that vehicles incorporate a Cybersecurity Management System (CSMS) and a Software Update Management System (SUMS)
ISO/SAE 21434: provides the foundation for global automotive cybersecurity engineering, outlining processes for managing cyber risks throughout the entire vehicle lifecycle
China’s GB 44495-2024 and GB 44496-2024: regulate the Cyber Security Management System (CSMS) and govern secure software updates in a granular fashion
India’s AIS 189 and AIS 190: align with UN R155 and R156, to regulate the cybersecurity of connected vehicles
United States: Publication of cybersecurity best practices by the National Highway Traffic Safety Administration (NHTSA) that emphasize secure vehicle development processes, incident response plans, and continuous risk monitoring

Refer to Appendix A to learn more about these standards.

5. The FIDO Alliance and FIDO standards

The FIDO Alliance is an open industry association with a focused mission: reduce the world’s reliance on passwords. To accomplish this, the FIDO Alliance promotes the development of, use of, and compliance with standards for user authentication and device onboarding.

The FIDO Alliance:

Develops technical specifications that define an open, scalable, interoperable set of mechanisms to reduce reliance on passwords for authentication of both users and devices.
Tracks the evolution of global regulations and evolves its own standards to help industries satisfy those regulations in a harmonized way, reducing their compliance burdens.
Operates industry certification programs to ensure successful global adoption of these specifications.
Provides education and market adoption programs to promote the global use of FIDO.
Submits mature technical specifications to recognized standards development organizations for formal standardization.

The FIDO Alliance has over 300 members worldwide, with representation from leaders in IT, silicon, payments, and consumer services and features a Board of Directors that includes representatives from Apple, Visa, Infineon, Microsoft, Dell, Amazon, and Google. The Alliance also has a variety of active working groups where like-minded members can develop and advance technical work areas and coordinate on market-specific requirements.

The FIDO Alliance is planning to launch an automotive working group, where leaders in this sector can identify and collaborate on technical, business, and market requirements. To learn more, use the Contact Us form at https://fidoalliance.org/contact/ or email info@fidoalliance.org.

6. FIDO for automotive cybersecurity compliance

Meeting the demands of the primary automotive cybersecurity standard ISO/SAE 21434 and subsequently the most prominent regulation, UN R155, hinges on strong identity management and secure device onboarding. While these standards don’t prescribe FIDO protocols per se, they outline key principles where FIDO offers tangible benefits.

ISO 21434, particularly Clauses 8 and 9 concerning risk assessment and threat mitigation, calls for strategies to prevent unauthorized access. FIDO’s passwordless authentication directly addresses this by eliminating weak credentials and reducing risks from phishing and credential stuffing, common threats to connected vehicle systems. Additionally, Clause 10’s focus on secure software deployment aligns with FIDO Device Onboard (FDO), ensuring only authenticated devices join the ecosystem, mitigating supply chain attacks and unauthorized software injections. This direct mapping of FIDO’s capabilities to specific clauses demonstrates its value in achieving compliance.

Beyond these founding standards, FIDO’s approach has broad applicability to emerging regulations, providing OEMs with a pathway to meeting global compliance demands and bolstering cybersecurity resilience across their connected car ecosystem. Some examples include China’s GB 44495-2024 and India’s AIS 189, which call for regional automotive cybersecurity standards and reinforce the need for features such as secure authentication in the software-defined vehicle (SDV) era. China’s GB regulation, similar to UN R155, emphasizes authenticity and integrity in remote updates, where FIDO’s passkey-based authentication provides a compliant approach to verifying access. India’s regulations, currently still in draft, align with UN R155, highlighting the importance of securing vehicle-to-cloud communications and identity management.

7. Overview of emerging use cases where FIDO standards may apply

FIDO standards can be applied to a wide range of scenarios. These can be customer-facing, embedded within the vehicle, or as part of the manufacturer’s IT infrastructure.

A high-level overview of some of these scenarios includes, but is not limited to, the following:

In-vehicle commerce: This includes payments using credentials stored and managed in vehicle to enable convenient fueling, EV charging, parking reservations, car washes or even in-vehicle marketplaces managed by the car manufacturer. Implementation of passkeys to authenticate the associated car user and biometric component certification are most relevant to these use cases.

Authentication to personalized services: These applications include easy access to customized automotive settings (for example, headrest and seat adjustments) as well as to informational and entertainment content.

In-vehicle solutions: This segment includes applications such as car-to-cloud connectivity and onboarding of ECUs and zone controllers within the vehicle. Implementation of FIDO Device Onboard (FDO) is most applicable to these applications.

Workforce authentication: These applications include controlling workforce access to IT systems whether at a development office, manufacturing site, or dealership. Implementation of passkeys and FIDO USB authentication keys are most applicable to these applications.

Manufacturing: Modern manufacturing facilities are moving towards software defined control, AI, and robotic systems. The secure deployment of these solutions is often time consuming and expensive. Implementation of FIDO Device Onboard (FDO) can accelerate deployments and increase security.

8. FIDO Alliance technology overview

In the same way that Ethernet started as an IT networking solution, FIDO standards were not specifically created for automotive applications. However, they are highly relevant in modern vehicles where robust cybersecurity is a critical, foundational element rather than just a desirable feature. FIDO standards, such as passkeys, are being used as is in the automotive world today.

The FIDO Alliance technology portfolio for automotive applications can be broadly grouped into three main areas:

Passkeys: The FIDO Alliance is transforming authentication through open standards for phishing-resistant sign-ins using passkeys. Passkeys are more secure than passwords and SMS OTPs, easier for consumers and employees to use, and simpler for service providers to deploy and manage. Automotive manufacturers leverage passkeys for a wide variety of use cases.

Device Onboarding: The FIDO Alliance establishes standards for secure device onboarding (FDO) to ensure the safety and efficiency of connected devices in segments such as industrial and enterprise. In the automotive sector, manufacturers can apply this standard to the connections between Electronic Control Units (ECUs) and zone controllers or connections between the vehicle itself and the cloud services that facilitate over-the-air software updates. This standard has been adopted by Microsoft, Dell, ExxonMobil, Red Hat and others.

Biometrics certification: The FIDO Alliance offers a certification program tailored to specific applications that uses independent test labs to measure performance of biometric sensors (such as iris or fingerprint sensors). Biometric sensors are becoming an increasingly important component of vehicles. Typical use cases might be to automatically configure the driver’s seat position or as part of a payment system. In these two examples the definition of “good technical performance” can differ greatly. Samsung, ELAN Microelectronics, Thales, Qualcomm, Mitek, iProov, and others have had biometric components certified by FIDO Alliance.

9. FIDO Alliance technology deep dive

To better understand how automotive manufacturers and the FIDO Alliance can work together, this section discusses current FIDO technologies and how they might integrate with automotive applications.

9.1 Passkeys and user authentication

A passkey is a FIDO authentication credential, based on FIDO standards, that allows a user to sign in to apps and websites with the same steps that they use to unlock their device (biometrics, PIN, or pattern). With passkeys, users no longer need to enter usernames and passwords or additional factors.

Passkeys are the signature implementation of FIDO authentication standards, and they offer secure yet simplified sign-in to a wide range of services. Passkeys are supported by all major device operating systems and browsers and have been utilized by many industry leaders including Apple, Google, Microsoft, Samsung, Amazon, Walmart, PayPal, and Visa.

The following diagram illustrates how passkeys can be used for in-car applications, such as when a driver signs in to a cloud service.

Figure 1: Sample passkey usage in automotive

Passkeys rely on a technology known as public key cryptography (PKC), in which a virtual key pair is created, one private and the other public. For each private key (stored on the user’s device) there exists a matching public key (stored on the server) that is used to check signatures created with the private key.

In the diagram, a user (the driver in this case) first registers with a cloud service such as a payment service. During the registration process, a private and public cryptographic key pair is created by the FIDO Authenticator. The private key is stored securely in the infotainment system of the vehicle and is associated with that driver. The public key is stored on the cloud of the service provider.

When the driver wants to sign in to the service, a request is sent from the vehicle to the cloud service. The service then sends an authentication challenge to the vehicle. This challenge can only be successfully answered by the user that holds the matching private key. To make sure that the request is genuine, the driver is asked to confirm that they want to sign in. This is typically achieved via a biometric sensor such as fingerprint or face. Once this verification is complete, the user gains access to the service. Several FIDO hardware and software components are used for this process.
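For readers who want to see the API shape behind this flow, here is a short client-side registration sketch using the WebAuthn browser API that passkeys are built on. The relying-party values, user handle, and challenge are placeholders (a real deployment fetches the challenge from the FIDO server), and an infotainment system would use its platform’s equivalent of this call rather than a literal browser API.

```typescript
// Registration: ask the authenticator to create a new passkey
// (a public/private key pair). The challenge must come from the FIDO
// server; it is hard-coded here only for illustration.
const challenge = Uint8Array.from("server-issued-challenge", c => c.charCodeAt(0));

const credential = await navigator.credentials.create({
  publicKey: {
    challenge,
    rp: { name: "Example Payment Service", id: "payments.example.com" }, // placeholder
    user: {
      id: Uint8Array.from("driver-123", c => c.charCodeAt(0)), // placeholder handle
      name: "driver@example.com",
      displayName: "Example Driver",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // e.g. a fingerprint or face check
    },
  },
});

// The new credential's public key is sent to the FIDO server for storage;
// the private key never leaves the authenticator.
console.log(credential?.id);
```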

9.2 Passkey components

Three FIDO Certified components are used in the example:

FIDO authenticator

A FIDO authenticator is a software component or a piece of hardware that can perform FIDO authentication to verify possession and/or confirm user identity. In the example, the FIDO authenticator likely resides in the car infotainment system.

FIDO server

The server provides an application with a programming interface that can be leveraged with a FIDO Certified client to perform strong authentication. The server sits inside the cloud application.

Biometric components

Biometric components can identify an individual and are often used to complement a FIDO authenticator. These sensors can take multiple forms including fingerprint, iris, and face. The FIDO Alliance certifies the efficacy of biometric subsystems including end-to-end performance, differential assessment of demographic groups, and presentation attack detection (PAD).

Although the example is an in-vehicle use case, the same passkey technology can be applied inside a factory, development center, or dealership to ensure that systems are resilient to phishing attacks or other common password attack vectors.

9.3 In-vehicle biometrics

Installation of biometric components in vehicles is expected to increase rapidly over time. The performance needs of these components will vary by sensor type and target application. Today, the FIDO Alliance offers a comprehensive independent certification program for biometric components such as fingerprint and iris sensors. By specifying in a request for quote (RFQ) that products should be FIDO Certified, automotive manufacturers can simplify selection of sensors. For more information on FIDO Certification, visit https://fidoalliance.org/certification/.

9.4 FIDO Device Onboard (FDO)

When a computer device (such as an ECU) first connects to its management platform (the zone controller), it needs to be onboarded and provisioned. A parallel example might be the connection between a vehicle and its cloud. FIDO Device Onboard (FDO) was developed by FIDO Alliance members to meet the automation and high security requirements of such onboarding experiences.

With FDO, a device is first connected to the (wired or wireless) network and then powered up. The device then automatically and securely onboards to the management platform. FDO is based on a zero-trust architecture and therefore offers a high level of security as both the device and the management platform must cryptographically authenticate themselves to each other. FDO also provides resilience to supply chain attacks.
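As a conceptual sketch of the mutual authentication property described above, the snippet below has each side prove possession of its private key before provisioning proceeds. This is not the FDO protocol itself, which adds ownership vouchers, a rendezvous service, and the TO0/TO1/TO2 sub-protocols; all names here are illustrative.

```python
# Conceptual sketch: both device and management platform must prove their
# identity before onboarding proceeds (heavily simplified relative to FDO).
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

def make_identity():
    key = ec.generate_private_key(ec.SECP256R1())
    return key, key.public_key()

device_key, device_pub = make_identity()      # provisioned at manufacture
platform_key, platform_pub = make_identity()  # zone controller / cloud platform

def prove(key, challenge):
    return key.sign(challenge, ec.ECDSA(hashes.SHA256()))

def check(pub, signature, challenge):
    pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises on failure

# Each side challenges the other; onboarding continues only if both succeed.
c1, c2 = os.urandom(32), os.urandom(32)
check(platform_pub, prove(platform_key, c1), c1)  # device authenticates the platform
check(device_pub, prove(device_key, c2), c2)      # platform authenticates the device
print("Mutual authentication succeeded; provisioning can begin")
```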

A number of leading technology providers have demonstrated implementations of FDO solutions including Dell, Microsoft, Red Hat, Intel, and ASRock.

10. FIDO technology use cases deep dive

The FIDO Alliance has identified several use cases where FIDO technology can be applied to support the automotive industry. This section discusses possible use cases with the hope of fostering further conversations.

10.1 Consumer use cases

Historically, many cybersecurity applications have been “behind the scenes”. In modern vehicles there is an increasing number of new applications that directly impact the driver and passenger in-vehicle experience and open new revenue opportunities for manufacturers. One such area is the emergence of in-vehicle commerce.

Several factors are driving in-vehicle commerce:

Technological advancements: Software Defined Vehicles (SDVs) allow for continuous updates and new functionality without hardware modifications. Autonomous driving introduces a new use case for vehicles as productivity or leisure spaces.

Changing consumer expectations: Consumers demand experiences in their vehicles akin to those offered by their smartphones and other digital devices.

Revenue opportunities: By acting as platforms for digital services, vehicles open new revenue streams for car manufacturers and service providers.

10.2 Identity verification, authentication, and authorization

The growing connectivity and services associated with modern vehicles bring new requirements for identity verification, authentication, and authorization.

Identity verification: The process of confirming a person’s identity. It can involve comparing information provided by a person with records in a database or with the person’s physical documents such as a driver’s license.

Authentication: Confirms that a person is who they say they are when attempting to sign in to systems, services, and resources.

Authorization: The step after authentication that determines user access in terms of accessing data or performing actions.

Unlike other computing devices, such as smartphones and wearables, vehicles often have multiple users including family members, friends, co-workers, or renters. Each user may need access to services or to perform transactions tied to their unique identities and credentials. Therefore, vehicular computing resources must be cyber secure and capable of managing secure access and authentication for a diverse user base, including third-party service providers.
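As a minimal sketch of how authentication and authorization separate in practice for a multi-user vehicle, consider the following; all user IDs and permissions are hypothetical.

```python
# Illustrative separation of authentication and authorization for a
# multi-user vehicle (all identifiers and permissions are hypothetical).
AUTHORIZED_USERS = {
    "owner-alice": {"drive", "pay", "update_software"},
    "renter-bob": {"drive"},
}

def authenticate(user_id: str, passkey_ok: bool) -> bool:
    """Authentication: is this person who they claim to be?"""
    return passkey_ok and user_id in AUTHORIZED_USERS

def authorize(user_id: str, action: str) -> bool:
    """Authorization: may this authenticated person perform the action?"""
    return action in AUTHORIZED_USERS.get(user_id, set())

if authenticate("renter-bob", passkey_ok=True):
    print("drive allowed:", authorize("renter-bob", "drive"))  # True
    print("pay allowed:", authorize("renter-bob", "pay"))      # False
```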

10.3 In-vehicle commerce and authentication

Commerce services in vehicles are closely tied to payments, making strong and user-friendly authentication essential. Drivers must trust that transactions are secure, manufacturers aim to minimize liability for unauthorized payments, and financial institutions require robust, standards-compliant authentication mechanisms. In addition, regulatory frameworks, such as Europe’s Payment Services Directive 2 (PSD2), mandate strong customer authentication (SCA) for cardholder-initiated transactions.

SCA requires a combination of at least two out of three factors:

Possession (something the user has, for example, a key, phone, or vehicle)

Inherence (something the user is, for example, biometrics like fingerprint or facial recognition)

Knowledge (something the user knows, for example, a PIN or password)

If the passkey authenticator is not natively integrated into the vehicle, authentication must be implemented using alternative multi-factor configurations. This can be achieved through software-based approaches, such as combining a PIN (knowledge) with the vehicle as a possession factor, or through hardware-based methods, such as biometric authentication (inherence) via fingerprint sensors or facial recognition, again anchored by the vehicle as the possession factor.
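A minimal sketch of the “two of three factors” rule follows, with hypothetical factor names; real PSD2 compliance also involves dynamic linking and exemption handling.

```python
# Illustrative check that a set of presented factors spans at least two
# of the three SCA categories (simplified; factor names are hypothetical).
SCA_FACTORS = {
    "vehicle_possession": "possession",
    "fingerprint": "inherence",
    "face": "inherence",
    "pin": "knowledge",
}

def satisfies_sca(presented):
    categories = {SCA_FACTORS[f] for f in presented if f in SCA_FACTORS}
    return len(categories) >= 2

assert satisfies_sca(["vehicle_possession", "pin"])          # possession + knowledge
assert satisfies_sca(["vehicle_possession", "fingerprint"])  # possession + inherence
assert not satisfies_sca(["fingerprint", "face"])            # one category only
```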

In-vehicle commerce can be broadly categorized into three main areas:

On-demand features

With on-demand features, vehicles now allow users to activate specific functionalities based on their needs. This includes advanced driver-assistance systems, comfort features like heated seats, and performance upgrades. On-demand features can be offered through flexible subscription models or pay-per-use systems. These features enhance customer satisfaction and create additional revenue streams for manufacturers.

Vehicle-related services

Vehicle-related services are seamlessly integrated services that include fueling, EV charging, parking reservations and payments, car washes, and toll payments. To maximize user convenience, the vehicle acts as a payment hub without reliance on a smartphone.

Convenience features

With implementation of convenience features such as shopping, entertainment, education, and even remote work functionalities, the vehicle becomes an extension of the user’s digital ecosystem. Examples include ordering coffee or groceries on the go, streaming movies, or attending virtual meetings during commutes. These categories illustrate that vehicles are no longer just modes of transportation but platforms that enable various service providers to engage with drivers and passengers.

10.4 Driver ID Verification for vehicle access control

Vehicle access requires a high level of authentication and is well suited to biometric sensors.

Keyless entry and ignition: Biometric systems like fingerprint and facial recognition can replace traditional keys to provide secure, biometric-based authentication for vehicle access and ignition.

Anti-theft measures: Vehicles can utilize biometric authentication to prevent unauthorized usage or theft, including carjacking.

Vehicle and OEM services: Vehicles can use biometric authentication as the first step in assessing a driver’s rights and privileges when determining how vehicle services can be accessed.

10.5 Personalization, fleet management and autonomous vehicles

Vehicles are often shared, and the ability to automatically adapt to a specific driver is an important capability. The criteria and thresholds for identification and authentication vary greatly depending on the specific application: for example, adjusting a driver’s seat versus authorizing a passenger in an autonomous vehicle.

10.5.1 Personalization

Adaptive in-car settings: Biometric recognition can identify drivers or passengers in order to adjust seat positions, climate controls, infotainment preferences, and navigation routes according to stored profiles.

Adaptive usage-based services: By seamlessly authenticating the driver, the automaker can provide use-based insurance or leasing and financing options for personal and commercial scenarios.

Fleet management

Shared vehicles and fleets: Biometric-enabled processes ensure smooth transitions between users in car-sharing or fleet systems, loading personal settings for each verified driver.

Compliance tracking: Digital wallets can hold compliance documents (for example, licenses and vehicle inspection reports) to reduce paperwork and enhance audit readiness by asserting compliance attributes to authorized users.

10.5.2 Autonomous vehicles

Passenger authentication and ID verification: In self-driving cars, biometric systems authenticate passengers to ensure authorized use and personalized experiences.

Why key possession is not sufficient authentication

There are several reasons why a physical key is not a sufficient form of user authentication.

A physical key verifies access to the vehicle but does not confirm the identity of the individual using it. In scenarios such as ridesharing, fleet management, or multi-user vehicles, relying solely on key possession fails to distinguish authorized users from unauthorized users.

As discussed earlier, for payment use cases there is a need in some markets to comply with SCA regulations. A key only satisfies the possession factor and therefore does not meet the SCA requirements for secure payments.

A vehicle key can be lost, stolen, or duplicated, allowing unauthorized individuals to gain access. Without additional layers of authentication, transactions made in the vehicle could be fraudulent.

Multi-party and platform complexity: In-car commerce involves multiple stakeholders such as Original Equipment Manufacturers (OEMs), service providers, and users. Authentication must ensure that the user is authorized to transact across all platforms and services, necessitating identity verification beyond simple possession.

10.6 Electronic systems and manufacturing use cases

10.6.1 In-vehicle ECU, zone controller, and compute onboarding

As the level of compute within vehicles rises, the need for efficient and fast communication becomes increasingly important. In response, cars are increasingly moving to an IT-centric architecture, with Ethernet becoming the networking technology of choice to link zone controllers and ECUs inside a vehicle.

In addition to high-speed and secure communication, there is a need to ensure that the device (ECU) and the management platform (zone controller) cryptographically authenticate each other. Although initially developed for IoT and IT systems, the FIDO Alliance team believes that FIDO Device Onboard (FDO) can be a fast and secure way to automate the onboarding process. As FDO is an open standard, automotive manufacturers can benefit from economies of scale rather than paying to develop and maintain proprietary solutions.

In addition to speed and security, FDO also provides resilience to supply chain attacks and grey market counterfeits.

10.6.2 Car to cloud onboarding

As the complexity of car features grows and autonomous driving technology increases, a modern car is essentially a computer on wheels that requires a vast amount of software for all functions to operate.

Most sources agree that a typical modern car runs on roughly 100 million lines of code. Given this complexity, the days when vehicle software could be frozen at product launch are over.

Software updates are now a mandatory feature of modern automobiles and a secure and efficient way of connecting the vehicle to the manufacturer’s cloud is essential.

FDO provides a secure and fast method for vehicles to onboard to their management platforms, making Over the Air (OTA) software updates possible.

Figure 2: FIDO fit for in-vehicle systems

Additionally, new updates to the FDO standard are expected to allow software to securely deploy to bare ECUs or zone controllers, which would greatly simplify dealership repairs and upgrades.

10.7 Workforce authentication (passkeys/FIDO keys)

For many years the IT industry has been using FIDO authenticators to ensure that only authorized staff have access to systems. The risks associated with attacks in this space have been highlighted by the recent challenges faced by some automotive dealers.

Figure 3: FIDO fit for workforce authentication

A cyberattack on a software provider for car dealerships occurred in June of 2024 and disrupted the operations of thousands of dealerships in North America. This attack caused major disruptions, including delays for car buyers and an estimated $1 billion in collective losses for dealerships.

10.8 Manufacturing use cases

Factories are moving away from classic fixed-function manufacturing systems, such as motion control and PLCs, toward far more flexible and intelligent software-defined control and AI-based vision systems. This transition introduces large numbers of general-purpose computers to the factory floor.

At installation, each server or industrial PC needs to be onboarded to its respective management platform (on-premises or cloud). This onboarding process typically requires skilled technicians to manually configure credentials or passwords on the devices, a process that is slow, insecure, and expensive.

With FIDO Device Onboard (FDO), a technician can plug in an industrial PC and have it automatically and securely onboard to the management server platform.

The following diagram shows how FDO is used to onboard the industrial PCs to the local servers which are in turn onboarded to the manufacturing cloud.

Figure 4: FIDO fit for automotive manufacturing

11. Why using standards helps

Cybersecurity standards, such as those from the FIDO Alliance, offer value in ways that are hard for any single company to achieve. These consensus-based standards represent maturity and provide consistency for the industry, which are crucial for reliable authentication and authorization. FIDO cybersecurity standards are based on diverse expertise, provide clarity in a changing cybersecurity landscape, and offer essential guidance for certification authorities and regulators as they develop new laws.

Although the automotive industry has utilized standards almost since its inception, there are still areas where companies have tried to develop their own proprietary solutions. Such solutions rarely add value for the manufacturers and require engineering talent to develop and time to maintain.

As the automotive computing platform is a system of systems, the automotive industry can benefit from lessons learned by related industries. Open standards supported by certification programs help streamline product and service development.

FIDO’s standards are essentially commoditizing authentication elements that are critical to cybersecurity, but that are not natural areas for competitive differentiation. By leveraging standards, vendors and manufacturers can now focus their resources and development efforts on higher-value services.

11.1 Benefits of partnering with the FIDO Alliance

Diverse expertise: The FIDO Alliance brings together skilled professionals from various companies, including cloud players, credit card companies, and manufacturers.

Ecosystem cohesion: Standards ensure quality, security, and interoperability within ecosystems, which is crucial for applications like payments.

Adapt to emerging threats: The threat landscape is always evolving. As an example, quantum computing represents a significant threat to commonly used encryption techniques. Although quantum computing is in a relatively early stage of maturity, standards groups such as the FIDO Alliance are already defining how to create quantum resilient solutions.

12. FIDO Certification programs for the automotive industry

The FIDO Alliance’s world-class certification programs validate that products conform to FIDO specifications and interoperate effectively, and they assess security characteristics and biometric performance. With over 1,200 FIDO Certified products from hundreds of vendors around the world, these programs unlock the value of FIDO’s open standards for vendors and buyers. By specifying FIDO Certification in their RFQs, manufacturers can be sure that their suppliers will deliver performant, secure, and interoperable products.

Automotive OEMs can seek out and leverage components that are already certified (for example, authenticators or biometric components), and the FIDO Alliance’s certification team is also developing an automotive profile with its lab partners that replicates in-car environments for more precise biometric testing. The Alliance seeks automotive sector feedback to help collectively:

Address gaps in the current certification specifications

Update specifications as needed

Issue sector-specific policies

Implement new testing procedures

For more information on FIDO Certification, visit https://fidoalliance.org/certification/.

13. Conclusion and next steps

The automotive industry and cybersecurity are evolving quickly; the FIDO Alliance’s proven and established standards and certification programs can help with a wide range of automotive industry applications. Applications include in-vehicle services and payment authentication, onboarding zone-controllers, car-to-cloud connectivity, OTA updates, and leveraging biometrics for a better driver experience.

The FIDO Alliance provides a path for automotive manufacturers and their suppliers to simplify their development processes, raise security levels, improve customer experience, reduce costs and tap into new revenue opportunities.

Feedback is welcome on the topics covered within this white paper and the FIDO Alliance encourages interested parties to engage with the Alliance and its members. FIDO Alliance members can learn more about FIDO standards and have opportunities to influence how these standards evolve. Additionally, members get the benefit of being able to engage with a broad range of thought leaders from leading companies within the broader ecosystem.

To get involved visit https://fidoalliance.org/members/become-a-member/ or use the Contact Us form at https://fidoalliance.org/contact/.

14. Appendix A – Global legislation applicable to automotive cybersecurity

National governments and international organizations have enacted regulations that require stringent cybersecurity measures throughout the automotive lifecycle, from design to operation and even end of life. These frameworks aim to shield vehicles from emerging threats and establish a baseline for safety and trust across the automotive ecosystem.

United Nations Regulations 155 and 156: These are the most prominent and clearly defined automotive cybersecurity regulations. Adopted under the WP.29 framework in 2021, UN R155 and R156 are globally recognized and mandate that vehicles incorporate a Cybersecurity Management System (CSMS) and a Software Update Management System (SUMS). These regulations are prerequisites for type approvals in over 50 countries, including most EU nations, Japan, South Korea, and Australia (UNECE, 2021).

ISO/SAE 21434: This standard provides the foundation for global automotive cybersecurity engineering, outlining processes for managing cyber risks throughout the entire vehicle lifecycle. It complements existing regulations and aids manufacturers in complying with mandatory regulations such as UN R155 (ISO, 2021).

China’s GB 44495-2024 and GB 44496-2024: Introduced in the summer of 2024, these regulations mirror UN R155 and R156 but are more detailed and prescriptive. GB 44495 outlines cybersecurity requirements for connected vehicles, while GB 44496 governs secure software updates. China’s focus on intelligent connected vehicles highlights its ambition to lead in autonomous and connected technologies (Shadlich, 2024).

India’s AIS 189 and AIS 190: India has introduced AIS 189 and AIS 190, standards aligned with UN R155 and R156, to regulate the cybersecurity of connected vehicles. These frameworks emphasize risk management, monitoring, secure communication protocols, and secure software updates, similar to UN R155/R156 (Vernekar, 2024).

United States: While there are no mandated federal regulations for automotive cybersecurity, the National Highway Traffic Safety Administration (NHTSA) has published cybersecurity best practices. These guidelines emphasize secure vehicle development processes, incident response plans, and continuous risk monitoring. They align with ISO/SAE 21434 and offer a proactive approach to mitigating vulnerabilities in connected vehicles (NHTSA, 2022).

Document history

| Change | Description | Date |
| --- | --- | --- |
| Initial publication | White paper first published. | 7-2025 |

15. Contributors

Conor White, Daon, Inc
Richard Kerslake, FIDO Alliance
Andrew Shikiar, FIDO Alliance
Nimesh Shrivastava, Qualcomm Inc
Drew Van Duren, Qualcomm Inc
Jens Kohnen, Starfish GmbH & Co. KG
Tin T. Nguyen, VinCSS JSC
Henna Kapur, Visa

16. References

Harley, M. (2024, March 28). EU Cybersecurity Laws Kill Porsche’s 718 Boxster and Cayman Early. Retrieved from https://www.forbes.com/sites/michaelharley/2024/03/28/eu-cybersecurity-laws-kill-porsches-718-boxster-and-cayman-early/

ISO. (2021). ISO/SAE 21434:2021 Road vehicles—Cybersecurity engineering. International Organization for Standardization. Retrieved from https://www.iso.org/standard/70918.html

Miller, C., & Valasek, C. (2015, July 21). Hackers remotely kill a Jeep on the highway—With me in it. Wired. Retrieved from https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway

National Highway Traffic Safety Administration (NHTSA). (2022, September 7). Cybersecurity best practices for new vehicles. NHTSA. Retrieved from https://www.nhtsa.gov/press-releases/nhtsa-updates-cybersecurity-best-practices-new-vehicles

Shadlich, E. (2024, September 2). China’s New Vehicle Cybersecurity Standard: GB 44495-2024. Retrieved from https://dissec.to/general/chinas-new-vehicle-cybersecurity-standard-gb-44495-2024/

UNECE. (2021). UN Regulation No. 155 – Cyber security and cyber security management system. UNECE. Retrieved from https://unece.org/transport/documents/2021/03/standards/un-regulation-no-155-cyber-security-and-cyber-security

University of Detroit Mercy. (n.d.). Vehicle cybersecurity engineering program. Retrieved from https://eng-sci.udmercy.edu/academics/engineering/vehicle-cyber-eng.php

Vernekar, A. (2024, October 10). Securing The Future Of Indian Automobiles: Understanding AIS-189 And Cybersecurity For Vehicles. Retrieved from https://vayavyalabs.com/blogs/securing-the-future-of-indian-automobiles-understanding-ais-189-and-cybersecurity-for-vehicles/

Walsh College. (n.d.). Bachelor of Science in Automotive Cybersecurity. Retrieved from https://walshcollege.edu/walsh-undergraduate-degree-programs/bachelor-of-science-in-information-technology/bachelor-of-science-in-automotive-cybersecurity/


Blockchain Commons

Musings of a Trust Architect: When Technical Standards Meet Geopolitical Reality


Reflections on recent conversations about digital identity, sovereignty, and the erosion of foundational principles

Echoes from Geneva

I wasn’t present at the Global Digital Collaboration conference (GDC25), but the observations shared by colleagues who attended have crystallized some issues I’ve been wrestling with for years. I should note there’s a selection bias here: I’m the author of the 10 principles of self-sovereign identity, so my community tends to have strong opinions about digital identity. Still, when multiple trusted voices independently report similar concerns, patterns emerge that are worth examining. And these weren’t casual observers sharing these concerns. They were seasoned practitioners who’ve spent decades building identity infrastructure. Their collective unease speaks to something deeper than technical disagreements.

It’s hard to boil the problems at GDC25 down to a single issue, because they were so encompassing. For example, there was a pattern of scheduling issues that undercut the community co-organizing goal of the conference and seemed to particularly impact decentralized talks. One session ended up in a small, hot room on the top floor that was hard to find. (It was packed anyway!) Generally, the decentralized-centric talks were in bad locations, they were short, they had restricted topics, or they were shared with other panelists.

I think that logistical shuffling of events may point out one of the biggest issues: decentralized systems weren’t given much respect. This may be true generally. There may be lip service to decentralized systems, but not deeper commitments. Its value isn’t appreciated, so we’re losing its principles. Worse, I see the intent of decentralization being inverted: where our goal is to give individuals independence and power by reducing the control of centralized entities, we’re often doing the opposite — still in the name of decentralization.

The Echo Chamber Paradox

The problems at GDC25 remind me of Rebooting the Web of Trust (RWOT) community discussions I’ve been following, which reiterate that this is a larger issue. We debate the finer points of zero-knowledge proofs and DID conformance while missing the forest for the trees. Case in point: the recent emergence of “did:genuineid” — a centralized identifier system that fundamentally contradicts the “D” in DID.

Obviously, decentralization is a threat to those who currently hold power (whether they be governments, corporations, billionaires, or others who hold any sort of power), because it tries to remove their centralization (and therefore their power), to instead empower the individual. But if we can’t even maintain the semantic integrity of “decentralized” within our own technical community, devoted to the ideal, how can we fight for it in the larger world?

The Corpocratic Complication

GDC25 was held in Geneva, Switzerland, where 30+ standards organizations convened to discuss the future of digital identity. Participants spanned the world, from the United States to China. GDC25 had the opportunity to be a truly international conference. Indeed, Swiss presenters were there, and they spoke of privacy, democratic involvement, and achieving public buy-in: exactly the themes that we as decentralized technologists wanted to hear.

But from what I’ve heard, things quickly degraded from that ideal. Take the United States. The sole representative of the country as a whole attended via teleconference. (He was the only presenter who did so!) His talk was all about Real ID, framed as a response to 9/11 and rooted in the Patriot Act. It lay somewhere between security-theatre and identity-as-surveillance, and that’s definitely not what we wanted to hear. (The contrast between the US and Swiss presentations was apparently jarring.)

And with that representative only attending remotely, the United States’ real representatives ended up being Google and Apple, each advancing their own corpocratic interests, not the interests of the people we try to empower with decentralized identities.

This isn’t just an American problem. It’s a symptom of a deeper issue happening across our digital infrastructure. It’s likely the heart of the inversions of decentralized goals that we’re seeing — and likely why those logistical reshufflings occurred: to please the gold sponsors. In fact, the conference sponsors tell the story: Google, Visa, Mastercard, and Huawei were positioned as “leading organizations supporting the advancement of wallets, credentials and trusted infrastructure in a manner of global collaboration.”

While Huawei’s presence demonstrates international diversity—a Swiss conference bringing together Europe and Asia—it also raised questions about whose vision of “trust” would ultimately prevail. When payment platforms and surveillance-capable tech giants frame the future of identity infrastructure, we shouldn’t be surprised when the architecture serves their interests first.

This echoes my concerns from “Has SSI Become Morally Bankrupt?”. We’ve allowed the narrative of self-sovereignty to be co-opted by the very platforms it was meant to challenge. The technical standards exist, but they’re being implemented in ways that invert their original purpose. Even UNECE sessions acknowledged the risk of “diluting the autonomy and decentralization that SSI is meant to provide.”

The Sovereignty Shell Game

Google’s partnership with Germany’s Sparkasse on ZKP technology revealed a specific example of this co-opting.

Google’s open-sourcing of its Zero-Knowledge Proof libraries, announced July 3rd in partnership with Germany’s network of public savings banks, was positioned as supporting privacy in age verification. Yet as Carsten Stöcker pointed out, zero-knowledge doesn’t mean zero-tracking when the entire stack runs through platform intermediaries. Carsten noted that Google has “extensive tracking practices across mobile devices, web platforms and advertising infrastructure.” Meanwhile, the Google Play API makes no promises that the operations are protected from the rest of the OS.

The Google ZKP libraries (“longfellow-zk”) could be a great building block for truly user-centric systems, as they link Zero-Knowledge Proofs to legacy cryptographic signature systems that are still mandatory for some hardware. But they’d have to be detached from the rest of Google’s technology stack. Without that, there are too many questions. Could Google access some of the knowledge supposedly protected by ZKPs? Could they link it to other data? We have no idea.

The European Union’s eIDAS Regulation, set to take effect in 2026, encourages Member States to integrate privacy-enhancing technologies like ZKP into the European Digital Identity Wallet, but integration at the platform level offers similar dangers and could again invert the very privacy guarantees ZKP promises.

Historical Echoes, Modern Inversions

Identity technology’s goals being inverted, so that identity becomes a threat rather than a boon, isn’t a new problem. In “Echoes of History”, I examined how the contrasting approaches of Lentz and Carmille during WWII demonstrate the life-or-death importance of data minimization. Lentz’s comprehensive Dutch identity system enabled the Holocaust’s efficiency; Carmille’s deliberate exclusion of religious data from French records saved lives. Even when they’re decentralized, today’s digital identity systems face the same fundamental questions: what data should we collect, what should we reveal, and what should we refuse to record entirely?

But we’re adding a new layer of complexity. Not only must we consider what data to collect, but who controls the infrastructure that processes it. When Google partners with Sparkasse on “privacy-preserving” age verification, when eIDAS mandates integration at the operating system level, we’re not just risking data collection: we’re embedding it within platforms whose business models depend on surveillance. Even if the data is theoretically self-sovereign, the threat of data collected is still data revealed — just as happened with Lentz’s records.

The European eIDAS framework, which I analyzed in a follow-up piece to “Echoes from History”, shows how even well-intentioned regulatory efforts can accelerate platform capture when they mandate integration at the operating system level. As I wrote at the time, a history of problematic EU legislation that had the best of intentions but resulted in unintended consequences has laid the groundwork, and now identity is squarely in the crosshairs. One of the first and most obvious problems with eIDAS is the mandate “that web browsers accept security certificates from individual member states and the EU can refuse to revoke them even if they’re dangerous.” There are many more — and I’m not the only voice on eIDAS and EUDI issues.

Supposedly self-sovereign certificates phoning home whenever they’re accessed is another recent threat that demonstrates best intentions gone awry. This not only violates privacy, but it undercuts some of our best arguments for self-sovereign control of credentials by returning liability for data leaks to the issuer. The No Phone Home initiative that Blockchain Commons joined last month represents one attempt to push back on that, but it feels like plugging holes in a dam that’s already cracking. It all does.

The Builder’s Dilemma

What troubles me most is the split I see in our community. On one side, technology purists build increasingly sophisticated protocols in isolation from policy reality. On the other, pragmatists make compromise after compromise until nothing remains of the original vision.

The recent debates about did:web conformance illustrate this perfectly. Joe Andrieu correctly notes that did:web can’t distinguish between deactivation and non-existence — a fundamental security boundary. Yet did:web remains essential to many implementation strategies because it bridges the gap between ideals and adoption. It gives developers and users experience with DIDs, but in doing so undercuts decentralized ideals for those same users. We’re caught between philosophical purity and practical irrelevance.

In my recent writings on Values in Design and the Right to Transact, I’ve tried to articulate what we’re fighting for. But values without implementation are just philosophy, and implementation without values is just surrender.

The Global Digital Collaboration highlighted this tension perfectly. International progress on digital identity proceeds apace: Europe, Singapore, and China all advance their frameworks, but there are still essential issues that invert our fundamental goals in designing self-sovereign systems. Meanwhile, the U.S. remains even more stalled, its position represented only by the platforms that benefit from the status quo. Alongside this, technical standards discussions proceed in isolation from the policy, regulatory, and social frameworks that will determine their real-world impact.

Where Do We Go From Here?

I find myself returning to first principles. When we designed TLS 1.0, we understood that technical protocols encode power relationships. When we established the principles of self-sovereign identity, we knew that architecture was politics. Ongoing battles, such as those between Verifiable Credentials and ISO mDLs, between DIDComm and OpenID4VC, demonstrate disagreements over these power relationships made visible in technological discussions.

The question now is whether we can reclaim our ideals before they’re completely inverted by the side of centralized power and controlled architecture.

The path forward requires bridging the gaps Geneva revealed:

Between corporate platform dominance and global digital sovereignty

Between the promise of decentralization and the reality of recentralization

Between technical standards and policy reality

Between privacy absolutism and implementation pragmatism

A Personal Note

After three decades of building internet infrastructure, I’ve learned that the most dangerous moment isn’t when systems fail, it’s when they succeed in ways that invert their purpose. We built protocols for human autonomy and watched them become instruments of platform control. We created standards for decentralization and see them twisted into new forms of centralization.

This conversation continues in private Signal groups, in conference hallways, in the space between what we built and what we’ve become. The Atlantic Council warns of power centralizing “in ways that threaten the open and bottom-up governance traditions of the internet.” When critics from across the geopolitical spectrum — from sovereignty advocates to digital rights groups — all sense something amiss, it suggests a fundamental architectural problem that transcends ideology.

Perhaps it’s time for a new architecture: one that acknowledges these inversions and builds resistance into its very foundations.

But that’s a longer conversation for another day.

Christopher Allen has been architecting trust systems for over 30 years, from co-authoring TLS to establishing self-sovereign identity principles. He currently works on alternative approaches to digital identity through Blockchain Commons.

Community Responses & Discussion since Publication

This article sparked significant discussion across the digital identity community:

Mailing List Discussion

- W3C Credentials Community Group Thread (39 messages, July 16-17)
  - Debate between pragmatic incrementalism vs. human rights imperatives
  - Questions about whether current standards help or hinder decentralization
  - Concerns about “death by 1000 compromises” in SSI implementation
  - My own synthesis and response to this CCG thread, including highlighting Utah’s “recognizer not issuer” as an alternative model
- De-platforming humans sub-thread (19 messages, July 17)
  - Adrian Gropper proposes moving beyond SSI as “anti-pattern”
  - Discussion of Nostr as alternative architecture
  - Debate over whether did:web is truly decentralized given DNS dependencies

Response Articles

- A Pattern of Moral Crisis - Kyle Den Hartog: Examines how technologies get co-opted during times of crisis, drawing parallels to historical censorship patterns
- Centralized SSI - Kyle Den Hartog: Analyzes how trust architectures themselves, not just technology, determine whether systems preserve or remove agency
- Cyber Storm Rising: Designing for the Warzone - Carsten Stöcker: Reframes decentralization as urgent cybersecurity necessity, not just privacy concern, citing Ukraine’s experience
- Choose Love and Joy - Will Abramson: Optimistic perspective on using advanced cryptography and blockchain “hardness” to build kinder digital futures
- Privacy in EUDI - Jaromil (Dyne.org): Technical analysis of European Digital Identity implementation and privacy implications
- Decentralized Age Verification - Kyle Den Hartog: Concrete proposal for privacy-preserving content moderation that shifts roles within the SSI triangle

Join these ongoing discussions or share your perspective.

Monday, 14. July 2025

The Rubric

Didja JWK? We did! (did:jwk, Part 2)

did:jwk embeds a JSON Web Key (JWK) in a DID to enable the use of JWKs in DID-enabled systems. Simple and straightforward, it promises to give did:key and did:pkh a run for their money. We talk with two of the co-authors of did:jwk, Jeremie Miller, known for creating Jabber and XMPP, and Orie Steele, CTO...

Didja JWK? We did! (did:jwk, Part 1)

did:jwk embeds a JSON Web Key (JWK) in a DID to enable the use of JWKs in DID-enabled systems. Simple and straightforward, it promises to give did:key and did:pkh a run for their money. We talk with two of the co-authors of did:jwk, Jeremie Miller, known for creating Jabber and XMPP, and Orie Steele, CTO...

Saturday, 12. July 2025

Human Colossus Foundation

Moving on with Overlays Capture Architecture 2.0: A New Era of Semantic Interoperability


The Overlays Capture Architecture (OCA) is a foundation of the Dynamic Data Economy, structuring and presenting data in a traceable and verifiable way. Recognising today’s need for interoperability between standards, the architecture takes a major leap forward with the release of OCA v2, introducing a modular and extensible approach to the definition of overlays.

Community 2.0

The new specification centers around making overlays easier to define, share, and validate — even by non-technical users. At its core, OCA v2 introduces the concept of Community Overlays, empowering ecosystems to create and maintain their own overlay definitions without deep technical barriers.

We used the term Community (i.e., not ecosystem) to stress the need for the flexibility to create specific overlays for a given purpose shared by multiple users. For example, in science this approach helps to define the ultra-precise meaning of certain datasets while keeping them in line with common standards. In compliance, the approach enables the definition of data structures that must match specific regulatory constraints in a given jurisdiction. In supply chains, the approach enables domain-specific definitions of data exchanges to be used across the entire chain without requiring a centralised authority.

Definition of Community-Driven Overlay

Enabling Community Overlays required a different approach to the way overlays are defined. Instead of specifying them in the OCA specification, we created a Domain-Specific Language (DSL) that allows overlay definitions to be written in what we call an OVERLAYFILE.

OVERLAYFILE is a text-based file which can consist of one or many definitions of various overlays. If you are familiar with programming, think of it as a *.h (header) file from C++. If you are not familiar with technology, think of it as the exact structure of the dataset your boss, team, department or company validated for use.

With clearly defined overlays, the relevant tooling can support any community-defined overlay and simplify the way overlays are managed across the whole ecosystem. When you use it, you have cryptographic assurance that you are using what the community has validated for use.

A new file type, .overlaysfile, allows communities to formally define their own overlays. By separating overlay definitions from usage, ecosystems can establish governance and enable cross-project reusability. Overlay repositories further support this, enabling easy distribution and import of overlays.

Enhanced Modularity and Validation

The new approach allows overlays to define schemas, ensuring proper structure and data integrity. Tooling can now validate authored overlays against community-defined schemas, reducing errors and increasing trust.

OCA Bundle in JSON format

An OCA Bundle is a set of OCA objects serving as an envelope for the distribution and usage of the semantics. With v2 we finally introduce the long-awaited new encoding format, which replaces the old .zip file with a simple JSON object. This format has been in testing for quite some time in the OCA reference implementation and is now an official part of the OCA specification, simplifying the tooling and the way the bundle is transmitted.
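As a purely illustrative sketch of what a JSON-encoded bundle might look like, the Python snippet below serializes a minimal bundle containing a capture base and one overlay. The field names and values are assumptions for illustration only; consult the OCA v2 specification for the normative structure.

```python
# Hypothetical shape of a JSON-encoded OCA Bundle (field names are
# illustrative, not the normative OCA v2 schema).
import json

bundle = {
    "capture_base": {
        "type": "capture_base",
        "attributes": {"name": "Text", "birth_date": "DateTime"},
    },
    "overlays": [
        {
            "type": "label",
            "language": "en",  # an ISO 639-1 language code
            "attribute_labels": {"name": "Full name", "birth_date": "Date of birth"},
        }
    ],
}

# The bundle now travels as a plain JSON object instead of a .zip archive.
print(json.dumps(bundle, indent=2))
```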

OCA Specification v2

Finally, we release RC1 of the Overlays Capture Architecture Specification v2.0.0, which signals the readiness of the features and functions mentioned above. Implementation has already started in our reference implementation, which can be followed at oca-rs.

Below is a list of the major changes that version 2.0.0 brings:

The Sensitive overlay replaces PII flagging in the capture base. This enables an enhanced approach to privacy and other risks, flagged directly in the schema.

Categories from the Label overlay have moved to the presentation layer, strictly enforcing the distinction between data’s inner structure and meaning and its presentation.

Certain previously “core” overlays are upgraded to “Community Overlays”. All overlays listed below are nominated as community overlays and hosted in an Overlays Repository:

Information

Transformation

Presentation

Layout

Conditional

Unit mapping

Introduce SemVer for all objects

Support for ISO 639-1 and 639-3 language codes

Support for namespacing in overlay name

OCA 2.0 is built for stronger, verifiable data integrity

All objects of the Overlays Capture Architecture (the Capture Base, Overlays, and Bundle) are compatible with CESR encoding. This ensures that those objects can be authenticated using DKMS, a KERI-based decentralised key management system.

This is pivotal to establishing cryptographically verifiable integrity of data objects. This property enables the implementation of distributed governance models, a topic that will receive much attention at the Human Colossus Foundation.

Webinar at DSWG

The Technology Council, which is responsible for maintaining the OCA specification and reference implementation, hosted a webinar during the Decentralised Semantic Working Group where it delved into the details of the design of the new architecture and all the features of version 2.0.

Decentralised Semantic Working Group - Overlays Capture Architecture 2.0

Looking Ahead

With OCA v2, the Overlays Capture Architecture moves closer to its vision of an open, extensible semantic ecosystem where organizations and communities can seamlessly create, validate, and share schemas.

Friday, 11. July 2025

FIDO Alliance

National World: 16 billion passwords leaked: How to protect yourself as cybersecurity experts warn of repeat attacks


Cybersecurity experts are urging internet users to take immediate steps to secure their online accounts, after the largest-ever data leak exposed more than 16 billion login credentials, including from major platforms like Google, Facebook, Apple, and even government services.

The breach, discovered by researchers at Cybernews, is believed to have been carried out using infostealers that harvested login data and other sensitive credentials from multiple platforms. “This is not just a leak – it’s a blueprint for mass exploitation,” Cybernews said in a statement. “With over 16 billion login records exposed, cybercriminals now have unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing.”


ZD NET: How passkeys work: Your passwordless journey begins here


Over the last few decades, compromised usernames and passwords have typically been at the root of some of the most sensational, damaging, and costly data breaches. An incessant drumbeat of advice about how to choose and use strong passwords and how not to fall prey to social engineering attacks has done little to keep threat actors at bay. 


MobileIDWorld: Meta Rolls Out Passkey Authentication for Facebook Mobile Users Globally


Meta has begun rolling out passkey login authentication for Facebook users on iOS and Android mobile devices, marking a significant advancement in the industry-wide movement away from traditional password-based security. The implementation follows similar moves by tech giants Apple, Google, and Microsoft who have been leading the charge toward passwordless authentication.

The new passkey feature will become available to users globally over the coming weeks. To use the functionality, users must have devices that support FIDO2/WebAuthn standards, which are commonly found in modern iOS and Android smartphones. These standards, developed through collaboration between the FIDO Alliance and the World Wide Web Consortium (W3C), provide a secure framework for passwordless authentication that has been widely adopted across the technology industry.


Project VRM

Market intelligence that flows both ways


Look up customer journey or customer experience (aka CX) and you’ll find nothing about what the customer drives, or rides. All results will be for systems meant for herding customers like cattle into a chute that the CX business (no kidding) calls a sales funnel:

Do any customers want to go down these drains?

But let’s stick with the journey metaphor, because there are good people in the marketing business who have thought deeply about how people buy and own things. Chief among those people is Esteban Kolsky, of Constellation Research. He visualizes the journey in a way that not only gives weight to the ownership experience, but separates it from the sales experience:

As for our actual experience, we spend 100 percent of our lives with things we own, and just a tiny percentage on buying them. So the real ratio should look more like this:

And yet, as I pointed out several years back in Turning the Customer Journey Into a Virtuous Cycle

…consider the curb weight of “solutions” in the world of interactivity between company and customer today. In the BUY loop of the customer journey, we have:

1. All of advertising, which Magna Global expected to pass $.5 trillion more than a decade ago

2. All of CRM/CX, which now exceeds $100 billion

3. All the rest of marketing, which has too many segments for me to bother looking up

So, in the OWN loop we have a $0 trillion greenfield.

To enter that greenfield, we need customers to be in charge of their side of these relationships— preferably through means for interaction that customers themselves control—on terms that are agreeable to both sides, rather than the one-sided terms we suffer every time we click AGREE on a cookie notice.

To help imagine how that will work, I volunteer a real-world example from my own life.

A few years back, I bought a pair of LAMO Mens Mocs at a shopping mall kiosk in Massachusetts. Here’s one:

I like them a lot. They’re very comfortable and warm on winter mornings. In fact I still wear them, even though the soles have long since come apart and fallen off. Here is how they looked after a few years of use:

I’m showing this so you, and LAMO, can see what happens, and how we can both use my experience—and those of other customers—to change the world.

See, I like LAMO, and would love to help the company learn from my experience with one of their products. As of today, there are four choices for that:

1. Do nothing (that’s the default).
2. Send them an email.
3. Go on some website and talk about it. (A perfect Leighton cartoon in the New Yorker shows a couple registering at a hotel while the person behind the counter says, “If there’s anything we can do to make your stay more pleasant, just rant about it on the Internet.”)
4. Get “social” by tweeting to @LAMOfootwear or posting to LAMO’s Facebook page. (For wisdom on “social” relations between brands and presumed fans, see Bob Hoffman‘s talk on the topic.)

So here is a fifth choice: give these moccasins their own virtual cloud, where LAMO and I can share intelligence about whatever we like, starting (on my side) with reports on my own experience, requests for service, or whatever. Phil Windley calls these clouds picos, for persistent compute objects. Picos are breeds of what Bruce Sterling calls spime: persistent intelligence for things. Picos have their own operating system (e.g., Wrangler, which Phil most recently posted about here), and don’t need intelligence on board. Just scan a QR code, and you’ll get to the pico. Here’s the QR code on one of my LAMO moccasins:

Go ahead and scan the code with your phone. You’ll get to a page that says it’s my moccasin.

That’s just one view of a potential relationship between me and Lamo — one in which I can put a message that says “If found, call or text _______.” Another view is on my own dashboard of things in my OWN cycle, and direct connections to every one of those companies. That relationship can rest on friendly terms in which I’m the first party and the company is the second party. (For more on that, see here and here.)

So look at the relationship between me and Lamo as a conduit (the blue cylinder below) that lives in the pico for my mocassin. That conduit goes from my VRM (vendor relationship management) dashboard to Lamo’s CRM (customer relationship management) system. There is no limit to the goodness that can pass back and forth between us, including intelligence about how I use my moccasins.

Let’s look at what can happen at either or both ends of that conduit.

A pico for a product is a CRM dream come true: a standard way for every copy of every product to have its own unique identity and virtual cloud (in which any data can live), and standard way any customer can report usage and other intelligence about any product they own—without any smarts needing to live on the thing itself.

If I scan that QR code, I can see whatever notes I’ve taken. I can also see whatever LAMO has put in there, with my permission. Also in that cloud is whatever programming has been done on it. Here is one example of simple relationship logic at work:

IF this QR code is scanned, THEN send LAMO a note that Doc has a new entry in our common journal.

Likewise, LAMO can send me a note saying that there is new information in the same journal. Maybe that information is a note telling me that the company has changed sole manufacturers, and that the newest Mens Mocs will be far more durable. Or maybe they’ll send a discount on a new pair. The correct answer for what goes in the common journal (a term I just made up) is: whatever.
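As an illustration of that relationship logic, here is a minimal sketch in Python. Real picos are programmed in KRL on the pico engine; this version and every name in it are hypothetical.

```python
# Illustrative sketch of the "common journal" rule: IF the QR code is
# scanned and an entry is added, THEN notify the other party.
journal = []      # shared between the owner (Doc) and the vendor (LAMO)
subscribers = []  # parties notified when the journal changes

def subscribe(callback):
    subscribers.append(callback)

def on_qr_scanned(entry, author):
    journal.append({"author": author, "entry": entry})
    for notify in subscribers:
        notify(f"{author} added a new entry to the common journal")

subscribe(lambda msg: print("To LAMO CRM:", msg))
on_qr_scanned("Sole separated after two winters of use", author="Doc")
```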

Now let’s say LAMO puts a different QR code, or other identifier, in every moccasin it sells. Or has a CRM system that is alert to notifications from customers who have turned their LAMO moccasins into picos, making all those moccasins smart. LAMO can then not only keep up with its customers through CRM-VRM conduits, but tie interactions through those conduits to the dashboards of their accounting systems (from Xero or other companies that provide enriched views of how the company is interacting with the world).

This is one huge potential key to the future of customer service, customer relationship management (CRM), call centers, loyalty programs, continuous improvement, customer experience (CX), customer engagement (CE), and other complicated ways businesses today try to solve all relationship problems from the maker’s or the seller’s side alone.

Follow the links in the last paragraph (all to Wikipedia), and you’ll find each of them has “multiple issues.” The reason for that is simple: the customer is not involved with any of them. All those entries make the sound of industries talking to themselves — or one hand slapping.

This is an old problem that can only be fixed on the customer’s side. Before the Internet, solving things from the customer’s side — by making the customer the point of integration for her own data, and the decider about what gets done with that data — was impossible. Now that we have the Internet, it’s very possible, but only if we get our heads out of business-as-usual and back into our own lives. This will be good for business as well.

A while back I had meetings with two call center companies, and reviewed this scenario:

A customer scans the QR code on her cable modem, activating its pico. By the logic described above, a message to the call center says “This customer has scanned the QR code on her cable modem.” The call center checks to see if there is an outage in the customer’s area, and — if there is — finds out how soon it will be fixed. The call center sends a message back saying there’s an outage and that it will be fixed within X hours.
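
A rough sketch of that exchange, with a hypothetical outage table and handler standing in for the call center’s real systems:

```python
# Toy model of the cable-modem scenario; all names and data are illustrative.
outages = {"02138": 4}  # service area -> hours until the outage is fixed

def handle_modem_scan(service_area: str) -> str:
    # Triggered when the customer scans the QR code on her cable modem.
    hours = outages.get(service_area)
    if hours is not None:
        return f"There is an outage in your area. It should be fixed within {hours} hours."
    return "No outage reported in your area. Let's troubleshoot your modem."

print(handle_modem_scan("02138"))
```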

In both cases, the call center company said, “We want that!” Because they really do want to be fully useful. And — get this — they are programmable.

Unfortunately, in too many cases, they are programmed to avoid customers or to treat them as templates rather than as individual human beings who might actually be able to provide useful information. This is old-fashioned mass-marketing thinking at work, and it sucks for everybody. It’s especially bad at delivering (literal) on-the-ground market intelligence from customers to companies.

Call centers would rather be sources of real solutions than customer-avoidance machines for companies and anger sinks for unhappy customers. The solution I’m talking about here takes care of that. And much more.

Now let’s go back to shoes.

I’m not a hugely brand-loyal kind of guy. I use Canon cameras because I like the long-standing 5D user interface more than the competing Nikon ones, and Canon’s lens prices tend to be lower. I use Apple computers because they’re easy to get fixed and I can open a command line shell and get geeky when I need to. I drive a 2017 VW wagon because I got it at a good price. And I buy Rockport shoes because, on the whole, they’re pretty good.

Used to be they were great. That was in the ’70s and early ’80s when Saul and Bruce Katz, the founders, were still in charge. That legacy is still there, under Reebok ownership; but it’s clear that the company is much more of a mass marketing operation than it was back in the early days. Still, in my experience, they’re better than the competition. That’s why I buy their shoes. Rockports are the only shoes I’ve ever loved. And I’ve had many.

So here is a photo I took of wear-and-tear on two pairs of Rockport casual shoes I still use, because they’re damned comfortable:

Shots 1 and 2 are shoes I bought in June 2012, and are no longer sold, near as I can tell. (Wish they were.) Shots 3 and 4 are of shoes called Off The Coast 2 Eye. I bought mine in late 2013, but didn’t start wearing them a lot until early this year. I bought both at the Rockport store in Burlington Mall, near Boston. I like that store too.

The first pair has developed a hole in the heel and loose eyelet grommets for the laces around the side of the shoe. The hole isn’t a big deal, except that it lets in water. The loose eyelets are only a bother when I cross my feet sitting down: they bite into the other ankle. The separating outer sole of the second pair is a bigger concern, because these shoes are still essentially new, and look new except for that one flaw. A design issue is the leather laces, which need to be double-knotted to stay tied, and even the double knots come undone. That’s a quibble, but perhaps useful for Rockport to know.

I’d like to share these experiences privately with Rockport, and for that process to be easy. Same with my experiences with LAMO moccasins.

It could be private if Rockport and LAMO footwear came with QR codes for every pair’s pico — its own cloud. Or if Rockport’s CRM or call center system was programmed to hear pings from my picos.

Ideally, customers would get the pico along with the shoe. Then they would have their own shared journal and message space — the conduit shown above — as well as a programmable system for creating and improving the whole customer-company relationship. They could also get social about their dialogs in their own ways, rather than only within Facebook and Twitter, which are the least private and personal places imaginable.

This kind of intelligence exchange can only become a standard way for companies and customers to learn from each other if the code for picos is open source. If Rockport or LAMO try to “own the customer” by locking her into a closed company-controlled system — the current default for customer service — the Internet of Things will be what Phil calls “the Compuserve of things”. In other words, divided into the same kind of closed and incompatible systems we had before the Net came along.

One big thing that made the Internet succeed was substitutability of services. Cars, banks, and countless other product categories you can name are large and vital because open and well-understood standards and practices at their base have made substitutability possible. Phil says we can’t have a true Internet of Things without it, and I agree.

The smartest people working for companies are their customers. And the best way to activate customer smarts is by giving them scale. That’s what picos do.

As a bonus, they also give companies scale. If we can standardize picos, we’ll have common and standard ways for any customer and any company to relate to each other through any VRM + CRM system. Think about how much more, and better, intelligence a company can get from its customers this way, rather than through the systems barely succeeding now, where the company does all the work and fails to know an infinitude of useful stuff customers could be telling them. Think about how much more products can be improved, and iterated over time. Think about how much more genuine loyalty can be created and sustained with this kind of two-way system.

Then think how much companies can save by not constantly spying on customers, guessing about what they might want, spamming them with unwanted and unnecessary sales messages, maintaining systems that try to relate but actually can’t, and herding customers into imaginary funnels that customers would loathe if they could see what’s going on.

It’s a lot.

So let’s start working on growing a sane world of business that’s based on market intelligence that flows both ways, instead of the surveillance-based guesswork and delusional imaginings of marketing that smokes its own exhaust. We can do it, privately, and at scale.

The first ancestor of this post appeared at ProjectVRM on 19 April 2014. It was updated a bit on 8 June 2017. The second one was posted here on Medium in 2016. With IEEE P7012, aka MyTerms, nearing completion, there is a good chance we can make this dream come true in 2026.

Thursday, 10. July 2025

The Engine Room

Help Us Map Responses to Tech-Facilitated Intimate Partner Violence Affecting Young Women and LGBTIQ+ Activists

We are excited to share that we are starting a new project in partnership with the Numun Fund to map organizations and community responses addressing technology-facilitated gender-based violence (TFGBV), specifically intimate partner violence (IPV) affecting girls, young women and LGBTIQ+ activists in the Majority World.

The post Help Us Map Responses to Tech-Facilitated Intimate Partner Violence Affecting Young Women and LGBTIQ+ Activists appeared first on The Engine Room.


Weaving Sound Memories: Exploration and Care of the Oír Más Archive

We reconstruct memory through listening. Over the past three months, as we shared in April, we immersed ourselves with our ears wide open and a shared desire to explore a digital sound archive that began taking shape over 20 years ago.

The post Weaving Sound Memories: Exploration and Care of the Oír Más Archive appeared first on The Engine Room.

Wednesday, 09. July 2025

Next Level Supply Chain Podcast with GS1

Why Smart Brands Are Rethinking Supply Chain Resilience

How are brands thinking about their supply chains five years after the COVID-19 crisis?

Companies are now leaning more heavily into innovation, from making their operations resilient to market changes to launching sustainability initiatives.

In this episode, Stephanie Mehta, CEO and Chief Content Officer at Mansueto Ventures, joins hosts Reid Jackson and Liz Sertl following her keynote at GS1 Connect. They discuss why the supply chain is now at the center of innovation and how companies can stay ahead of changes in the economy and evolving customer demands.

Drawing on her experience leading Fast Company and Inc., Stephanie shares how resilience, sustainability, and data-driven thinking are transforming the business landscape for companies of every size.

 

In this episode, you’ll learn:

How companies are using the supply chain to drive innovation and acceleration

Why executives are rethinking product packaging, automation, and logistics

The impact of data and social media on a company’s operations

 

Jump into the conversation:

(00:17) Introducing Next Level Supply Chain

(02:32) Stephanie Mehta’s journey from business journalist to CEO

(03:27) How GS1 US collaborates with media brands

(04:13) Reaching small businesses with supply chain storytelling

(05:41) What most people miss about barcodes

(07:35) Why innovation should not stop post-crisis

(08:23) How sustainability aligns with consumer demand

(11:16) Solving forecasting and inventory with better data

(12:05) The logistics of HP’s sustainability initiative

(15:56) Social media’s growing impact on supply chains

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Stephanie Mehta on LinkedIn

Check out Mansueto Ventures

Tuesday, 08. July 2025

The Engine Room

WHEN SAFETY IS POLITICAL: DIGITAL SOLUTIONS FOR TRANS YOUTH IN UGANDA

Through our Matchbox Program, we partner with civil society organizations to strengthen their work by harnessing the power of data and technology. Our recent partnership with Trans Youth Initiative-Uganda (TYI-Uganda) offers a lens into how collaborative platform development can respond to pressing social challenges while also building long-term  organizational capacity.

The post WHEN SAFETY IS POLITICAL: DIGITAL SOLUTIONS FOR TRANS YOUTH IN UGANDA appeared first on The Engine Room.

Monday, 07. July 2025

FIDO Alliance

The Hacker News: Microsoft Removes Password Management from Authenticator App Starting August 2025

Microsoft has said that it’s ending support for passwords in its Authenticator app starting August 1, 2025.

Microsoft’s move is part of a much larger shift away from traditional password-based logins. The company said the changes are also meant to streamline autofill within its two-factor authentication (2FA) app, making the experience simpler and more secure.

Over the past few years, Microsoft has been pushing for a passwordless future using technologies like passkeys, Windows Hello, and FIDO2-based authentication. These methods offer better protection against phishing and password reuse, which are still major attack vectors. While it may feel like a hassle at first, this change is actually aimed at reducing your risk in the long run.


Oasis Open

Invitation to comment on OpenDocument Version 1.4 before call for consent as OASIS Standard – ends September 7th

OASIS Members and other interested parties,

OASIS and the Open Document TC [1] are pleased to announce that Open Document Version 1.4 CS01 is now available for public review and comment.

The OpenDocument Format (ODF) is an open XML-based document file format for office applications to be used for documents containing text, spreadsheets, charts, and graphical elements. The file format makes transformations to other formats simple by leveraging and reusing existing standards wherever possible. As an open standard under the stewardship of OASIS, OpenDocument also creates the possibility for new types of applications and solutions to be developed other than traditional office productivity applications.
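
Because ODF packages are ZIP archives of XML parts (see Part 2: Packages below), a quick look inside one needs nothing beyond a standard library. A minimal sketch in Python, where "example.odt" is a placeholder filename:

```python
# List the parts of an ODF text document and confirm its media type.
import zipfile

with zipfile.ZipFile("example.odt") as odf:
    print(odf.namelist())  # typically: mimetype, content.xml, styles.xml, meta.xml, ...
    print(odf.read("mimetype").decode("ascii"))  # application/vnd.oasis.opendocument.text
```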

The TC received three Statements of Use from Microsoft, Allotropia Software GmbH, and The Document Foundation [3].

The candidate specification and related files are available here:

OpenDocument Version 1.4 
Part 1: Introduction
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.pdf

OpenDocument Version 1.4 
Part 2: Packages
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.pdf

OpenDocument Version 1.4
Part 3: OpenDocument Schema
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.pdf

OpenDocument Version 1.4
Part 4: Recalculated Formula (OpenFormula) Format
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.pdf
Schema files are located here.

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at: OpenDocument-v1.4-cs01.zip

Members of the Open Document TC approved this specification by Special Majority Vote [2]. The specification had been released for public review as required by the TC Process.

Public Review Period

The 60-day public review is now open and ends 7 September 2025 at 23:59 UTC.

This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [4], applicable especially [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

========== Additional references:

[1] OASIS Open Document TC

[2] Approval ballot

[3] Links to Statements of Use
Microsoft:
https://groups.oasis-open.org/discussion/statement-of-use-from-microsoft
Allotropia Software GmbH:
https://groups.oasis-open.org/discussion/statement-of-use-for-open-document-format-for-office-applications-opendocument-version-14-cs01
Document Foundation:
https://groups.oasis-open.org/discussion/fwd-statement-of-use-odf-14

[4] https://www.oasis-open.org/policies-guidelines/ipr/

[5] https://www.oasis-open.org/committees/office/ipr.php
Intellectual Property Rights (IPR) Policy

The post Invitation to comment on OpenDocument Version 1.4 before call for consent as OASIS Standard – ends September 7th appeared first on OASIS Open.

Friday, 04. July 2025

Human Colossus Foundation

DKMS Alliance Kick-off in Geneva: Building Now the Future of Digital Trust

On June 30th, Geneva became the stage for an informal event: the kick-off meeting of the DKMS Alliance. This gathering marks the official launch of a collaborative initiative poised to reshape digital trust infrastructure for the next generation.

As Geneva’s international traditions lead the city to host the Global Digital Conference 2025, the DKMS Alliance aims to bring the open-source primitives that enable tomorrow’s digital authentication to enterprises and organisations, with the support, maintenance and stability they need to build interoperable digital solutions at scale.

The DKMS Alliance is a joint endeavor kicked off by Human Colossus Foundation, Vereign AG and argonAUTHs. Anchored in the architectural roots of KERI (Key Event Receipt Infrastructure), the Alliance unites deep technical expertise and strong operational capabilities to deliver to the world a new paradigm for secure, verifiable, and sovereign digital interactions.

A Shared Vision for Open and Secure Digital Infrastructure

The Alliance’s goal is to drive adoption, sustainability, and market readiness for digital infrastructure components that are open, secure, and industrial-grade. By combining DKMS, a Rust implementation of KERI, and the Overlays Capture Architecture (OCA) into production-ready resources, the Alliance gives its members a first-mover advantage in developing enterprise applications that scale. The DKMS Alliance aims to become a community that acts as a beacon of dynamic innovation and reliability, deploying resilient solutions built on top of a dependable digital trust infrastructure.

The DKMS Alliance represents today a powerful fusion of complementary strengths. From the Human Colossus Foundation’s pioneering work on the KERI protocol and in data semantics and governance, to Vereign’s production deployments in secure communications and argonAUTHs’ cryptographic agility and verifiability at scale, the founding organizations bring together decades of experience in building trust frameworks, decentralized identity, secure communication, and verifiable data ecosystems.

Why Now?

In a world increasingly challenged by fragmented approaches to digital trust, vendor lock-in, and fragile ecosystems, the DKMS Alliance provides a unified, community-driven foundation. By offering stable APIs, modular architectures, and predictable release cycles, the Alliance ensures that the core infrastructure remains robust and future-proof.

The Alliance also emphasises transparent governance, which will be built with Alliance members. This commitment to openness and quality positions it to become the go-to stack for governments and enterprise innovators looking to implement decentralised trust layers.

The Kick-off: Setting the Stage for Collective Action

The Geneva meeting brought together a mix of technical leaders, strategists, funders and early adopters to align on shared goals, identify immediate priorities, and announce a roadmap for 2025 and beyond. It marked the transition from vision to execution. Focused action points emerged to accelerate engineering efforts, develop governance structures, and expand outreach to early members.

As an early member of the DKMS Alliance, organizations have the unique opportunity to influence the evolution of specifications and implementations, gain early access to deliverables, and secure premium support from the visionary leaders shaping tomorrow’s digital trust landscape.

Join Us on This Journey

The DKMS Alliance is more than just a technical project — it’s a call to collective stewardship of the global digital commons. By joining, organizations not only protect critical dependencies but also demonstrate leadership in privacy, security, and data sovereignty. As a renegade from the current mainstream, you become, within your domain, the reputable authority on tomorrow’s digital architecture.

We invite you to become part of the transformative movement. Check our Prima Vista document on how you can join this journey. Together, we can build the resilient, trustworthy, and open digital infrastructure the world urgently needs.

Stay tuned — more information, updates, and opportunities to engage with the DKMS Alliance will be coming soon!


MyData

Intermediaries in Network-based Ecosystems

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. Every day, billions of […]

Thursday, 03. July 2025

Trust over IP

ToIP EGWG 2025-06-26: Aakash Guglani, Enhancing Trust using Digital Public Infrastructure (DPI)

Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/ToIP-EGWG-2025-06-26_-Aakash-Guglani-Enhancing-Trust-using-Digital-Public-Infrastructure-DPI.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Briefing Document: India’s Digital Public Infrastructure (DPI)

Date: June 26, 2025
Source: Excerpts from Trust over IP Ecosystems and Governance Working Group Meeting
Presenter: Aakash Guglani from the Digital India Foundation.
Topic: Enhancing Trust using Digital Public Infrastructure (DPI)
Excerpt: India’s Digital Public Infrastructure (DPI) and Unified Payment Interface (UPI) revolutionize financial inclusion, bringing over 500 million people into the digital economy. This open-source, mobile-first system builds trust and breaks traditional payment monopolies.

Executive Summary

This briefing document provides an overview of India’s Digital Public Infrastructure (DPI), focusing on its development, impact, and future direction as presented by Aakash Guglani from the Digital India Foundation. The DPI, particularly the Unified Payments Interface (UPI) and Aadhaar, has been instrumental in formalizing India’s economy, empowering its vast and diverse population, and challenging traditional payment monopolies. It represents a unique, policy-driven approach that blends public and private sector participation, aiming for broad inclusivity rather than just serving the affluent.

I. Core Philosophy and Context

Addressing India’s Unique Challenges: India’s DPI was developed to address the needs of its “100 million plus” households, representing “500 million people” who were largely deprived and excluded from the burgeoning digital economy. Traditional private platforms primarily targeted the “top 1%,” while public approaches lacked state capacity and risked centralization. (00:07:39, 00:10:50)

A “Digital Public Infrastructure” Approach: India adopted a digital public infrastructure model, which emphasizes diversity, choice, openness, and sovereignty. It’s a collaborative effort involving both public and private sectors. (00:11:08)

Contrast with Other Models: Unlike purely private platforms or a “totally state-led model” like China’s, India’s DPI is designed to be a democratic solution. “We could not go with private platforms, and we never had the state capacity like China to go for a totally state-led model.” (00:09:30, 00:16:40)

II. Key Components and Their Impact

A. Aadhaar: Unique Digital Identity

Foundation of DPI: Over the last 10 to 15 years, India has established “more than 1.3 billion Aadhaar,” a unique digital identity based on 10 biometrics (fingerprints and iris scan) for “99% of Indians.” (00:11:37, 00:11:50)

Enabled EKYC: Aadhaar facilitated “EKYC (Know Your Customer),” drastically reducing the cost of customer acquisition for banks; the cost of KYC dropped from “$25 to $1.” (00:12:02, 00:21:16, 00:21:21) This cost reduction made it economically viable for banks, previously reluctant due to high offline verification costs, to open “zero-cost accounts” for the rural and unbanked population. (00:21:21, 00:21:29)

Broad Financial Inclusion: This initiative, known as “JanDhan Yojna,” led to “more than 500 million people [having] bank accounts, which is totally zero cost.” (00:19:53, 00:21:10)

B. Unified Payments Interface (UPI)

Solution to Payment Friction: UPI addressed significant problems in India’s financial landscape, including limited digital transactions, high costs for credit card usage, reliance on paper receipts, and lack of “information collateral” for poor individuals seeking credit. (00:15:08, 00:16:56)

Massive Adoption and Interoperability: As of March 2025, UPI recorded “more than 18 billion transactions.” (00:18:30) Over 200 banks are integrated, and it’s “totally interoperable.” A payment between a customer from Bank A and a merchant from Bank B takes “less than 10 seconds” via a QR code scan. (00:18:37, 00:18:42, 00:18:54) There is “no MDR (Merchant Discount Rate)” for most transactions, which incentivizes small vendors like vegetable sellers to adopt digital payments. (00:19:04, 00:19:15) “Every part of the country is using UPI,” demonstrating “deeper usage” even in poorer regions. (00:31:47, 00:31:52)

Economic Empowerment: For retailers, UPI reduces the “cost of business” by allowing them to “service 3-4 people at a time” without handling cash and change. (00:29:21) It empowered women by ensuring that earnings from their stores went directly to their bank accounts, preventing misuse of physical cash. (00:30:13, 00:30:24)

Challenging Global Monopolies: UPI has fundamentally reshaped the global payment landscape. It involves a diverse range of participants, including “Visa, Mastercard, Google Pay, Phone Pay, American Express.” (00:33:45, 00:33:50) “It is not dominant by two major players,” effectively “break[ing] monopoly of private players.” (00:33:50, 00:33:58) UPI has “crossed the transaction volumes of Visa and Mastercard combined.” (00:34:14) “46% of real-time payments across the world happen out of India.” (00:34:21)

Beyond the “Bottom of the Pyramid”: DPI, including UPI and EKYC, has also benefited the middle and upper-middle classes. The reduced KYC cost facilitated the opening of “more than 190 million DMAT accounts” (online stockbroking accounts) between 2016 and 2024, many for “first-time users.” (00:34:43, 00:35:08, 00:35:14, 00:35:52) This formalizes the economy and encourages savings.

III. Enablers and Policy Framework

Government Subsidies: An ongoing annual government subsidy of “$0.2 billion for MDR” (Merchant Discount Rate) for UPI usage is in place. This is viewed as an investment in “public railroad” or “public highway” infrastructure due to its immense “inclusion value.” (00:40:41, 00:40:54, 00:41:09)

Agile Policy: The success of DPI is underpinned by “agile policies.” Examples include the central bank’s allowance for EKYC (2012-2013) (00:45:04); an IT Act making “digital documents equivalent to offline documents” for digital locker services (00:45:21); the RBI’s Digital Payments Committee recommending UPI (00:45:49); and government procurement shifting to “totally digitally” platforms using UPI (00:46:39).

Leadership and Trust: The Prime Minister’s public use of UPI and foreign leaders using it in India “enhances trust in people,” creating a “virtuous cycle.” (00:47:01)

Mobile-First, Low-Tech, Multilingual: The platforms are designed to be mobile-first, multilingual, and low-tech, supporting feature phones and even “offline availability.” (00:39:31, 00:39:38)

Internet Penetration: Government efforts to provide “free internet” increased coverage from 25% to 48%, a “major contributor for UPI.” (00:33:11)

Indigenous Development: All platforms are “indigenously made,” addressing specific Indian problems. (00:39:38)

IV. Future Directions and Challenges

Next Stage: Credit Enablement: With 500 million bank accounts and 1.2 billion digital identities, the focus shifts to empowering users by providing credit. (00:36:00, 00:36:26)

Account Aggregator: A framework allowing individuals to consent to share their financial data between banks, mutual funds, insurance providers, etc., using the EKYC framework, to facilitate consumer and business loans. (00:36:41, 00:36:53)

Open Credit Enablement Network (OCEN): This leverages the transaction data of informal vendors (e.g., vegetable sellers), who now have “information collateral” on their transaction frequency and average ticket size, making it “easier for banks to also financialize them.” (00:37:29, 00:37:35, 00:37:49) This moves DPI beyond just a base to “formalize the economy.” (00:37:49)

Global Export of DPI: India aims to share its open-source DPI frameworks, including UPI APIs on GitHub, “freely to anyone around the world.” This includes cross-border payment solutions and sharing its vaccination system. (00:38:16, 00:38:34, 00:38:42)

Future Open Data Platforms: The “India Stack was the beginning,” with future initiatives including digital commerce, online financial information sharing, online credit availability, digitization of health records, AI-based voice, and digital skills. (00:38:54, 00:39:04)

Tech Sovereignty: India advocates for “tech sovereignty” to avoid reliance on foreign private entities that could “cut off my access” due to geopolitical tensions, as seen in the Russia-Ukraine war. The aim is to prevent private companies from engaging in “geopolitical positioning.” (00:48:14, 00:49:06, 00:49:32, 00:49:45)

Fraud and Cybersecurity: With massive scale, the system faces “more than a million cyberattacks” annually, including those from state and non-state actors. (00:51:38, 00:51:49, 00:51:53) Common frauds include “digital arrest” scams using AI-based voice to demand UPI transfers. (00:53:00, 00:53:07) RBI is implementing measures like mandating brokers to use “authenticated UPI IDs,” displaying the recipient’s name before transfer, and nudging users not to pay while on a call. (00:55:54, 00:56:03, 00:56:10) The direct-debit nature of UPI makes reversing fraudulent transactions difficult once money is transferred. (00:54:47) Ongoing efforts include public advisories and potential future escrow services. (00:53:23, 00:57:00)

V. Conclusion

India’s DPI journey, marked by the success of Aadhaar and UPI, demonstrates a powerful model for digital inclusion and economic formalization in a democratic context. By leveraging agile policy-making, strategic government investment, and open-source technology, India has built a robust digital public good that empowers its citizens and offers significant lessons for the Global South. The ongoing challenge lies in mitigating fraud and expanding the system to foster greater financial empowerment through credit, while maintaining technological sovereignty.

For more details, including the slides, meeting recording and transcript, please see our wiki: 2025-06-26 Aakash Guglani & Enhancing Trust using Digital Public Infrastructure (DPI)

https://www.linkedin.com/in/aakashguglani/
https://digitalindiafoundation.org/

The post ToIP EGWG 2025-06-26: Aakash Guglani, Enhancing Trust using Digital Public Infrastructure (DPI) appeared first on Trust Over IP.

Monday, 30. June 2025

Human Colossus Foundation

HCF to Participate in GC25: Advancing Global Digital Collaboration on Wallets & Credentials

We’re excited to announce that Human Colossus Foundation (HCF) will participate in GC25: Global Digital Collaboration on Wallets & Credentials, taking place on July 2, 2025. This important event, hosted by the Swiss Confederation, brings together leading organizations and innovators to shape the future of digital wallets, credentials, and interoperable identity systems.

At HCF, we believe that empowering individuals to control their own digital space is foundational to building a sustainable and trustworthy digital society. This vision is captured in our ongoing work around the concept of the Digital Self, as discussed in our blog post Self Actioning System, a preferred systematic embodiment of “Digital Self”. Central to this idea is the belief that enabling global digital collaboration can only emerge when individuals have sovereignty over their credentials and interactions.

HCF on Governance in Digital Trade

As part of GC25, HCF will also participate in the panel discussion “Governance in Digital Trade – Decentralization as Response to Challenges in a Multi-Polar World.” Moderated by Stephan Wolf of the Verifiable.Trade Foundation, and including the Digital Governance Institute, FIWARE, and the Asia PKI Consortium, the panel will bring forward the necessity of digital governance that can account for the sovereignty of peer-to-peer connections in a multi-polar world.

The world is changing rapidly, placing supply chains and financial systems under increasing pressure. In a multi-polar landscape shaped by shifting tariffs and transformative AI, flexibility and speed have become essential. Digitalisation and open networks provide a clear path to inclusive global market participation. However, governance remains a largely overlooked but crucial topic.

This session will explore current initiatives but also deal with challenging questions. By examining these questions from different perspectives, the panel aims to bridge public and private demand to address the complexities of modern global trade.

HCF will contribute its perspective on how decentralized governance models, such as those enabled by the Dynamic Data Economy, can empower organizations and individuals alike.

Digital Self: Enabling Individual Agency

The Digital Self represents a shift from centralized data control to individual empowerment, allowing each person to define, manage, and protect their digital identity and data assets. Events like GC25 are critical because they convene diverse stakeholders to co-create frameworks and standards that make this shift possible — from verifiable credentials to interoperable wallets.

Introducing the Dynamic Data Economy

HCF is contributing to this discussion through the introduction of the Dynamic Data Economy (DDE), a groundbreaking approach outlined in our launch announcement. The DDE offers an infrastructure where data is not merely a static asset but a dynamic, contextual element that individuals can control and share on their terms.

This model supports privacy, promotes innovation, and opens the door to new economic models centered around consent and transparency — all essential ingredients for a human-centric digital future.

Looking Ahead

Our participation in GC25 aligns with our mission to advance data governance standards that prioritize individual autonomy. By collaborating with global leaders at this event, we aim to further the conversation on building infrastructures that respect and reinforce the Digital Self.

We invite all who share this vision to join us in exploring how we can collectively build a more equitable, dynamic, and user-controlled digital ecosystem.

Stay tuned for more updates from GC25 and beyond!

Friday, 27. June 2025

FIDO Alliance

PCmag: This Password Manager Now Lets You Create an Account Without a Password

Dashlane lets you open an account with a FIDO2-spec USB security key as your authentication.

One of the better-known password managers is now inviting people to try it without having to create yet another password. Instead, Dashlane is now inviting people to try opening a new account secured only by a USB security key compliant with the “FIDO2” authentication standard; FIDO being short for Fast Identity Online.

Emphasize “try.” The company’s support page for this “early access” program notes that it supports only Google Chrome and Microsoft Edge, not Dashlane’s mobile apps. For now, it doesn’t let you create an account secured only by a passkey, the form of FIDO2 security more people use. 

The page also highlights a warning that this is an early-days exercise: “Important: Accounts created as part of the early access program are for testing purposes only. We recommend using your primary Dashlane account to store and manage your data.”


ZDNET: Facebook’s new passkey support could soon let you ditch your password forever

For all of us who hate passwords, passkeys represent a simpler and safer way of authenticating online accounts. But adoption has been slow, with many companies and websites still relying on passwords. Now the world’s biggest social media platform is jumping on the bandwagon.

On Wednesday, Facebook announced that it’s now rolling out support for passkeys on mobile devices. This means you’ll be able to use one to sign in to Facebook on an iPhone or Android device. But the passkey won’t be limited to your actual Facebook account.


Expert Insights Podcast: #64 – Passwordless Authentication and the Rise of Passkeys

Andrew Shikiar, Executive Director and CEO of the FIDO Alliance, joins us to discuss the shift from passwords to passkeys and the role of FIDO in driving secure, passwordless authentication. He explores the challenges of adoption, the importance of identity verification, and how cross-platform interoperability is accelerating passkey use. The conversation also touches on the impact of generative AI on cybersecurity and what the future holds for passkeys in building long-term resilience.


Ars Technica: Coming to Apple OSes: A seamless, secure way to import and export passkeys

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to […]

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to credential phishing and other attacks targeting passwords.

The import/export feature, which Apple demonstrated at this week’s Worldwide Developers Conference, will be available in the next major releases of iOS, macOS, iPadOS, and visionOS. It aims to solve one of the biggest shortcomings of passkeys as they have existed to date. Passkeys created on one operating system or credential manager are largely bound to those environments. A passkey created on a Mac, for instance, can sync easily enough with other Apple devices connected to the same iCloud account. Transferring them to a Windows device or even a dedicated credential manager installed on the same Apple device has been impossible.


Hyperledger Foundation

Building Trust with Code: How DSR, Hedera, and LF Decentralized Trust, Are Advancing Decentralized Identity

A recent LF Decentralized Trust (LFDT) webinar highlighted a significant advancement in self-sovereign identity (SSI) infrastructure: the release of a new plugin enabling integration between Hedera and the ACA-Py framework. This plugin, developed by DSR Corporation, has been open sourced under the OpenWallet Foundation and extends ACA-Py with support for the Hedera network. Complementing this, a new Python SDK, now part of the LFDT-hosted Hiero project, provides developers with tools to manage Decentralized Identifiers (DIDs) and Hyperledger AnonCreds Verifiable Credentials on Hedera using the Hedera Consensus Service. Together, these contributions—spanning both the OpenWallet and Hiero ecosystems—anchor critical identity metadata to Hedera’s high-throughput, public distributed ledger and expand the reach of open source digital identity tooling.

Thursday, 26. June 2025

Oasis Open

Coalition for Secure AI Welcomes Palo Alto Networks and Snyk, Advances AI Security with New Publication and Workstream

Whitepaper on AI Supply Chain Risks and Agentic Systems Workstream Strengthen the Coalition's Impact on Secure AI Development

Boston, MA 26 June 2025 – The Coalition for Secure AI (CoSAI), an OASIS Open Project dedicated to advancing AI security, proudly welcomes Palo Alto Networks and Snyk as Premier Sponsors. Their commitment reinforces CoSAI’s rapidly expanding network of 45 partner organizations united in the mission to advance secure and trustworthy AI. This growth comes as CoSAI deepens its technical leadership with the release of a whitepaper focused on securing the AI software supply chain and the launch of a dedicated workstream on secure agentic system design, addressing the security implications of increasingly autonomous AI systems.

Expanding Industry Support

“Securing AI is one of the most urgent challenges facing the industry today,” said Sam Kaplan, Assistant General Counsel for Public Policy & Government Affairs, Palo Alto Networks. “By joining CoSAI, Palo Alto Networks is proud to support open collaboration that empowers developers to embed security from the start.”

Manoj Nair, Chief Innovation Officer, Snyk, added, “As AI transforms the cybersecurity landscape, proactive security standards are essential. Snyk is excited to contribute to CoSAI’s growing efforts to develop practical, open tools for safer AI adoption.”

Palo Alto Networks and Snyk join CoSAI’s distinguished group of Premier Sponsors – including EY, Google, IBM, Microsoft, NVIDIA, PayPal, Protect AI, Trend Micro, and Zscaler – united in accelerating the development of secure and responsible AI across industries.

New Landscape Paper: Addressing Security Risks in the AI Supply Chain

CoSAI’s first landscape paper, Establish Risks and Controls for the AI Supply Chain, explored further in our blog post, was developed through cross-sector collaboration to help teams integrate security at every stage of the AI system lifecycle, from design to deployment. Published by CoSAI’s Workstream 1: Software Supply Chain Security for AI Systems, the paper examines the unique supply chain security risks of AI systems, focusing on data, infrastructure, applications, and models, and highlights the need for specialized safeguards beyond traditional security practices. It also outlines key vulnerabilities, roles across stakeholder groups, and evaluates existing frameworks like SAIF, MITRE ATLAS, and OWASP AI Exchange to identify gaps and guide a more resilient, secure AI development lifecycle.

“This landscape paper is a significant step forward in AI security, offering comprehensive risk overview and practical protection strategies,” said Matt Maloney, Manager of Technical Staff at Cohere and CoSAI Workstream 1 Lead. “It’s an important resource mapping supply chain risks and highlighting where traditional controls fall short,” added Andre Elizondo, Principal Solutions Engineer at Wiz and CoSAI Workstream 1 Lead. Both emphasized the paper’s evolution alongside emerging AI agent technologies.

Introducing a New Workstream on Secure Agentic System Design

To address the growing need for secure-by-design approaches to autonomous AI, CoSAI has launched Workstream 4: Secure Design Patterns for Agentic Systems. This new track focuses on developing security models and architectural guidance for agentic systems, including updates to AI threat modeling, secure infrastructure design, and cross-system integration.

The addition of Workstream 4 complements CoSAI’s three existing workstreams:

Workstream 1: Software Supply Chain Security for AI Systems
Workstream 2: Preparing Defenders for a Changing Cybersecurity Landscape
Workstream 3: AI Security Risk Governance

“The launch of a workstream focused on Secure Agentic System Design reinforces CoSAI’s commitment to addressing the growing complexity of AI-driven autonomous systems,” said Workstream 4 Leads Sarah Novotny, Independent Consultant, and Ian Molloy, Department Head, Securing AI, IBM Research. “As agentic systems become more capable and pervasive, it becomes critical to ensure they are built on a foundation of security, transparency, and accountability.”

Get Involved 

CoSAI welcomes technical contributors, researchers, and organizations to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

Media Inquiries: communications@oasis-open.org

The post Coalition for Secure AI Welcomes Palo Alto Networks and Snyk, Advances AI Security with New Publication and Workstream appeared first on OASIS Open.


FIDO Alliance

Blog: 2025 FIDO India Working Group Member Meetup and Workshop

The FIDO Alliance hosted its annual India Working Group (FIWG) Member Meetup & Workshop on June 6, 2025, at Google’s Ananta campus in Bengaluru. With over 100 attendees representing leading technology companies, financial service providers, telecom operators, government agencies, retailers, and platform providers, the event served as an important forum for advancing phishing-resistant, passwordless authentication efforts in India.

The program began with welcome remarks from FIWG Chair Niharika Arora (Google) and Vice Chair Tapesh Bhatnagar (G+D), followed by a keynote address from FIDO Alliance President Sam Srinivas. Sam’s session offered a forward-looking overview of the global FIDO roadmap, covering recent progress in passkey adoption, certification, and platform interoperability, highlighted by many attendees as one of the most valuable sessions of the day.

Throughout the morning, FIWG member organizations shared implementation case studies drawn from real-world deployments. Christopher Clement Soris of Zoho Corporation presented lessons from integrating passkeys into enterprise workflows, emphasizing developer enablement and user trust. Vishu Gupta, Piyush Ranjan, and Deepak Singal of Times Internet shared insights into their journey deploying passkeys across consumer media platforms, with a focus on UX challenges and account recovery design. Shantanu Shirke of Mastercard showcased its Global Financial Framework (GFF) through a live demo of its FIDO-enabled authentication solution, and Simon Trac Do of VinCSS introduced applications of the FIDO Device Onboard (FDO) protocol for secure and scalable onboarding of IoT devices.

The Google Android and Chrome teams, including Niharika Arora, Eiji Kitamura, and Neelansh Sahai, provided updates on platform support for passkeys, highlighting recent enhancements to Android APIs, Chrome UX flows, and best practices for relying parties. These updates offered implementers concrete guidance on leveraging native OS features to enable seamless, secure sign-ins.

The event concluded with a panel discussion titled “Modern Authentication Meets Legacy Systems,” moderated by Niharika Arora and featuring Amit Mathur (Ensurity), Rooparsh Kalia (Mercari), Rahul Dani (Yubico), Tom Sheffield (Target), and Sam Srinivas (Google). The discussion addressed practical challenges in deploying FIDO-based authentication in environments with legacy infrastructure, including backward compatibility, account recovery, and risk trade-offs. Panelists shared candid reflections and emphasized the importance of phased integration strategies and cross-industry collaboration.

Compared to the 2024 edition, this year’s workshop reflected a clear evolution, from awareness-building to implementation maturity. While last year’s focus was largely on introducing the promise of passkeys and FIDO standards, the 2025 program emphasized operational insights, technical execution, and collaborative solutions.

[Watch the Highlight Video]

Through post-event surveys, participants expressed strong appreciation for the event’s practical focus, noting the value of detailed case studies, direct access to platform teams, and the opportunity to connect with peers tackling similar challenges. Many described the in-person format as especially effective for fostering shared understanding and building momentum.

As passkey adoption continues to accelerate across India, the India Working Group remains a vital platform for aligning implementation efforts, exchanging knowledge, and enabling long-term deployment success. Sincere thanks to all speakers, panelists, and attendees for their contributions, and to Google for hosting us in Bengaluru. We look forward to continuing this important work together throughout 2025 and beyond.
Read last year’s recap: 2024 FIDO Alliance India Working Group Meetup and Workshop


Velocity Network

South Florida Leaders Gather to Explore the Future of Learning, Hiring, and Innovation

The post South Florida Leaders Gather to Explore the Future of Learning, Hiring, and Innovation appeared first on Velocity.

Wednesday, 25. June 2025

FIDO Alliance

MobileIDWorld: Apple Introduces Cross-Platform Passkey Import/Export Features Across Operating Systems

Apple has announced significant enhancements to its operating systems that will implement secure import and export capabilities for passkeys, building on the company’s ongoing efforts to eliminate traditional passwords. The new features match standards developed by the FIDO Alliance for cross-platform credential management, joining similar initiatives from Microsoft and Google in the push toward passwordless authentication.

The new implementation will enable seamless and secure transfer of passkeys across platforms, addressing previous limitations in transferring credentials between devices and applications. The system uses a standardized data schema developed by the FIDO Alliance to ensure compatibility between different credential manager apps across iOS, iPadOS, macOS, and visionOS 26. The standardization is particularly significant as password-based attacks continue to rise, pushing the industry toward more secure authentication methods.


Techopedia: FIDO2 & Passkeys: The Future of Passwordless Authentication

Passwordless authentication has picked up in recent years. But the method drawing the most interest in security circles is physical security keys based on the FIDO2 standard.

These USB or NFC keys offer something beyond the usual passwordless methods, like synced device passkeys or biometric logins. Here, you’re not relying on cloud-stored credentials or browser memory. Instead, everything depends on holding the key and verifying it with something only you know, like a PIN or fingerprint.
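
A toy illustration of why binding the response to the origin defeats phishing. (Real FIDO2 uses per-site public-key credentials and signatures via WebAuthn/CTAP, not the HMAC stand-in below; this only sketches the idea.)

```python
# Toy model: the "key" signs a challenge together with the origin it sees,
# so a response produced for a phishing domain never verifies for the real one.
import hashlib, hmac, os

secret = os.urandom(32)  # stays inside the security key in the real protocol

def respond(challenge: bytes, origin: str) -> bytes:
    return hmac.new(secret, challenge + origin.encode(), hashlib.sha256).digest()

challenge = os.urandom(16)
real = respond(challenge, "bank.example")
assert hmac.compare_digest(real, respond(challenge, "bank.example"))      # legitimate site verifies
assert not hmac.compare_digest(real, respond(challenge, "evil.example"))  # phishing site fails
```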

This shift to hardware security keys is gaining momentum across industries. Dashlane, for instance, has just rolled out an update that enables users to make a FIDO2 key their main passwordless login for unlocking credential vaults.

In this article, we explore where passwordless authentication stands today, what makes physical keys different, and how platforms are handling the hard parts like recovery, usability, and long-term security.


Next Level Supply Chain Podcast with GS1

From Red Tape to Resilience: How Regulation Shapes Supply Chains with Maggie Lyons

Confused by the new regulations and a patchwork of state-level policies? With a new administration setting fresh policy priorities, supply chains are facing shifting rules and growing pressure to adapt. Maggie Lyons, Vice President of Government and Regulatory Affairs at GS1 US, joins hosts Reid Jackson and Liz Sertl to decode the changes affecting how products are made, moved, and sold, and w

Confused by the new regulations and a patchwork of state-level policies?

With a new administration setting fresh policy priorities, supply chains are facing shifting rules and growing pressure to adapt.

Maggie Lyons, Vice President of Government and Regulatory Affairs at GS1 US, joins hosts Reid Jackson and Liz Sertl to decode the changes affecting how products are made, moved, and sold, and what businesses can do to stay ahead. From SNAP waivers and red dye bans to extended producer responsibility (EPR) laws and 2D barcodes, this episode breaks down how government decisions are impacting daily operations across food, retail, and consumer packaged goods (CPG).

Maggie’s team works with policymakers and industry leaders to align mandates with existing systems, helping avoid duplication and enabling efficient, standards-based implementation.

In this episode, you’ll learn:

How state-level regulation is influencing national supply chain strategies

Why new ingredient bans could create a ripple effect across CPG brands

What you can do to stay ahead of policy changes impacting your industry

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:07) Why GS1 built a policy team

(04:02) From Capitol Hill to CPG strategy

(06:34) Staying focused amid constant regulatory shifts

(08:48) Government agencies shaping supply chain standards

(10:38) Customs, tariffs, and food assistance priorities

(14:59) How SNAP waivers complicate retail operations

(17:57) What red dye bans mean next

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Maggie Lyons on LinkedIn

Tuesday, 24. June 2025

FIDO Alliance

asmag Security & IoT: Passwordless authentication: From trend to ‘strategic imperative’

For modern IT or Internet users, logging in to a website or app using a password is all too familiar. Increasingly, however, passwords create security concerns as they can be […]

For modern IT or Internet users, logging in to a website or app using a password is all too familiar. Increasingly, however, passwords create security concerns as they can be easily cracked or stolen. This is where passwordless authentication provides a more secure and convenient alternative.

“Password fatigue is real. Users demand faster, frictionless ways to authenticate without remembering complex strings,” said Edwardcher Monreal, Principal Solutions Architect for IAM Consumer Authentication Solutions at HID.


Digital Identity NZ

Ready…Reset…Go!

Matariki is a time for remembrance, celebration, and looking toward the future. Traditionally, Matariki was a time to honour ancestors, celebrate the harvest, and acknowledge the changing seasons. The post Ready…Reset…Go! appeared first on Digital Identity New Zealand.

Kia ora,

Matariki is a time for remembrance, celebration, and looking toward the future. Traditionally, Matariki was a time to honour ancestors, celebrate the harvest, and acknowledge the changing seasons.

Today, we stand at a pivotal moment where two transformative technologies – digital identity and artificial intelligence – are reshaping the fabric of society. As the marginal cost of knowledge approaches zero, decentralised systems offer abundance created through co-operation, rather than a more traditional scarcity-based approach.

For DINZ members, the adoption playbook demands conscious choices: about focus, trade-offs, standards, and sovereignty. As local momentum builds around the world’s first decentralised credential identity ecosystem, the question is no longer whether disruption will occur – but how we collectively shape it, and who it serves. Perhaps Moby put it best in his 2002 song “We are all made of stars.”

It is hard to believe we are still using multi-password sign-on and facsimiles of identity proofs in an era where nearly everything we do is online. For those frustrated by the pace of change, it is important to remember we’re building a system for 30 years, of which the past five years has been very much design and build. The next five years will be about driving adoption.

With a tsunami of digital credentials hitting global markets, the race to be “first to issue” DISTF-accredited credentials is on! At the starting line are the DIA, NZTA, Air NZ, and Hospitality NZ, alongside other go-to-market models such as Apple and Google’s online proxies.

Techweek Highlights

Those fortunate enough to attend Digital Public Infrastructure: The Invisible Foundation for NZ’s Digital Future in the General Assembly room in Parliament would have heard Minister Collins deliver the powerful message that “digital identity is the key to unlocking productivity in New Zealand.” 

The Minister’s call to action was compelling, though whether centralised apps are the answer for a citizen-centric experience at this particular technology inflection point remains open for debate.

It is encouraging to see a shared understanding that the implementation of digital public infrastructure (DPI) will provide New Zealanders with an inclusive, future-ready trust layer that enhances privacy, security, provenance and fraud prevention, while preserving Aotearoa’s economic sovereignty. 

What’s more, an interoperable decentralised identity ecosystem will support the profound productivity improvements presented by hyperscale AI, without diminishing human agency by moving our data (and your soul) into someone else’s “cloud”.

Digital Trust Hui Taumata – Update

We have secured the world’s foremost native digital credential architect and builder James Monaghan to deliver the international keynote and host roundtable discussions at the Digital Trust Hui at Te Papa on 12 August, 2025.  

This must-attend identity conference promises to be more “Doey” than Hui. Our valued sponsors will showcase the most exciting identity projects in this space, and show how accelerating adoption of Trust Technology can help mitigate concerns around AI-related risks.

Roundtable discussions are currently being shaped around four key focus areas:

Regulatory barriers to change (public policy enablers such as omnibus bills to allow digital identity implementation)
Identity for natural persons (early adopters, privacy and taking control, proof of age, everyday applications, online verification, ID assurance hierarchy)
Identity for legal entities (identity as a service, AML, license to operate, compliance, monitoring, directory-enabled and real-world asset marketplaces)
Identity for machines (delegation, reputation, agents, bots, agentic commerce)

Assuming sufficient interest from government and industry sponsors, we plan to arrange an ecosystem design workshop with James Monaghan while he is in Wellington.

The Census Goes Digital: A Shift Towards Data-Driven Public Infrastructure

Stats NZ has officially retired the traditional census, opting instead for a new approach that leverages integrated administrative data. This marks a significant shift in how the government collects and uses trusted digital information – reinforcing the need for secure, privacy-preserving identity systems to ensure accuracy, inclusion, and transparency. It’s a timely reminder of the critical role digital identity plays in building smarter, citizen-centric public services. Read the full RNZ article.

Welcome to Our New Chair – Maria Robertson

Please join me in welcoming Maria Robertson as the new Chair of DINZ. You can read her introductory statement to the DINZ community on our website.

As we reset the DINZ playbook and hone our focus to accelerate adoption, Maria has already started to elevate our thinking thanks to her extensive experience across the public service, infrastructure and secondary industries.

Suffice to say, the first few weeks have been a baptism by fire, but I’m thoroughly enjoying being back in the identity services world at such a pivotal time. Your feedback is encouraged as we strive to fan the flames of adoption by issuers, holders and relying parties for the empowerment of all New Zealanders.

Mānawatia a Matariki,

Andy Higgs
Executive Director, Digital Identity NZ

Read full news here: Ready…Reset…Go!

SUBSCRIBE FOR MORE

The post Ready…Reset…Go! appeared first on Digital Identity New Zealand.


DIF Blog

DIF Labs Beta Cohort 2 Kick Off! 🚀

DIF Labs Beta Cohort 2 officially kicks off tomorrow, June 24th at 8 AM PST! This cohort brings together three projects that will advance privacy, legal frameworks, and governance in verifiable credentials. Meet Our Beta Cohort 2 Projects Legally-Binding Proof of Personhood for Verifiable Credentials via QES Led by Jon

DIF Labs Beta Cohort 2 officially kicks off tomorrow, June 24th at 8 AM PST! This cohort brings together three projects that will advance privacy, legal frameworks, and governance in verifiable credentials.

Meet Our Beta Cohort 2 Projects

Legally-Binding Proof of Personhood for Verifiable Credentials via QES

Led by Jon Bauer and Roberto Carvajal

This project creates a standardized method to anchor Verifiable Credentials to legally recognized, high-assurance proof of an individual's identity through Qualified Electronic Signatures (QES). By leveraging eIDAS-recognized QES technology, this work will enable any W3C Verifiable Credential to carry the same legal weight as a handwritten signature.
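The exact binding is what the project will specify. Purely as a strawman, one can imagine the QES artifact riding along as a W3C VC evidence entry; every field name below is invented for illustration and is not the project's design.

```python
# Strawman only: a W3C-style VC carrying a QES artifact as an `evidence`
# entry. Field names here are invented for illustration; the actual
# binding is what this DIF Labs project is set up to specify.
import json

credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "PersonhoodCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {"id": "did:example:subject"},
    "evidence": [{
        "type": "QualifiedElectronicSignature",  # invented type name
        "qesProvider": "https://qtsp.example",   # eIDAS-qualified trust service
        "signedDocumentHash": "sha256-...",      # hash of the signed artifact
        "signatureFormat": "PAdES",              # one common QES format
    }],
}
print(json.dumps(credential, indent=2))
```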

Proposal: labs/proposals/beta-cohort-2-2025/legallybinding-vcs/legallybinding-vcs.md (decentralized-identity/labs on GitHub)

Privacy-Preserving Revocation Mechanism

Led by Kai Otsuki and Ken Watanabe

This research project delivers an analysis of privacy-preserving revocation mechanisms for W3C Verifiable Credentials. The team will catalog real-world revocation scenarios, benchmark cryptographic mechanisms including status lists, dynamic accumulators, zk-SNARK proofs, and short-term credentials, and provide an open-source prototype evaluating computational costs for Issuers, Holders, and Verifiers.
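For intuition on the lightest-weight option in that catalog, here is a minimal sketch of a status-list style check (in the spirit of the W3C Bitstring Status List): the issuer publishes a compressed bitstring and the verifier tests the bit at the credential's assigned index. Encoding details vary across spec versions, so treat this as illustrative only.

```python
# Minimal sketch of a status-list revocation check: the issuer publishes a
# gzip-compressed bitstring (base64url-encoded); each credential owns one
# bit index. Encoding details differ between spec versions; illustrative.
import base64
import gzip

def decode_status_list(encoded: str) -> bytes:
    padded = encoded + "=" * (-len(encoded) % 4)  # restore base64 padding
    return gzip.decompress(base64.urlsafe_b64decode(padded))

def is_revoked(encoded_list: str, index: int) -> bool:
    bits = decode_status_list(encoded_list)
    byte, offset = divmod(index, 8)
    return bool(bits[byte] >> (7 - offset) & 1)  # assumes MSB-first bit order

# Example: build a list with bit 3 set, then query it.
raw = bytearray(16)
raw[0] = 0b00010000  # bit index 3, MSB-first
encoded = base64.urlsafe_b64encode(gzip.compress(bytes(raw))).decode().rstrip("=")
print(is_revoked(encoded, 3))  # True
print(is_revoked(encoded, 4))  # False
```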

Proposal: labs/proposals/beta-cohort-2-2025/pp-revocation-mechanism/001_proposal.md (decentralized-identity/labs on GitHub)

Anonymous Multi-Signature Verifiable Credentials

Led by Seohee Park and Lukas Han

This protocol enables Verifiable Credential issuance that requires m-of-n multi-signature approval while maintaining anonymity of individual signers. Using Semaphore, it enables decentralized governance for VC issuance in organizations such as DAOs or government agencies, cryptographically proving sufficient participation without revealing participating member identities.
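As a toy model of the threshold logic (not the actual Semaphore circuits), the sketch below counts distinct valid approvals by nullifier, so the same member cannot approve twice and no individual identity is revealed; the proof check is a stand-in.

```python
# Toy sketch of m-of-n approval gating for credential issuance. In the real
# design, `proof` would be a Semaphore zero-knowledge membership proof and
# `nullifier` a value unlinkable to the member yet unique per vote; here
# both are stand-ins so the threshold logic is runnable.
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    nullifier: str   # unique per member per issuance; hides who signed
    proof: bytes     # placeholder for a ZK membership proof

def verify_membership_proof(proof: bytes, group_root: str) -> bool:
    # Stand-in: a real verifier checks the proof against the group's
    # Merkle root without learning which member produced it.
    return proof.startswith(b"ok:") and group_root in proof.decode()

def may_issue(approvals: list[Approval], group_root: str, m: int) -> bool:
    seen = set()
    for a in approvals:
        if a.nullifier in seen:
            continue  # the same member approving twice counts once
        if verify_membership_proof(a.proof, group_root):
            seen.add(a.nullifier)
    return len(seen) >= m

root = "group-merkle-root"
votes = [Approval(f"n{i}", f"ok:{root}".encode()) for i in range(3)]
print(may_issue(votes, root, m=3))  # True: three distinct valid approvals
```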

Proposal: labs/proposals/beta-cohort-2-2025/anon-multi-sig-vc/anon_multi_sig_vc_proposal.md (decentralized-identity/labs on GitHub)

Gratitude to Our Leadership Team

We extend our thanks to our project leads who will be driving these initiatives forward. Their expertise is essential to advancing the state of verifiable credentials technology.

We're grateful to our mentors who share their knowledge and experience with our cohort participants. You can learn more about our mentor network in our directory.

Recognition goes to our chairs who provide strategic guidance and oversight:

Andor Kesselman
Ankur Banerjee
Daniel Thompson-Yvetot

What's Next?

Tomorrow's kick-off session will bring together all participants to align on project goals, establish collaboration frameworks, and set the stage for three months of research and development. These projects address challenges in legal compliance, privacy preservation, and decentralized governance.

Stay tuned for updates as Beta Cohort 2 progresses.

DIF Labs continues to foster innovation in decentralized identity through collaborative research projects. Learn more about Lab's work at labs.identity.foundation.

Monday, 23. June 2025

Hyperledger Foundation

Blockchain Pioneers: Hyperledger Aries

As Hyperledger Foundation laid out in the Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well-governed open source projects. Through the evolution of the market, Hyperledger Foundation and, now, LF Decentralized Trust has been the home to a growing ecosystem of blockchain, identity, and related projects. 

As Hyperledger Foundation laid out in the Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well-governed open source projects. Through the evolution of the market, Hyperledger Foundation and, now, LF Decentralized Trust has been the home to a growing ecosystem of blockchain, identity, and related projects. 


Digital Identity NZ

Introductory Statement – Chair of the Executive Council, Digital Identity NZ

As the new Chair of the Executive Council of Digital Identity NZ, I am looking forward to leading a passionate and future-focused community working to build a trusted, inclusive, and interoperable digital identity ecosystem for New Zealand, building on the impressive work the Executive Council has done over the past few years. The post Introductory Statement – Chair of the Executive

As the new Chair of the Executive Council of Digital Identity NZ, I am looking forward to leading a passionate and future-focused community working to build a trusted, inclusive, and interoperable digital identity ecosystem for New Zealand, building on the impressive work the Executive Council has done over the past few years. 

With a background in executive leadership across public service and advisory roles, I have advocated for and helped deliver the transformative potential of digital identity as public infrastructure. Whether enabling seamless access to services, supporting mobility and consent, or underpinning trust in our digital economy, identity – in all of its forms – is foundational to a modern, resilient society. 

DINZ plays a crucial convening role across government, iwi, industry, and civil society. As Chair, my focus is to champion practical progress: supporting policy and technical frameworks that uphold te ao Māori perspectives on identity, ensuring identity solutions reflect the needs of all New Zealanders, and advocating for interoperability that positions us as globally connected and locally grounded.

Our mission is urgent and clear: to enable every person in Aotearoa to participate safely and confidently in the digital world. I look forward to working with all our members to realise that vision.

Maria Robertson.

The post Introductory Statement – Chair of the Executive Council, Digital Identity NZ appeared first on Digital Identity New Zealand.

Thursday, 19. June 2025

Trust over IP

How Can We Trust What We See Online? Here’s One Way Forward

In a world where AI can create photos, videos, and even voices that look and sound real, how do we know what to trust? Every day, more content we see online is generated or altered by AI. That’s not always a bad thing. AI can help us create amazing art, get work done faster, or imagine new possibilities. But it also opens the door to misinformation, impersonation, and confusion. When anyone

Source: Original LF Decentralized Trust post

Wenjing Chu, Chair of the AI and Human Trust Working Group at Trust over IP, an LF Decentralized Trust project | Jun 12, 2025

In a world where AI can create photos, videos, and even voices that look and sound real, how do we know what to trust?

Every day, more content we see online is generated or altered by AI. That’s not always a bad thing. AI can help us create amazing art, get work done faster, or imagine new possibilities. But it also opens the door to misinformation, impersonation, and confusion. When anyone can create content that looks authentic, how do we tell what’s actually real?

To enhance human trust in AI systems and explore how AI itself can be used to address complex trust challenges in digital ecosystems, Trust over IP (ToIP), a project of LF Decentralized Trust, has launched a new AI and Human Trust (AIM) Working Group. It builds on the work done over the past three years by ToIP’s AIM task force.

The recently released white paper from the working group, ToIP Trust Spanning Protocol (TSP): Strengthening Trust in Human and AI Interactions, offers a way forward for building, maintaining and verifying interactions involving AI technologies. It brings together three powerful tools, the Trust Spanning Protocol (TSP) [1], the C2PA Specification [2], and the work of the Creator Assertions Working Group (CAWG) [3], to build a system of authenticity for the digital world.

The key components include:

TSP (Trust Spanning Protocol) provides a strong foundation for online trust between people, platforms, and tools—making sure that when something claims to come from someone, it actually does. (The “Connector”)
The C2PA Specification is a growing standard that helps attach a digital “nutrition label” to content—showing when it was made, how it was edited, and by what capture devices or software. (The “How” and the “What”)
CAWG (Creator Assertions Working Group at DIF) focuses on making sure that individual and organizational content creators can identify themselves with their content and provide additional information for their audience to understand their content. (The “Who”)

Why do we need all three? Because content authenticity isn’t just about how something is created. It’s also about who made it, and how it gets communicated through public networks while retaining the integrity of actions made to it. C2PA gives us technical metadata about tools and edits. CAWG ensures the human creator is identified and attributed. And TSP makes the entire chain, from camera or AI tool to multiple individual human collaborators to final distribution platform, trustworthy at every step. Together, they provide a complete system covering creation, collaboration, and distribution.

All put together, these can help us answer the most important question about this digital artifact: Can I trust this?
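As a mental model only, the three checks could compose as in the outline below; none of these helper functions come from the TSP, C2PA, or CAWG specifications, and their names are hypothetical.

```python
# Hypothetical outline of the three stacked checks described above. None of
# these helpers exist in a published library; they mark where C2PA manifest
# validation, CAWG creator-credential checks, and TSP channel verification
# would each plug in.
def verify_c2pa_manifest(asset: bytes) -> dict:
    """The 'how/what': validate the embedded provenance manifest and
    return its tool/edit metadata (hypothetical)."""
    ...

def verify_creator_assertion(manifest: dict) -> str:
    """The 'who': check the CAWG identity assertion's verifiable
    credential and return the attributed creator (hypothetical)."""
    ...

def verify_tsp_channel(sender_id: str, asset: bytes) -> bool:
    """The connector: confirm the asset arrived over an authenticated
    TSP relationship with the claimed sender (hypothetical)."""
    ...

def can_i_trust_this(asset: bytes, sender_id: str) -> bool:
    manifest = verify_c2pa_manifest(asset)            # tools and edits check out
    creator = verify_creator_assertion(manifest)      # a real, attributed creator
    delivered = verify_tsp_channel(sender_id, asset)  # untampered delivery
    return bool(manifest and creator and delivered)
```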

This isn’t just a technical fix. It’s a new way to think about digital truth. And the paper lays out a path toward a future where users can more confidently trust the source and actions made to digital content in a way that’s accountable, verifiable, and respectful of creators.

Read the full white paper here.

We invite technologists, developers, artists, policy makers, and everyday internet users to take a look. It’s about restoring trust in a world where AI has blurred the lines of what is real and what is artificially generated.

1. Trust Spanning Protocol (TSP) is an ongoing work by Trust over IP (ToIP), a project of LFDT: https://trustoverip.github.io/tswg-tsp-specification
2. The C2PA Specification is an ongoing work by The Coalition for Content Provenance and Authenticity (C2PA): https://c2pa.org/specifications/specifications/2.2/index.html
3. The Creator Assertions Working Group (CAWG) is a joint effort by the Decentralized Identity Foundation (DIF) and ToIP. See https://cawg.io

__

Want to dive deeper into ToIP’s work on verifying authenticity? Check out this LF Decentralized Trust Webinar: Verifiable Authenticity—Answering the Threat of AI Deep Fakes

The post How Can We Trust What We See Online? Here’s One Way Forward appeared first on Trust Over IP.

Wednesday, 18. June 2025

Project VRM

Toward a Personal AI Roadmap for VRM

On the ProjectVRM list, John Wunderlich shared a find that makes clear how advanced and widespread AI-based shopping recommendation has become (and not just with ChatGPT and Amazon). Here it is: Envisioning Recommendations on an LLM-Based Agent Platform: Can LLM-based agents take recommender systems to the next level? It’s by Jizhi Zhang, Keqin Bao, Wenjie […]

On the ProjectVRM list, John Wunderlich shared a find that makes clear how advanced and widespread AI-based shopping recommendation has become (and not just with ChatGPT and Amazon). Here it is: Envisioning Recommendations on an LLM-Based Agent Platform: Can LLM-based agents take recommender systems to the next level?

It’s by Jizhi Zhang, Keqin Bao, Wenjie Wang, Yang Zhang, Wentao Shi, Wanhong Xu, Fuli Feng, and Tat-Seng Chua* and is published in the Artificial Intelligence and Machine Learning section of Research and Advances in Communications of the ACM. So it’s serious stuff.

Here’s one graphic from the piece, with its caption:

Illustration of the Rec4Agentverse. The left side depicts three roles in the Rec4Agentverse: the user, the Agent Recommender, and Item Agents, along with their interconnected relationships. In contrast to traditional recommender systems, the Rec4Agentverse has more intimate relationships among the three roles. For instance, there are multi-round interactions between 1) users and Item Agents and 2) the Agent Recommender and Item Agents. The right side demonstrates how the Agent Recommender can collaborate with Item Agents to affect the information flow of users and offer personalized information services.

With VRM, however, we need our own personal AI agents, and useful data that’s not limited to what’s gathered by surveillance (which we can assume is most if not all of the Information layer). So I started a conversation with ChatGPT. I’ll divide it into sections:

1. Who wrote the CACM piece
2. The state of recommendation science
3. Who is working on personal AI
4. How one can get started
5. Forms of personal data to manage

There’s a lot here. The meat of it, for ProjectVRM purposes, starts in section 3. (The first two are more about what we’re up against.) Please read the whole thing and help us think through where to go with the challenge facing us. As we see in section 3, we do have some stuff on our side.

1. Who wrote the CACM piece

Me: Can you tell me more about the authors here: https://cacm.acm.org/research/envisioning-recommendations-on-an-llm-based-agent-platform/?

ChatGPT:

Jizhi Zhang

Affiliation: University of Science and Technology of China (USTC), Data Science Lab (inferred from USTC page).
Research Interests: Recommendation systems, LLM-based agent platforms, and interactive intelligent systems—his lead authorship and continued work on Prospect Personalized Recommendation on LLM-Based Agent Platform (arXiv Feb 2024) reinforce this.

 Keqin Bao, Wenjie Wang, Yang Zhang, Wentao Shi, Fuli Feng, Tat‑Seng Chua

Affiliations: While CACM doesn’t list this, Jizhi Zhang and Fuli Feng are part of USTC’s Data Science Lab; Tat‑Seng Chua is a distinguished researcher from the National University of Singapore known for multimedia and information retrieval.
Expertise: A mix of recommender system design, LLM-agent evaluation, multimedia analysis, and human-AI interaction—reflecting a blend of USTC’s strengths and CACM relevance.

 Wanhong Xu

Affiliation: USTC Data Science Lab (USTC LDS), contributor to the same article.
Role: Emerging researcher whose latest publication is this Rec4Agentverse article, indicating an early-career focus in LLM-based recommender systems.

Broader Context & Contributions

The group operates at the intersection of Large Language Models, agent-oriented systems, and recommender systems, pioneering a Rec4Agentverse paradigm that treats LLM-powered agents as “recommendable” items collaborating with one another—an evolution of traditional recommendation frameworks.
Their work continues earlier research (e.g., Prospect Personalized Recommendation… in Feb 2024), solidifying their ongoing commitment to expanding intelligent, conversational recommendation platforms.

Why This Matters

Academic Thought Leadership: Jizhi Zhang and colleagues are pushing forward thinking about how recommendations can evolve when agents—not passive items—are central.
Institutional Backing: All deeply connected to USTC’s Data Science Lab, underscoring strong Chinese academic investment in AI-driven recommender innovation.
Collaborative Reach: Including scholars like Tat‑Seng Chua broadens the research to include international perspectives in multimedia and retrieval systems.

In summary: This team is a multidisciplinary cohort—centered at USTC with international collaboration—who are pioneering the idea that LLM agents can become interactive, dynamic items in recommender systems. Their work positions them at the forefront of intelligent agent systems and could reshape how platforms personalize experiences in the future.

If you’re curious about any individual’s past publications, citation impact, or related work (e.g., on Agent4Rec), I’d be happy to dive deeper!

 

2. The state of recommendation science

Me:

How large is the emerging science, within the AI field, of recommendations?

ChatGPT:

The science of recommendations, particularly within the AI field, is not just emerging—it’s a massive, rapidly evolving subfield that is both academically rich and commercially dominant. That said, it’s also entering a new phase, driven by the rise of large language models (LLMs), interactive agents, and multimodal platforms.

Academic Scope

1. Established Field: Recommender Systems

Recommender systems (RecSys) are a well-developed domain within AI and machine learning, with roots in the 1990s.
Annual conferences like:
ACM RecSys (since 2007)
SIGIR, KDD, WSDM, and NeurIPS regularly feature recommendation research.
Thousands of papers annually on topics like:
Collaborative filtering
Matrix/tensor factorization
Deep learning for recommendation
Fairness, transparency, and explainability in RecSys
Reinforcement learning and bandit algorithms for personalization

As of 2024, over 60,000 papers in academic databases (e.g., Google Scholar) are tagged or indexed under “recommender systems”.

Current Trends & Expanding Frontiers

2. LLMs + Recommendation = New Subfield

A major shift is underway: integrating Large Language Models (LLMs) into recommendation systems.
These go beyond “you may also like” to:
Use natural language context and dialogue
Generate explanations, not just rankings
Support conversational recommendation
Coordinate agent ecosystems, e.g., multi-agent reasoning environments

Example paradigms:

RecMind – reasoning-aware recommendation agents
Rec4Agentverse – treating agents as recommendable entities
ChatRecs – integrating LLMs into live user conversations

3. Multimodal & Multitask Integration

Systems increasingly blend text, image, video, behavior, and user feedback.
LLMs allow richer personalization using complex user intent derived from unstructured data.

4. Industry Investment

FAANG companies (especially YouTube, Netflix, Amazon, TikTok, and Spotify) base their core user experience on AI-driven recommendation.
Billions of dollars are spent optimizing:
Ranking quality
Engagement prediction
Real-time inference
Federated and privacy-preserving personalization

Estimate of Scope

| Area | Estimate (2024–25) |
| --- | --- |
| Academic papers (RecSys + AI) | 60,000+ overall; thousands per year |
| Top conferences with recommender tracks | 20+ (ACM RecSys, SIGIR, KDD, NeurIPS, etc.) |
| Active researchers | Tens of thousands globally |
| Emerging subfields | LLM-based Rec, conversational Rec, agent-based Rec, explainable Rec |
| Industry scale | $10B+ annually invested in personalization tech |

In Summary

The science of recommendation is:

Not new, but rapidly evolving
Core to AI applications, especially with LLMs and agents
One of the most commercially and socially impactful domains in modern AI
Growing into new subfields (e.g. Rec4Agentverse, LLM Rec, multi-agent Rec)

Let me know if you’d like a visual map of how the recommendation science ecosystem is structured—or who the top authors and institutions are in this space.

3. Who is working on personal AI

Me: And who, if anyone, is working to provide customers with their own personal AI tools: ones that work for them, to empower their own agency in the marketplace? By which I mean truly personal AI, and not the kind personalized by some external LLM (or similar) system.

ChatGPT:

That’s an essential and forward-looking question—especially as most “AI for you” today really means “AI about you, for someone else.”

While Big Tech firms dominate personalized AI, a growing ecosystem is focused on truly personal AI—that is, AI you own, that works for you (not advertisers or platforms), and that enhances your agency in the marketplace and your life.

Here are the most notable people, projects, and initiatives advancing truly personal AI today:

1. Kwaai and pAI-OS (South Africa / Global)

Kwaai is building an open-source, user-sovereign personal AI OS called pAI-OS.
Designed to run locally or at user discretion, with full data ownership, modular agent systems, and plugin capability.
Aligns with IEEE P7012 (“MyTerms”) privacy standards (see below).
Actively exploring intentcasting, customer signaling, and VRM (Vendor Relationship Management).
Site: https://kwaai.ai

2. Indie AI and Personal Agent projects

Several smaller open-source initiatives are trying to build personal agent toolkits that you can host and configure:

Examples:

AutoGen Studio (Microsoft) – allows building agent workflows, some of which can be user-defined. But it’s not yet fully personal or private.
Langroid – open-source framework for local agent-based LLM systems. Designed for extensibility, can be made personal with effort.
OpenDevin, Superagent, Auto-GPT – can be self-hosted, though most are still tool-centric rather than user-centric.
Personal.ai – A commercial startup that says it is personal, but uses centralized storage and fine-tuning, so actual user control is limited.

 Reality check: Most of these still rely on external LLMs (like OpenAI or Anthropic), even if they claim to be “your AI”.

3. Solid Project (by Tim Berners-Lee)

While not AI per se, Solid is a decentralized identity and data storage architecture that allows individuals to store their own data in personal data pods.
AI tools can be layered on top to access only the data you choose to share.
Foundation for personal AI assistants that can work with your data, not on others’ clouds.
https://solidproject.org

4. IEEE P7012 – MyTerms

A standard being developed to flip the notice-and-consent regime—so individuals, not companies, dictate the terms of data use.
Backed by Customer Commons and designed to support intentcasting and personal marketplace signaling.
This enables a personal AI to serve as your agent, enforcing your terms and engaging the marketplace on your behalf.

5. Project VRM (Vendor Relationship Management)

Initiated by Doc Searls to empower customers with tools to manage vendors, not just the reverse.
Envisions personal tools (including AI) to:
Signal intent
Manage service contracts
Retain records of offers, purchases, warranties
Avoid manipulation, surveillance, and lock-in

Project VRM is not a software project per se, but a framework guiding developers to build personal tools that support individual autonomy.

6. Open-source LLM stacks for local use

Several local LLM projects can be combined with personal data layers to build truly personal AI:

LM Studio – runs local models on desktop
PrivateGPT – local inference + local documents
Ollama – macOS-native interface for fast-running models like LLaMA, Mistral
LangChain / LlamaIndex – frameworks for building RAG systems with local documents

These aren’t personal AI systems yet—but they are key ingredients.

Summary

| Name / Project | What They Do | Status |
| --- | --- | --- |
| Kwaai / pAI-OS | Open, personal AI OS and agent platform | Early but active |
| Solid | Personal data control layer | Stable; in use |
| IEEE P7012 | Standards for user-enforced data terms | In development |
| Project VRM | Framework for tools that empower buyers | Thought leadership; toolkits emerging |
| Local AI frameworks | Tech foundations for personal AI | Available, DIY-friendly |

The Gap

Despite this promising landscape, no major platform today delivers a fully integrated, easy-to-use personal AI that you own, configure, and direct—yet. But:

A personal AI revolution is underway, led by privacy advocates, open-source communities, and a few visionary developers—not Big Tech.

Would you like help putting together a DIY stack for your own personal AI toolkit based on this landscape?

Me: Can you name some of the people in the personal AI revolution, and tell me what that DIY stack would contain?

ChatGPT:

Absolutely. Here’s a breakdown of key people involved in the personal AI movement, followed by a practical DIY stack you can start building today.

People Leading or Advancing the Personal AI Revolution

These individuals aren’t all working on the same projects, but they’re aligned in trying to build AI that’s owned, run, and controlled by the individual—not centralized platforms.

 Tim Berners-Lee

Project: Solid (stewarded by ODI, commercially implemented by companies including TBL’s Inrupt)
Focus: Decentralized web and personal data ownership via data pods
Impact: Infrastructure for privacy-respecting personal AI agents
https://solidproject.org

 Doc Searls

Project: Project VRM, Customer Commons, IEEE P7012
Focus: Customer empowerment, intentcasting, reversing surveillance capitalism
Impact: Philosophy and standards for customer-directed personal AI
https://projectvrm.org

 Tawanda Kembo & Kwaai team

Project: pAI-OS (personal AI operating system)
Focus: Agent-centric open-source operating system for personal AI
Impact: One of the few efforts aiming to create a complete personal AI stack
https://kwaai.ai

 Simon Willison

Project: Datasette, open tools for data management
Focus: Personal data control, lightweight databases, LLM + local tooling
Impact: Key figure in the open data + local AI tooling space
https://simonwillison.net

 Brian Roemmele

Project: Multiplex (personal voice AI)
Focus: Voice-first personal assistants you run locally
Impact: Vocal advocate for true personal AI, often referencing “software you own”

A DIY Personal AI Stack (2025 Edition)

This stack assumes local-first, modular, privacy-respecting tools. You can mix and match based on your needs.

Core Components

| Layer | Tools / Projects | Notes |
| --- | --- | --- |
| 1. OS Layer | Linux, pAI-OS (in dev) | Linux gives flexibility; pAI-OS aims to be a full personal AI OS |
| 2. Local LLM Inference | Ollama, LM Studio, PrivateGPT, LocalAI | Run models like LLaMA 3, Mistral, Phi-3 entirely offline |
| 3. Vector Database | Chroma, Weaviate, Milvus, or SQLite with FTS5 | Store and retrieve embedded text data |
| 4. Personal Indexing | LlamaIndex, LangChain, AutoGen, GPTPilot | Feed personal documents and data to your LLM |
| 5. Personal Data Storage | Solid POD, Datasette, Obsidian | Your life’s data, structured and queryable |
| 6. Personal Agent Framework | Superagent, CrewAI, Langroid, AutoGen | Design agents that act on your behalf, modular and scriptable |
| 7. Front-end Interface | Gradio, Streamlit, Tauri app, Electron | Local GUI/UX to interact with your agent |
| 8. Voice Interface (optional) | OpenVoice, Vosk, Whisper.cpp, Coqui TTS | Build voice-in/voice-out agent on your machine |

Optional Add-ons

Offline search: Chat over your files, PDFs, photos, and emails with LlamaIndex + PrivateGPT
Calendar/notes: Sync with Obsidian or local markdown-based knowledge base
Browser agent: Auto-control browser via Puppeteer/Playwright for marketplace navigation

What’s Still Missing?

True personal OS integration (like what pAI-OS is trying to do)
Seamless data portability and contract enforcement (IEEE P7012 is working on it)
Trusted personal AI app ecosystem (some apps, like Beeper or Obsidian, are getting close)

Would you like a tailored recommendation for building a personal AI stack based on your technical skill level or use cases (e.g., journaling, research, negotiation, shopping, data querying)?

4. How can I get started?

Me: Yes. My technical skill level is modest at best. I can write HTML and do very basic stuff in the command line. I have an extra laptop I can devote to solo work, and Ethernet to a 2.1GB/s symmetrical fiber connection to the Internet with no data limits.

ChatGPT:

Great — with your setup (modest technical skills, spare laptop, blazing-fast fiber), you’re in an excellent position to begin building a practical personal AI system without diving too deep into complex dev work. Here’s a recommended stack tailored to your current skills and goals:

YOUR PERSONAL AI STARTER STACK (2025)

1. Platform Base

Use a Linux distribution designed for ease of use and AI tinkering:

Recommended: Pop!_OS or Ubuntu 24.04
Easy GUI, good hardware compatibility, lots of online support
Optional: Use your laptop in “air-gapped” mode for privacy and full local control

2. Run a Local LLM with LM Studio

LM Studio (macOS/Windows/Linux) lets you run LLaMA 3, Mistral, Phi-3, and others without coding.
Built-in UI — chat with models locally, no internet required after download.
You can download 7B or 13B models based on your hardware.
Best for: private journaling, summarizing files, answering questions.

 Why it’s ideal for you:

GUI-based
Just click and download models
No API keys or command-line needed once installed
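One practical note: recent LM Studio builds can also expose a local OpenAI-compatible server (commonly on http://localhost:1234/v1; check your version's settings), which lets other local tools query the model. A minimal sketch using only Python's standard library, with the endpoint and model name as assumptions:

```python
# Minimal sketch: query a locally running model via LM Studio's
# OpenAI-compatible local server. Enable the server in LM Studio first;
# the port and model name below are assumptions that may differ per setup.
import json
import urllib.request

def ask_local_model(prompt: str, base="http://localhost:1234/v1") -> str:
    payload = {
        "model": "local-model",  # LM Studio routes this to the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize my goals for June in one sentence."))
```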

3. Chat with Your Files (PrivateGPT / LlamaIndex)

PrivateGPT or GPTPilot can let you ask questions across PDFs, emails, and notes stored on your machine.
Start simple: drop files into a folder, let the AI index them.
Optionally install via Docker (one command), or use their prebuilt apps.

Use case:

“Summarize this research PDF”
“What did I write about project X last month?”
“Find the email where someone mentioned a refund”

4. Knowledge Base + Journaling with Obsidian

Obsidian = Markdown-based knowledge base.
Everything you write stays local.
You can point LM Studio to your notes to query your own memory.

Easy way to:

Keep private journals
Index your life
Ask your AI things like “What are my goals for June?”

5. Personal Data Management with Datasette

Datasette by Simon Willison lets you browse and query your own data with simple SQL and CSV files.
Feeds AI with structured personal info (purchases, inventory, plans, etc.)
You can install and run it with a couple of terminal commands:

```bash
pip install datasette
datasette serve your-data.db
```

Combine with LM Studio to get responses like:

“What subscriptions do I need to cancel?”
“When did I last back up my photos?”
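To give Datasette something to serve, you can create your-data.db with Python's built-in sqlite3 module; the table and column names below are illustrative, not required by Datasette:

```python
# Minimal sketch: create your-data.db with one example table so Datasette
# has something to serve. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect("your-data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS subscriptions (
        name TEXT,
        monthly_cost REAL,
        renewal_date TEXT,  -- ISO dates sort correctly as text
        cancel_candidate INTEGER DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO subscriptions VALUES (?, ?, ?, ?)",
    [
        ("Streaming service", 15.99, "2025-07-01", 1),
        ("News site", 8.00, "2025-07-12", 0),
    ],
)
conn.commit()
conn.close()
# Then: datasette serve your-data.db
```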

6. Optional: Browser Helper Agent (Local, Simple)

Use AutoBrowser or a simpler Puppeteer script to automate browsing tasks (like checking prices, news, account activity).
Start with prebuilt scripts. No coding needed if you copy/paste working examples.

7. Privacy Boosters

Run Tailscale or WireGuard to access your AI remotely if needed.
Use LocalAI or Ollama if you later want to run everything without GUI (more advanced).

Putting It All Together

| Task | Tool |
| --- | --- |
| Chat privately with an AI | LM Studio |
| Summarize your own notes or PDFs | PrivateGPT or GPTPilot |
| Store and reflect on your life data | Obsidian, Datasette |
| Manage subscriptions and tasks | Datasette + Chat |
| Ask about your files or history | LlamaIndex or GPT over Obsidian notes |
| Use voice later (optional) | Whisper.cpp, OpenVoice (easy tools) |

Next Steps (in plain order)

Install Pop!_OS or Ubuntu on your spare laptop
Install LM Studio and download a 7B model like Mistral or LLaMA 3
Start using it to answer personal questions and summarize documents
Add Obsidian to start journaling and reflecting
Try PrivateGPT or Datasette to search your past or track your life

Would you like a checklist or an install script to get started with this stack? I can also help you set up a file structure to begin tracking and querying your own data.

5. Forms of personal data to manage

Me: The personal data I need to manage are health records (from 20-plus providers), financial records (fewer than health), travel records (including the kind kept by apps that watch my movements), subscriptions and other recurring payments, shows I have watched and on what channels or services (data gathered by my Samsung and Roku TVs and presumably shared with others but to which I don’t yet have access), continuing work and work history with various clients and employers, contacts, archival and current calendars, real estate, personal items bought or acquired from various sources and where they are kept or stored, and business dealings. What would be the file structure for all that, and how would I manage it? Note that I am not talking about shopping yet; that’s a separate concern.

ChatGPT:

Thanks for the detailed breakdown — you’re describing the foundation of a true personal data architecture: structured, queryable, and under your control. Let’s build you a file structure and management strategy that matches your needs without overcomplicating the tech.

Recommended Top-Level Folder Structure

This folder structure is designed to be future-proof, machine-readable, and human-usable. Use plain folders, CSV/JSON/Markdown files, and optional SQLite databases to store structured data. You can later layer tools (like LLMs or Datasette) on top.

/PersonalAI_Data

├── Health/
│ ├── Providers/
│ ├── Visits/
│ ├── Prescriptions/
│ ├── Labs_Imaging/
│ └── Insurance_Claims/

├── Finance/
│ ├── Bank_Accounts/
│ ├── Credit_Cards/
│ ├── Investments/
│ └── Taxes/

├── Travel/
│ ├── Trips/
│ ├── Locations_History/
│ └── Maps_GPX_KML/

├── Subscriptions/
│ ├── Streaming/
│ ├── News_Publications/
│ ├── Software_Tools/
│ └── Utilities_Bills/

├── Media_Viewed/
│ ├── TV_Shows/
│ ├── Movies/
│ ├── Channels_Services/
│ └── Watch_History.csv

├── Work/
│ ├── Employers/
│ ├── Clients/
│ ├── Projects/
│ └── Timesheets_Invoices/

├── Contacts/
│ └── contacts.csv

├── Calendar/
│ ├── Archive_ICS/
│ ├── Current_ICS/
│ └── Events_Tags.csv

├── Real_Estate/
│ ├── Properties_Owned/
│ ├── Rentals/
│ └── Maintenance_Records/

├── Personal_Items/
│ ├── Furniture/
│ ├── Electronics/
│ ├── Art_Collectibles/
│ └── Inventory.csv

└── Business/
    ├── Deals/
    ├── Agreements/
    ├── Partners/
    └── Notes/
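If you'd rather not create that tree by hand, a short script can scaffold it; this is a minimal sketch using only the standard library, with the folder lists copied from above (trim or extend to taste):

```python
# Minimal sketch: scaffold the folder tree above under ~/PersonalAI_Data.
# Edit the lists to match your own categories before running.
from pathlib import Path

TREE = {
    "Health": ["Providers", "Visits", "Prescriptions", "Labs_Imaging", "Insurance_Claims"],
    "Finance": ["Bank_Accounts", "Credit_Cards", "Investments", "Taxes"],
    "Travel": ["Trips", "Locations_History", "Maps_GPX_KML"],
    "Subscriptions": ["Streaming", "News_Publications", "Software_Tools", "Utilities_Bills"],
    "Media_Viewed": ["TV_Shows", "Movies", "Channels_Services"],
    "Work": ["Employers", "Clients", "Projects", "Timesheets_Invoices"],
    "Contacts": [],
    "Calendar": ["Archive_ICS", "Current_ICS"],
    "Real_Estate": ["Properties_Owned", "Rentals", "Maintenance_Records"],
    "Personal_Items": ["Furniture", "Electronics", "Art_Collectibles"],
    "Business": ["Deals", "Agreements", "Partners", "Notes"],
}

root = Path.home() / "PersonalAI_Data"
for top, subs in TREE.items():
    for sub in subs or [""]:
        (root / top / sub).mkdir(parents=True, exist_ok=True)
print(f"Created tree under {root}")
```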
Strategy for Managing It All

1. Use Consistent File Types

| Data Type | File Format |
| --- | --- |
| Structured entries | .csv, .json, .sqlite |
| Notes or logs | .md (markdown) |
| Documents | .pdf, .docx, .txt |
| Images/attachments | store in an attachments/ subfolder per topic |

2. Naming Conventions (Machine + Human Friendly)

Use ISO-like formats to make things sortable and parseable:

```
2024-06-16_labresults_providerX.csv
2023_Q4_taxes_federal.pdf
clientA_2022-11_contract.pdf
```
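A tiny helper can keep those names consistent; the function below is a sketch of the convention, and its name and parameters are made up:

```python
# Tiny sketch: generate sortable, parseable filenames per the convention
# above. The function name and parameters are invented for illustration.
from datetime import date
from typing import Optional

def record_name(topic: str, source: str, ext: str,
                when: Optional[date] = None) -> str:
    d = (when or date.today()).isoformat()  # e.g. 2024-06-16
    return f"{d}_{topic}_{source}.{ext}"

print(record_name("labresults", "providerX", "csv", date(2024, 6, 16)))
# -> 2024-06-16_labresults_providerX.csv
```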
3. Data Capture Tools

You don’t need to build it all manually:

Use Obsidian for ongoing notes (e.g. travel journal, work logs)

Use Paperless-ngx (optional) to OCR and tag your scanned PDFs (health, finance)

Use GPS History Export from Google and Apple for travel/locations

Use email filters or Zapier to auto-download bills, statements, confirmations

4. Querying and Reviewing

Add one or more of these later:

Datasette → browse and search CSVs/SQLite locally

LM Studio + PrivateGPT → ask, “When did I last visit Dr. Chen?” or “What devices are in storage?”

LlamaIndex → index your file tree and let AI search/summarize it

Summary of Benefits

| Goal | This Setup Delivers |
| --- | --- |
| Total local control | You decide access, storage, retention |
| Future AI-readiness | Structured enough for agents to use |
| Easy to back up | Plain files, portable databases |
| Modular growth | Start simple, layer in AI when ready |

Would you like me to generate a starter template with folder structure and some CSV/Markdown file formats filled in with mock data for each domain? That way you can load it directly into your system and start customizing.

Me again:

That’s it for now.

What should my next request be, or what other direction would you like me (or us) to take this? Or, what are the learnings we have so far in this field that barely exists? Oh, and excuse whatever inaccuracies ChatGPT may have floated above. Or give me fixes. Thanks.

*The links are mine and based on searches for those names. I might have some of them wrong, however, since searches for those names bring up many scholars with the same names. And, while each name had a link, they all went to an author archive page that was (with one exception) empty of other contributions to the publication.

EdgeSecure

MENTOR Newark Partners with Edge to Find a New Home

The post MENTOR Newark Partners with Edge to Find a New Home appeared first on NJEdge Inc.

NEWARK, NJ, June 18, 2025 – A shared commitment to community and service led to a meaningful partnership between Edge, a nonprofit organization that recently transitioned to a fully virtual workforce, and MENTOR Newark, a local organization dedicated to supporting youth through mentoring and empowerment programs.

When Edge found itself with available office space, it began the search for a subtenant. A mutual business contact introduced Edge’s Executive Director of Human Resources and Administration, Amy Olavarria, to Thomas Owens, Executive Director of MENTOR Newark. “We had been working with a realtor to sublease the space but couldn’t find a good match, until we were introduced to Thomas and learned about his organization’s mission and their need for a new headquarters,” said Olavarria. “I’ve met several of the young adults in the program, and they’re all so mature, motivated, and well-mannered. It’s inspiring to see, and MENTOR Newark is truly an incredible organization.”

Previously located in a lower-level space in the same building, MENTOR Newark had involved its students in renovating their former office, so the move came with some uncertainty. “Our students were deeply involved in creating the previous space, so I wasn’t sure how they would connect with the new one,” admitted Owens. “But when they walked in, they immediately saw the possibilities and were genuinely excited. They appreciated the new amenities, including the kitchen, private bathrooms, and most importantly, their own dedicated area called the Student Office.”

That sense of ownership was on full display during an open house in April 2025, where community members gathered to tour the space and meet the young leaders of MENTOR Newark. “People were moved, and saw the students taking responsibility for the space, engaging with guests, and leading parts of the event,” shares Owens. “There was dancing, laughter, and a strong sense of pride. When people learned it all came together through a partnership between two nonprofits, it left a lasting impression.”

For Edge, the partnership with MENTOR Newark was a natural extension of its mission and community commitment. “Edge is honored to have Thomas and his organization in this space,” adds Olavarria. “Even though MENTOR Newark and Edge are two nonprofits in different industries, we share a common goal: to help and serve the people in our communities and beyond. Working with MENTOR Newark is perfectly aligned with Edge’s brand promise of CONNECTED. COMMITTED. COMMUNITY. I attended their grand opening, and the atmosphere was one of home, community, and acceptance—regardless of age or background. It was a powerful feeling of unity and peace.”

About Edge

Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.

About MENTOR Newark

MENTOR Newark is the New Jersey affiliate of MENTOR, the National Mentoring Partnership, and connects youth in Newark, New Jersey, to caring mentors who provide guidance, support, and positive role modeling. The organization collaborates closely with the Newark Board of Education to offer mentoring professional development to staff and support the implementation of a comprehensive district-wide student mentoring program. MENTOR Newark actively works with more than 40 local mentoring organizations, many of which run programs within Newark Board of Education schools. To learn more, visit www.newarkmentoring.org.

The post MENTOR Newark Partners with Edge to Find a New Home appeared first on NJEdge Inc.


FIDO Alliance

Meta Newsroom: Introducing Passkeys on Facebook for an Easier Sign-In

We’re introducing passkeys on Facebook for mobile devices, offering another tool to safeguard your privacy and security. Passkeys are a new way to verify your identity and log in to […]

We’re introducing passkeys on Facebook for mobile devices, offering another tool to safeguard your privacy and security. Passkeys are a new way to verify your identity and log in to your account that’s easier and more secure than traditional passwords. 

Passkeys will soon be available on iOS and Android mobile devices for Facebook, and we will begin rolling out passkeys to Messenger in the coming months. The same passkey you set up for Facebook will also work on Messenger once this capability launches. 


FIDO Alliance Releases Authenticate 2025 Agenda

Carlsbad, Calif., June 18, 2025 – The FIDO Alliance has announced the agenda for Authenticate 2025, the only industry conference dedicated to digital identity and authentication with a focus on phishing-resistant […]
FIDO Alliance’s flagship event features an expanded agenda to deliver practical strategies for implementing usable, phishing-resistant security across the entire account lifecycle.  Super Early Bird discounts are available through June 20.

Carlsbad, Calif., June 18, 2025 – The FIDO Alliance has announced the agenda for Authenticate 2025, the only industry conference dedicated to digital identity and authentication with a focus on phishing-resistant sign-ins with passkeys. The event will take place October 13–15, 2025 at the Omni La Costa Resort and Spa in Carlsbad, Calif., with options for virtual participation available.

The focus of the program for the Authenticate 2025 conference is achieving phishing-resistant authentication with passkeys and the adjacent considerations required to achieve end-to-end account security with usability in mind.

Visit https://authenticatecon.com/event/authenticate-2025/ to view the full session guide and register ahead of the June 20th Super Early Bird deadline.

Authenticate is built for CISOs, security strategists, enterprise architects, product leaders, UX professionals, and anyone engaged in the identity lifecycle from strategy to implementation. Attendees will gain practical knowledge around deploying phishing-resistant authentication at scale, designing secure user experiences, understanding complementary technologies, and navigating policy and compliance requirements. 

This year’s event will showcase keynotes and sessions led by top executives and industry leaders at the forefront of the passwordless movement. The agenda for 2025 has been revamped to include: longer track sessions for more in-depth presentations, an increased focus on masterclasses for actionable synced and device-bound passkey implementation best practices, and a new solutions theater track to showcase live demonstrations of the latest identity and authentication solutions. This year’s agenda also features more opportunities for networking and exploration of the interactive expo hall to foster collaboration and idea sharing.

With four dynamic stages across four curated content tracks, Authenticate 2025 will offer sessions on:

Account onboarding
Remote identity verification and proofing
Authorization
Biometrics
Session security
Device onboarding and authentication
Cybersecurity/fraud threats and detection
Digital identity/digital wallets
The future of digital identity and authentication

Sponsorship Opportunities Available
Authenticate 2025 offers unique sponsorship opportunities for companies to showcase solutions to an engaged, decision-making audience. With limited availability remaining, prospective sponsors can learn more and apply at https://authenticatecon.com/sponsors/ or contact authenticate@fidoalliance.org

About Authenticate 

Authenticate is the premier conference dedicated to advancing digital identity and authentication, with an emphasis on phishing-resistant sign-ins using passkeys. Hosted by the FIDO Alliance, this event brings together CISOs, security strategists, product managers and identity architects to explore best practices, technical insights and real-world case studies in modern authentication. The 2025 conference will take place from October 13-15 at the Omni La Costa Resort & Spa in Carlsbad, California, and will be co-located with the FIDO Alliance member plenary sessions, which run through October 16. 

Authenticate is hosted by the FIDO Alliance, the cross-industry consortium providing standards, certifications and market adoption programs to accelerate utilization of simpler, stronger authentication with innovations like passkeys. Signature sponsors for Authenticate 2025 are Google, Microsoft, Visa, and Yubico.

To learn more and register, visit https://authenticatecon.com/event/authenticate-2025/, and follow @AuthenticateCon on X. Register now and get the super early bird discount through June 20, 2025.

Authenticate Contact
authenticate@fidoalliance.org

PR Contact
press@fidoalliance.org

Tuesday, 17. June 2025

DIF Blog

Progress Report from the Creator Assertions WG

v1.1 specification integrates the novel Identity Aggregator mechanism

v1.1 of the 3 specifications includes one powerful new moving part

As CAWG activity picks up steam at DIF, there are a few details the rest of DIF and the broader community of decentralizers might want to be tracking:

The specification includes a powerful new indirection called an Identity Aggregator, which designates an external authority to translate a long-lived identifier embedded in signed credentials (at time of publication) to one or more identifiers with local significance anywhere an asset is used (at time of republication or consumption).
Industry-specific identifier schemes are being researched by a distinct task force within the group for prototyping and getting adoption in media verticals.
Registering/organizing external metadata standards and DID interop are ongoing discussions.

Wait, what Working Group is this?

If you're following from a distance, you have a vague sense that CAWG is a DIF working group doing something-something C2PA. If that distance is a long distance, you might know C2PA is a big-name, media-authenticity initiative with many mega-corporations signed on. The reality is actually more decentralized than meets the eye: CAWG is specifying open-world extension points that use Verifiable Credentials to let all kinds of claims and all kinds of actors embed metadata and rights declarations in C2PA documents, not just the big boys.

As the name would imply, "creators" is a capacious term which includes influencers, independents, rights-holders unions, creative agencies, freelancers, and even anonymous social media microcelebrities alike. The extension points and interoperability mechanisms this group is working on bring verifiability at various scales at once, and to various kinds of ecosystems and markets. The "Assertions" that these creators are inserting into signed C2PA manifests embedded in professional media assets are the open-world extension point that lets C2PA manifests contain arbitrary metadata (treated in a separate specification being iterated by the group), arbitrary trust signals, and arbitrary attached credentials.

Enter the Aggregator

The Identity Claims Aggregator (often referred to as "aggregator" in the group) names a piece of software (which can be internal or external to an authoring tool) that tracks multiple identifiers enabling verifiable credentials (issued against them) to be inserted meaningfully. It also witnesses proofs of (and later attests to) control of many kinds of external identifiers, and generally organizes the chaos of the world's many overlapping identifier schemes and attestation formats. To the outside observer, this might seem a very complicated translation mechanism, but to the decentralized identity veteran, this is a familiar necessity. Every verifiable credential scheme eventually needs this kind of translator/aggregator role, if it is to be an open system or even if it is "only" going to federate across the tech stacks of multiple existing systems.
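In code terms, the aggregator's core job is a witnessed translation table. The sketch below is a toy model of that role, with invented class and method names; it is not drawn from the CAWG specification:

```python
# Toy model of the "identity claims aggregator" role described above: it
# witnesses proofs of control over external identifiers and later answers
# lookups from one identifier scheme to another. Not from the CAWG spec;
# class and method names are invented for illustration.
from collections import defaultdict
from typing import Optional

class IdentityClaimsAggregator:
    def __init__(self):
        # long-lived identifier -> {scheme: local identifier}
        self._aliases = defaultdict(dict)
        self._witnessed = set()

    def witness_control(self, long_lived_id: str, scheme: str,
                        local_id: str, proof_ok: bool) -> None:
        """Record an external identifier after verifying a control proof
        (signature challenge, OAuth flow, etc. -- abstracted to a bool)."""
        if not proof_ok:
            raise ValueError("control proof failed")
        self._aliases[long_lived_id][scheme] = local_id
        self._witnessed.add((long_lived_id, scheme))

    def resolve(self, long_lived_id: str, scheme: str) -> Optional[str]:
        """Translate the credential's long-lived identifier into one with
        local significance wherever the asset is being consumed."""
        return self._aliases.get(long_lived_id, {}).get(scheme)

agg = IdentityClaimsAggregator()
agg.witness_control("did:example:creator123", "social", "@creator", proof_ok=True)
print(agg.resolve("did:example:creator123", "social"))  # @creator
```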

[Figure taken from section 8.1.1.2 of the v1.1 Identity Assertion specification]
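
To make the indirection concrete, here is a minimal sketch of the flow in Python. All field names and the endpoint below are hypothetical, invented purely for illustration; nothing here is normative CAWG syntax:

    # Hypothetical sketch of the Identity Aggregator indirection. Field
    # names and the endpoint are illustrative, not taken from the CAWG spec.

    PUBLISHED_CREDENTIAL = {
        "credentialSubject": {
            # Long-lived identifier signed into the asset at publication time.
            "id": "did:example:creator-123",
            # The external authority designated to translate it later.
            "identityAggregator": "https://aggregator.example/translate",
        }
    }

    def resolve_local_identifiers(credential: dict, consuming_context: str) -> list[str]:
        """At republication or consumption time, ask the designated aggregator
        to translate the long-lived identifier into identifiers with local
        significance (e.g. a platform handle or an industry registry ID)."""
        subject = credential["credentialSubject"]
        request = {
            "longLivedId": subject["id"],
            "context": consuming_context,  # e.g. "social-platform-x"
        }
        # A real implementation would POST `request` to
        # subject["identityAggregator"] over authenticated HTTPS.
        return ["@creator123"]  # illustrative stub response

The point of the indirection is visible in the function signature: the asset carries one stable identifier, while the identifiers that matter at consumption time are resolved on demand.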

The conversation so far in the working group has been working its way from the general to the concrete: do aggregators only aggregate identifiers and information sources known at the time of authoring/inserting, or can an aggregator add new attestations at a later time? Must aggregators limit themselves to public identifiers, or can they use internal, aggregator-specific ones? Can an aggregator host a live API for post-facto information to be passed out-of-band, like additional or updated credentials? And if so, how tightly should this group specify such an API? These are the high-level questions being debated on the back-burner of CAWG meetings this summer.

The Aggregator-Indirection Question

Zooming in a little more, there are further questions being tackled. Aggregators model a great happy-path solution for embedding declarations and strong identifications into each asset, but what about the many unhappy paths? For example, what if a creator’s assertions are scattered across many identifiers and credential types, and embedding those assertions requires all kinds of translations and metadata to be legible? What if identifiers change, or new assertions become relevant after publishing? Can “placeholder” or indirection identifiers be used to query data sources that continue receiving assertions after publication? Can an indirection or service be used to display more or fewer assertions depending on audience, or on consent policies that change over time? Can assertions and identities be “self-custodied”? What is the “account recovery” story for these increasingly complex use-cases?

While adding the aggregator was the biggest change in v1.1, it will be a long while until the exact scope and limits of this role are decided. It may well be that some advanced features get postponed to a later stage in the roadmap, because of the sheer complexity they entail, but it will definitely be an ongoing topic simmering in the background whenever smaller debates come up.

Separate Work Stream: Industry-Specific Identifiers

In parallel, a subgroup is meeting separately to research and sanity-check the integration of major media-industry identifier schemes and metadata schemes, looking for interop corner-cases and relevant prior art.

Interested parties are encouraged to pop into the subgroup's GitHub issues and meetings if they are working on (or just curious about, or experienced with) industry associations and media archiving best practices. The usual IP caveats apply: if joining a live call as a non-DIF member or commenting on GitHub issues, refrain from going into concrete detail on anything "patentable" like implementation details or solutions.

Other Big Questions between v1.1 and v2.0

These advanced features of the aggregator aren’t the only big questions that we can expect to simmer and percolate across the next few "minor versions". Additionally, the interop issues around existing metadata standards (not just major W3C standards, but real-world ones from industry and library sciences) are potentially inexhaustible, as the group's Metadata specification gives scaffolding for inserting any structured metadata into assertions.

A slightly less vast but still very open-ended interoperability question is which DID methods to recommend (or even require) of all implementers, and how to manage or triage the remainder of the list of possible current (and future!) DID methods. Intersecting with the ongoing work of the DID Method Standardization Working Group, and older efforts like DID Traits to define equivalence or translatability between DID methods with similar architectures, semantics and guarantees, there is a simmering backlog-debate about which forms of DID make sense, and to what degree, for the CAWG use-cases.

Of course, the evaluation of DID methods for these tiered-accreditation and decentralized reputation use-cases necessarily includes more than just technical analysis; legal and business readiness factor in as well, including competitiveness and market health/structure considerations to keep media authenticity from being a perk of closed ecosystems. Luckily, the new co-chair Scott Perry brings much experience and open lines of dialogue with multiple working groups at the Trust over IP Foundation that work on exactly these aspects of DID technology and business processes. In particular, agentic identity is a topic that ToIP generally, and Scott specifically, are bringing into the scope of the WG, so keep an eye out for issues and PRs along those lines in the coming months as well.


FIDO Alliance

MSN: Google Pushes 2 Billion Gmail Users to Adopt Passkeys Over Passwords


Google is making its biggest security push yet. The company strongly urges its 2 billion Gmail users to switch passwords to passkeys. While not mandating immediate changes, Google has made passkeys the default authentication method. They’ve also set a hard deadline for third-party apps. The FBI reported cyber attacks jumped 33% last year. Those attacks cost over $16 billion in damages. Google’s response shows how seriously Big Tech is taking the password problem affecting every internet user.

Monday, 16. June 2025

We Are Open co-op

Gaps in AI Literacy provision

A values-based list of barriers faced by 14–19 year olds in the UK

In this post, we continue sharing outputs from a project we’re working on with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. We’ve already published:

What does AI Literacy look like for young people aged 14–19?
What makes for a good AI Literacy framework?
Core Values for AI Literacy
AI Literacy or AI Literacies?

Image CC BY-ND Visual Thinkery for WAO

This project has involved both desk research and input from experts in the form of a survey, individual interviews, and a roundtable which we hosted a couple of weeks ago. One area we wanted to ensure we understood was the gaps in existing provision around AI Literacies for young people.

The gaps we identified were focused on the 14–19 age range in the UK, resulting in a long list of areas with many themes. We have organised and summarised these around the core values identified in a previous post.

The gaps reflect a pattern seen across education, media, and wider society: provision is uneven. It is often shaped by short-term thinking and competing interests. Overall, it is limited by a lack of clear leadership or coordination.

Unfortunately, many interventions around AI Literacies are focused on technical skills or compliance. These do not connect with young people’s real interests or lived experiences, nor do they address the deeper ethical, social, and cultural questions raised by AI.

As a result of this, many learners — especially those already facing disadvantage — are left with fragmented support and few opportunities to develop genuine agency or critical judgement.

Human Agency and Informed Participation

Lack of systemic, rights-based frameworks: There is little structured provision to help young people shape, question, or influence AI, with most education focused on adapting to technology rather than encouraging agency or clarifying institutional responsibilities.
Dominance of industry narratives: Commercial interests and tech industry funding often drive the agenda, narrowing the conversation and limiting opportunities for young people to challenge prevailing narratives or understand the political dimensions of AI.
Insufficient progression and curriculum integration: There is no standardised, dynamic curriculum or progression framework for AI Literacies, especially for post-16 learners, and limited integration across subjects beyond computing or digital studies.
Teacher confidence and support gaps: Many teachers lack confidence, training, and adaptable resources to support the development of AI Literacies, resulting in inconsistent, sometimes contradictory, messaging and limited support for critical engagement.
Disconnect between knowledge and action: Awareness of AI bias, manipulation, or power structures does not reliably translate into agency or behavioural change, with motivation and broader social context often overlooked.

Equity, Diversity, and Inclusion

Persistent digital and social divides: Access to tools and resources to develop AI Literacies is highly unequal, shaped by school policies, family resources, and broader digital divides, with privileged students often able to bypass restrictions.
Lack of cultural and global adaptation: Most resources are developed in the global north and do not reflect the needs or realities of diverse cultural, socioeconomic, or linguistic backgrounds, including those in the global south.
Barriers for marginalised groups: AI tools and resources can disadvantage non-native English speakers, students with disabilities, and those with limited digital access, reinforcing existing inequalities.
Neglect of visual and multimodal literacy: There is insufficient focus on images, deepfakes, and multimodal content, despite their growing importance for misinformation and manipulation.
Resource design and authenticity: Overly polished, anthropomorphised, or inaccessible resources can alienate young people; there is a need to co-design authentic, relatable, and context-driven materials that reflect lived experiences with young people from a range of backgrounds.

Creativity, Participation, and Lifelong Learning

Short-termism and lack of sustainability: Funding and interventions are often short-lived, with little focus on long-term, joined-up strategies or progression frameworks.
Imbalance between creativity and consumption: Most young people are consumers, not creators, of AI content; there is insufficient emphasis on participatory, creative, and hands-on engagement with AI.
Restrictive and risk-averse policies: Overly strict barriers on access to AI tools in schools can limit meaningful learning opportunities and create anxiety or underground use.
Missed opportunities for experiential and peer learning: There is underuse of hands-on, constructionist, and peer-led approaches, which are effective for this age group and for a rapidly evolving field like AI.
Failure to address entrenched digital habits: Many interventions come too late to shift established digital habits; young people may have high digital skill but lack guidance on purposeful, critical, or participatory use.

Critical Thinking and Responsible Use

Overemphasis on technical skills: Current provision is skewed towards prompt engineering and functional tool use, with insufficient attention to understanding different kinds of AI, ethical reasoning, systemic impacts, and critical engagement.
Insufficient ethical, environmental, and societal focus: Real-world harms, environmental costs, and the broader impact of AI are rarely discussed, leaving gaps in understanding responsible use.
Media, information, algorithmic, and data literacy gaps: Young people struggle to understand how data shapes AI outputs, how to assess real versus fake (including deepfakes), and how to evaluate, challenge or seek redress for algorithmic decisions or AI-generated content.
Anthropomorphism and mental models: Many young people, particularly younger teens, misattribute human-like qualities to AI, affecting their critical judgement and ability to interrogate outputs.
Lack of robust assessment and evidence: There is a shortage of baseline data on AI literacy levels and limited frameworks for evaluating the effectiveness and impact of interventions, especially in terms of behavioural change.

Upholding Human Rights and Wellbeing

Disconnection from youth interests and lived experience: AI Literacy resources often fail to connect to young people’s real interests (creativity, sports, mental health), focusing instead on employability or compliance.
Socio-emotional and privacy risks: Young people may use AI for companionship or advice, sharing sensitive information without understanding privacy or data risks; frameworks rarely address identity, trust, or changing markers of adulthood.
Confusion and inconsistency in terminology: There is no consensus on what “AI literacy” means, and inconsistent definitions can intimidate learners or place excessive responsibility on individuals.
Unclear responsibility and leadership: It remains unclear who should lead on the development of AI Literacies. Schools, parents, government, industry, and third sector bodies all have a role to play, but the current situation leads to fragmented provision and a lack of accountability.
Neglect of digital relationships and boundaries: The role of AI as an “invisible third party” in relationships, and the shifting boundaries of privacy and identity, are rarely addressed in current resources.

Next up

We’re still finalising our framework for AI Literacies and will be sharing it soon. Meanwhile, you can follow our work on this topic so far at https://ailiteracy.fyi.

Please do get in touch if you have projects and programmes that can benefit from our experience and expertise in education and technology!

Acknowledgements

The following people have willingly given up their time to provide invaluable input to this project:

Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox​, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager


DIDAS

GC25 – Global Digital Collaboration – July 1–2, 2025


At Global Digital Collaboration on July 2nd, a full day of sessions co-curated by DIDAS and partners will address how privacy-enhancing technologies (PETs) and trustworthy governance models can become core enablers of digital trust across sectors and jurisdictions.

The day begins with a high-level update session featuring SPRIND, Google, EPFL, Johannes Kepler University, and others. It will explore the current maturity, post-quantum readiness, and practical deployment of PETs such as BBS+, SD-JWT, and ZK-mDoc. The session aims to establish shared terminology and frameworks for unlinkability and selective disclosure across global credential ecosystems.
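
For readers new to these PETs, the following toy Python sketch shows the salted-hash mechanism at the core of SD-JWT-style selective disclosure. It is heavily simplified: a real SD-JWT wraps the digests in a signed JWT and encodes disclosures as base64url strings, neither of which is shown here.

    import hashlib
    import json
    import secrets

    def make_disclosure(claim_name: str, claim_value) -> tuple[str, str]:
        """Return (disclosure, digest): the disclosure is kept by the holder,
        while only its digest is embedded in the issuer-signed credential."""
        salt = secrets.token_hex(16)  # fresh salt prevents digest guessing
        disclosure = json.dumps([salt, claim_name, claim_value])
        return disclosure, hashlib.sha256(disclosure.encode()).hexdigest()

    # Issuer: embed only digests in the signed credential.
    disclosures, digests = {}, []
    for name, value in {"name": "Alice", "age_over_18": True}.items():
        d, h = make_disclosure(name, value)
        disclosures[name] = d
        digests.append(h)
    signed_credential = {"sd_digests": sorted(digests)}  # issuer signs this

    # Holder: reveal only the age claim; the verifier recomputes its digest
    # and checks membership, learning nothing about undisclosed claims.
    revealed = disclosures["age_over_18"]
    assert hashlib.sha256(revealed.encode()).hexdigest() in signed_credential["sd_digests"]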

In parallel, the e-democracy workshop series (Parts 1 & 2), led by the Center for Digital Trust (C4DT) at EPFL, DIDAS, the Human Colossus Foundation, and other civil society actors, will explore how digital services like e-ID, e-collecting, and e-voting must be redesigned for resilience to protect public trust, prevent fraud, and ensure accountability. The sessions aim to define foundational principles for a trustworthy digital democracy, co-created by experts in law, governance, cryptography, and policy.

Running alongside, a collaborative mapping session convened by Johannes Kepler University, Orange, Ethereum researchers, and DIDAS invites EUDI and other global ecosystem and pilot teams to identify and classify global use cases where PETs, particularly zero-knowledge proofs, are essential. The session will help align performance and privacy requirements across deployment contexts, feeding into implementation roadmaps and standards discussions.

In the afternoon, a deep dive on unlinkability will be led by experts from Google, SPRIND, EPFL and the Linux Foundation’s decentralized trust initiatives. This session will focus on the risks of issuer–relying party collusion in credential ecosystems, and why unlinkability is non-negotiable for use cases like transport and location-sensitive infrastructure.

Later, a technically grounded session titled “ZKProofs: From Crypto Potential to Regulatory Acceptance” will bring together Google, ETSI, and NIST to map out viable ZKP schemes, their mobile-readiness, and interoperability features. The goal is to bridge the gap between cryptographic innovation and institutional trust, and to align stakeholders around a roadmap for responsible, cross-border adoption and acceptance.

The day concludes with a multi-stakeholder roundtable moderated by DIDAS, with invitees from the ITU, the OpenWallet Foundation, LF Decentralized Trust, the OECD, UNHCR, the Swiss Confederation, the EU Commission, and other country delegates and potential funding partners, to explore long-term collaboration structures. This final session will address how to sustain PET development through ongoing working groups, interoperable governance, and shared funding models.

 

Public Sector & Multilateral Institutions

Swiss Confederation
European Commission
ITU (International Telecommunication Union)
OECD
UNHCR
SPRIND (Federal Agency for Disruptive Innovation, Germany)
EUDI Pilot Teams (various EU member states)

 

Research & Academia

EPFL – École Polytechnique Fédérale de Lausanne
C4DT – Center for Digital Trust (EPFL)
Johannes Kepler University Linz
Ethereum Research Community

 

Civil Society & Ecosystem Actors

DIDAS – Digital Identity and Data Sovereignty Association
Digital Society Association (Switzerland)
Human Colossus Foundation
Other invited civil society contributors

Private Sector & Standards Bodies

Google
Orange
Linux Foundation – Decentralized Trust Initiative
OpenWallet Foundation
ETSI – European Telecommunications Standards Institute
NIST – U.S. National Institute of Standards and Technology
LF Decentralized Trust

Core Themes

Privacy-enhancing technologies (PETs), ZKPs, unlinkability
Verifiable credentials, digital identity, selective disclosure
Trust infrastructure governance, interoperability, post-quantum security
E-democracy, civic trust, institutional resilience
Multi-stakeholder collaboration, sustainable funding, global alignment

This collaborative agenda reflects a global commitment to building privacy-preserving, interoperable, and inclusive digital ecosystems with shared responsibility across sectors.

Friday, 13. June 2025

FIDO Alliance

MSSP Alert: authID Integrates with Ping to Spread Passwordless Authentication


authID’s decision this month to integrate its biometric identity verification technology with Ping Identity’s PingOne DaVinci service is a necessary step at a time when humans continue to be the weakest security link for organizations and bad actors increasingly target passwords to gain access to corporate networks, according to Jeff Scheidel, vice president of operations for the Denver-based company.


Ars Technica: Coming to Apple OSes: A seamless, secure way to import and export passkeys


Apple OSes will soon transfer passkeys seamlessly and securely across platforms.

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to credential phishing and other attacks targeting passwords.


FIDO Seminar: Authentication, Identity and the Road Ahead

Overview

The FIDO Alliance and host sponsor Thales recently held a one-day seminar on authentication, identity and the road ahead.

Seminar sessions provided an exploration of the current state of authentication for workforce and consumer sign-ins, with a focus on FIDO and passkeys, including adoption status and case studies. The seminar also featured discussions on other relevant topics for IAM professionals, such as the latest in attacks and threats, identity verification technology advances, and post-quantum cryptography. Attendees had the opportunity to engage directly with authentication and identity experts through open Q&A, networking and demos.

View the presentations below:

Thursday, 12. June 2025

Oasis Open

Help Guide the Next Generation of XACML


The OASIS XACML Technical Committee (TC) is currently engaged in an effort to produce a successor to XACML version 3.0, and we’re looking for input from the community. The goals of this new version(s) are:

To modernize the language to make it more accessible to a wider audience,
To add new features to extend the expressibility of the language and support new use cases, and
To simplify the language where possible.

The headline change is the decision to abstract the core language to remove its dependence on XML and XML Schema, i.e., to make it syntax-agnostic. This will facilitate the use of other syntaxes for representing the core language. The TC has decided to define representations for at least JSON and YAML, while continuing to support XML. To reflect this expanded scope and modernized approach, the effort is being referred to as “XACML Next Gen” rather than “XACML 4.0.”

The XACML TC intends that implementations will be able to claim conformance with the core language by implementing one or any combination of the possible syntaxes. XML will not be required.

There is a strong desire in the XACML TC to find a new name for the core language to show that it is no longer tied to XML. Various suggestions are on the table, but there is no clear frontrunner at this time. The new name should lend itself to distinct acronyms for each supported syntax for simple identification. Regardless of the new name and version number for the core language, there is consensus in the TC that the XML representation of the core language will be known as XACML 4.0 in recognition of its antecedent. So the moniker “XACML 4.0” will reference only part of the TC’s eventual outputs.

The major structural change to the core language is the merging of PolicySet and Policy into a single construct that will carry the name “Policy.” A new Policy may contain embedded policies, policy references, rules, and variable definitions. This change removes some duplication from the core language where essentially the same thing was defined under separate names for both policy sets and policies, for example, PolicySetId and PolicyId. There is now only PolicyId. There are no longer separate policy combining algorithms and rule combining algorithms, just combining algorithms.

Current Changes In Progress

1. Support for JSON and YAML Policies

One of the most anticipated updates in XACML Next Gen is the addition of JSON and YAML as alternative policy representation formats. While XACML has traditionally been based on XML, these new formats aim to:

Simplify policy authoring by providing more concise and readable structures.
Improve integration with modern applications that rely on JSON-based APIs.
Reduce verbosity compared to XML, making policies easier to maintain.
Improve readability and auditability by allowing the substitution of many URIs with short string names using standardized and user-defined vocabularies.

Implications for Policy Writers:

Policies can now be expressed in JSON or YAML, reducing the learning curve for those unfamiliar with XML.
JSON and YAML formats align with modern infrastructure-as-code practices, allowing policies to be managed like other configuration files.

2. Simplified and More Efficient Policy Structure

To make policies easier to write and maintain, XACML Next Gen is introducing:

A. Flattened Policy Hierarchy

The distinction between Policy and PolicySet is being removed, creating a single structure that can contain both rules and sub-policies. This means fewer elements to track and simplifies how policies are nested and combined.

B. Common Structure for Obligations and Advice

The structure of an obligation or advice instance is practically the same, except for naming. Obligation and Advice will be replaced by a common Notice structure, with the semantic differences indicated by a boolean flag. Likewise for the various structures related to obligations and advice.

C. Targets Revised

Targets in policies have been changed to Boolean expressions, giving them the same expressive power and flexibility as Conditions in rules. Targets have been removed from rules; a Condition is sufficient.

D. Decluttering

CombinedDecision, ReturnPolicyIdList, and MustBePresent have been given sensible defaults so that they can usually be omitted, reducing clutter.

E. Global Variables for Reusability

Variables will be defined globally and be reusable across multiple policies, reducing redundancy and improving clarity. Policy writers will be able to import global variables without having to redefine them within each policy.

F. Composite Functions for Simpler Expressions

Policy authors will be able to define custom functions that can be reused across multiple rules, reducing repetition. Example: instead of repeating the same logical expression in multiple places, writers can define it once and reference it when needed.

3. Improved Policy Efficiency

A. Optimized Rule Evaluation

New ternary conditional functions (similar to a ? b : c in programming languages) allow policy writers to define logic more concisely.

B. Aggregate Functions for Policy Simplification

XACML Next Gen will introduce functions like min, max, sum, and average, enabling policy writers to perform calculations on groups of attributes without excessive nesting.

C. Shortcuts for Common Operations

New functions like empty-bag() and non-empty-bag() make it easier to check for missing attributes without verbose expressions.

D. JSONPath

Support for JSONPath will be added to the language. Like support for XPath, it will be optional to implement. Support for newer XPath versions will also be added.

4. Naming and Structural Changes

To better reflect its expanded scope beyond XML, there are ongoing discussions about renaming XACML to something more format-agnostic. Current discussion can be viewed [here].

Regardless of the naming decision, the XML version will continue to be referred to as XACML, ensuring backward compatibility.

XACML Next Gen is shaping up to be a more flexible, efficient, and modern policy definition framework. By introducing JSON and YAML, flattening policy structures, and adding global variables and composite functions, the new version aims to make policy authoring easier and less error-prone, while the addition of canonical string identifiers will significantly improve the readability and auditability of policy corpora.
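
As a rough illustration of how these changes could combine, here is a hypothetical flattened policy rendered as a Python dict (structurally equivalent to a JSON representation). Every element name below is invented for illustration; the TC has not finalized any concrete syntax:

    # A hypothetical XACML Next Gen policy as a Python dict (JSON-equivalent).
    # All names are illustrative only; no syntax has been standardized yet.
    policy = {
        "policyId": "medical-records-access",    # merged construct: no separate PolicySetId
        "combiningAlgorithm": "deny-overrides",  # one kind of combining algorithm
        "target": {                              # Targets are now Boolean expressions
            "function": "string-equal",
            "args": [{"attribute": "resource-type"}, "medical-record"],
        },
        "rules": [
            {
                "ruleId": "clinicians-may-read",
                "effect": "Permit",
                # Rules keep only a Condition; no separate rule Target.
                "condition": {
                    "function": "and",
                    "args": [
                        {"function": "string-equal",
                         "args": [{"attribute": "subject-role"}, "clinician"]},
                        {"function": "non-empty-bag",  # proposed shortcut function
                         "args": [{"attribute": "patient-consent"}]},
                    ],
                },
            },
        ],
        "policies": [],  # sub-policies may now be embedded directly in a Policy
    }

Note how a single construct carries both rules and embedded sub-policies, as described under "Flattened Policy Hierarchy" above.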

We Want Your Feedback

The XACML TC is eager to hear from the broader community as we move forward with this next-generation effort. Whether you’re a longtime implementer, a policy expert, or simply someone with a stake in access control, your input can help shape a modern, more accessible, and flexible standard. We’re especially interested in feedback on new feature requirements, potential use cases, naming ideas for the core language, and thoughts on the move toward syntax-agnostic design. If you’d like to get involved or share your perspective, please contact us via our GitHub project.

Authors

Steven Legg, Editor, OASIS XACML Technical Committee
[LinkedIn Profile]

William Parducci, Co-Chair, OASIS XACML Technical Committee
[LinkedIn Profile]



FIDO Alliance

Passwordless Authentication and the Rise of Passkeys: Expert Insights Podcast with Andrew Shikiar


Andrew Shikiar, Executive Director and CEO of the FIDO Alliance, joins us to discuss the shift from passwords to passkeys and the role of FIDO in driving secure, passwordless authentication. He explores the challenges of adoption, the importance of identity verification, and how cross-platform interoperability is accelerating passkey use. The conversation also touches on the impact of generative AI on cybersecurity and what the future holds for passkeys in building long-term resilience.

About Expert Insights:

Expert Insights saves you time and hassle by rigorously analyzing cybersecurity solutions and cutting through the hype to deliver clear, actionable shortlists. We specialize in cybersecurity. So, our focus is sharper, our knowledge is deeper, and our insights are better. What’s more, our advice is completely impartial.

In a world saturated with information, we exist to arm experts with the insights they need to protect their organization. That is why over 1 million businesses have used us to inform their cybersecurity research.

Listen to the podcast.


We Are Open co-op

Exploring AI's Role in Education: A Balanced Perspective from Six Think Pieces

Collaborating to respond to UNESCO’s call for think pieces

A couple of months ago, after seeing a UNESCO call for contributions, Doug began wrangling a group of thinkers and educators to respond to the call in a collaborative and open way. Naturally, I got involved, and thus we thought this story would be a good one for the WAO blog :)

There were six of us on that stormy night…

Bryan Alexander – an internationally known futurist, researcher, and writer with the popular blog and newsletter AI, academia, and the Future.
Helen Beetham – a researcher and consultant in digital education who has edited several standard texts including Rethinking Pedagogy for a Digital Age. Her articles on AI, education, and society can be found at imperfect offerings.
Doug Belshaw – co-founder of We Are Open Co-op, working at the intersection of systems thinking, digital literacies, and Open Recognition. Doug's writings can be accessed via his website.
Laura Hilliger – concept architect, open strategist, and co-founder of We Are Open Co-op. Her website contains links to her blog and newsletter.
Ian O'Byrne – Associate Professor of Literacy Education at the College of Charleston. He maintains an active presence through his website and weekly newsletter Digitally Literate.
Karen Louise Smith – Associate Professor in the Department of Communication, Popular Culture and Film at Brock University. She teaches courses related to social media, surveillance, and new media policy, and her writing can be accessed via her website.

The six of us met up online to chat about how we might collaborate to respond to UNESCO’s call, and decided that we would each write our own think pieces. We then met up regularly to chat about where we were and what ideas were floating around, and to get inspiration from one another. Once we had drafts, we each read the others' pieces, offering comments and suggestions.

Our finished pieces can be found at this linktree: https://linktr.ee/ai_future_education

After we submitted our think pieces to UNESCO, we decided to do a roundtable event hosted by Doug. Over 100 people signed up, with 50 more on the waitlist. Participants gathered to discuss the transformative potential of Artificial Intelligence (AI) in education. The event brought together educators, tech experts, and policymakers to dissect our six pieces on AI's future role in learning environments.

This blog post summarises our very nuanced discussion surrounding AI's role in education. If you’re interested in this type of thing, we’d recommend watching the full session.

1. Personalized Learning

One of the themes discussed was personalized learning facilitated by AI. While AI can functionally analyze student data to tailor curricula and pacing, we talked about the emotional and social dimensions of learning. The lack of empathy in AI systems hinders holistic education, which requires human interaction.

2. Equity and Access

We talked about the theory that AI could democratize access to quality education, but we highlighted significant challenges. Unequal access exacerbates existing disparities in educational opportunities, and it is not just infrastructure that is inequitable in our education system.

3. The Educator's Role

AI’s impact on educators’ roles was another theme in our conversation. While AI can handle routine tasks like grading, potentially reducing administrative burdens, there are concerns about over-reliance on technology displacing human interaction. We argued for understanding teachers as mentors, facilitators, guardians and creatives to try and make clear that “efficiency” is not a goal in education.

4. Privacy and Bias

We talked a lot about the ethical implications surrounding various aspects of AI. From data privacy concerns to algorithmic bias and transparency in AI decision-making, we stressed the importance of ethical guidelines and accountability measures.

5. Collaboration

We talked a bit about the potential for collaboration between educators and AI. We tried to think about what a complementary relationship with AI might look like for education. While AI might enhance some of our teaching tools, it cannot be allowed to overshadow the irreplaceable human element.

6. The Future of Education

In conclusion, we spoke about the fact that any technology and its implementation in education must be approached with caution and balance. Our conversation underscored that AI is not a panacea but a technology that exists within a particular context, and that like any technology, it’s how we use it that matters.

Go deeper:

Recording of the full session (video)
AI-generated chat summary
All of our think pieces are worth a close read. Jump in individually:
Bryan Alexander – Several futures for AI and education
Helen Beetham – The implications of ‘artificial intelligence’ for the right to education in equality and dignity
Doug Belshaw – Marching Backwards into the Future: AI’s Role in the Future of Education
Laura Hilliger – It is not the tool, it is the artist who sparks the revolution: The Importance of Art Education with or without AI
Ian O'Byrne – Amplifying Human Cognition: Artificial Intelligence as Mirror and Magnifier
Karen Louise Smith – Building warm expert expertise to mitigate against data harms in AI-powered edtech

Many thanks to Bryan Mathers of Visual Thinkery, who provided the illustrations included in this post. To see all of those that he drew based on the session, visit his website.

Wednesday, 11. June 2025

The Rubric

Exploring the SOL-ar System (did:sol, Part 2)

did:sol is the Solana DID method. Solana is an application-centric blockchain praised for its high throughput and composability. Today on the show we talk with Martin Riedel and Daniel Kelleher, co-editors and implementers of the did:sol specification.

References:
Civic: https://www.civic.com/ (x: @civickey)
Cryptid: https://github.com/identity-com/cryptid
DID Directory: https://diddirectory.com/
did:sol spec: https://g.identity.com/sol-did/
did:sol on diddirectory.com: https://diddirectory.com/sol
did:sol on...

Exploring the SOL-ar System (did:sol, Part 1)

did:sol is the Solana DID method. Solana is an application-centric blockchain praised for its high throughput and composability. Today on the show we talk with Martin Riedel and Daniel Kelleher, co-editors and implementers of the did:sol specification.

References:
Civic: https://www.civic.com/ (x: @civickey)
Cryptid: https://github.com/identity-com/cryptid
DID Directory: https://diddirectory.com/
did:sol spec: https://g.identity.com/sol-did/
did:sol on diddirectory.com: https://diddirectory.com/sol
did:sol on...

We Are Open co-op

AI Literacy or AI Literacies?

Exploring the plural, context-dependent, and socially-negotiated nature of new literacies

Over the past couple of months, we’ve been working on an ‘AI Literacy’ project with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. We’ve already published:

What does AI Literacy look like for young people aged 14–19?
What makes for a good AI Literacy framework?
Core Values for AI Literacy

In this post, we want to explore the tension we’ve felt between referring to ‘AI Literacy’ in the singular, versus referring to a plurality of ‘AI Literacies’. Ultimately, although our original brief used the singular form (as do many of our peers) we have decided to take the latter, plural, approach — for reasons we will explain below.

One very practical reason to emphasise ‘AI Literacies’ is that it is difficult to talk about ‘delivering’ a literacy. “Literacy” always raises the question of context: what does literacy mean to this particular person, in this particular setting, at this particular moment? What it means to be ‘AI literate’ is going to look very different for someone working in a corporate office job compared to a teenager using AI for a creative project. Additionally, there are multiple literate behaviours when we think about AI — for example, understanding the socio-economic reality of the AI landscape versus knowing how to prompt an LLM to get the kind of information or answer you are looking for.

AI Literacies are therefore both plural and context-specific. They are also socially-negotiated. Literate behaviours depend on the community with which an individual is interacting. This becomes evident through a few examples.

Image CC BY-ND Visual Thinkery for WAO

If you are a parent of teenagers, you will have experienced a time when they respond in a way which makes sense to them and their friends, but not to you. You are likely to have to ask them what they mean or use a resource like the Urban Dictionary. Other behaviours such as using a particular emoji might be hilarious for reasons you cannot quite comprehend.

These “rhetorics of communication” are an important part of literacy practices, especially in the digital realm. They constitute ways of interacting with other people within a techno-social system which itself privileges and foregrounds certain kinds of behaviours, while either explicitly or implicitly discouraging others. For example, contemporary chat apps allow you to see not only that a message has been delivered, but whether it has been read. The act of not reading a particular message may be seen in multiple lights: Is the person ignoring me? Are they mad at me? Are they offline in the forest?

Any time we are communicating in ways which are mediated by technologies, part of literate behaviour involves understanding the “affordances” that the technology provides as well as understanding how that technology might be shaping our behaviour.

If you were, for example, quickly texting your teenager while at work, you might send your text and then open a workplace chat window which looks and feels very much like the social one you have just been using. However, because the context is different — as well as, perhaps, both the demographic makeup and number of people in the chat — you act differently. Your literate behaviours are thus socially-negotiated, meaning that you vary your behaviour in different situations.

As we start to understand AI Literacies, we need to think about the most common way in which people experience generative AI — by prompting a Large Language Model (LLM) through a chat window. The chat window is a familiar technology, but the fact that there isn’t a human on the other side of it is not. Part of AI Literacies therefore involves exhibiting and modifying our behaviours based on our knowledge and experience of factors that surround this particular chat window.

Image CC BY-ND Visual Thinkery for WAO

With personal or workplace chats, what is outside of the frame informs literate behaviours inside the frame. Similarly, when we are interacting with AI, the more we know about what is outside the frame, the more we can develop appropriate literate behaviours inside the frame. Again, these literate behaviours are plural: will others be able to tell that you are using the outputs of an LLM? (will they mind?) They are based on context: should you trust the company behind the technology you are using? And they are socially-negotiated: are there environmental concerns of which you should be aware?

Angela Gunder’s Dimensions of AI Literacies provides a helpful way to think about these issues. Building on my work on the Essential Elements of Digital Literacies, Gunder’s framework sets out a series of overlapping dimensions that shape how people interact with AI. This approach supports the idea that AI Literacies are not a fixed set of skills, but a collection of practices negotiated within communities and shaped by context.

AI Literacies, like Critical Media Literacies, Digital Literacies, Information Literacies, Data Literacies, and a whole host of “new” literacies, should be considered to be fundamentally plural. What counts as “literate behaviours” are socially-negotiated based on context. Words and phrases, however, are important to describe what we mean. And that is why we will be referring to AI Literacies in the project we’re doing with the RIC for the BBC.

Coming soon

We are working on a public version of our landscape setting and framework for AI Literacies. We’ll be sharing that soon. In the meantime, follow our contributions to this space through https://ailiteracy.fyi/ and get in touch if you have projects and programmes that can benefit from our experience and expertise in education and technology.


Next Level Supply Chain Podcast with GS1

Why Your Amazon Ads Aren’t Working (And What the Best Brands Do Differently)


What if you could see what your customers are searching before they even hit 'buy'?

In today’s hyper-competitive marketplace, that’s exactly what the best brands are doing, and it’s giving them a serious edge.

In this episode, Chris Barnes, General Manager of Retail & Alternative Channels at Jungle Scout, joins hosts Reid Jackson and Liz Sertl to break down how brands are using real-time search, product, and shopper data to respond faster to market signals on Amazon and beyond.

Chris shares how companies are identifying market opportunities, protecting category share, and improving performance across pricing, inventory, and advertising, all by knowing what to look for in the data.

In this episode, you’ll learn:

How to use search and shopper behavior to guide product strategy

Why category management looks different in the age of marketplaces

The data brands overlook and how it impacts sales performance

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:07) Chris Barnes on his career journey

(04:38) Transforming strategy with data and intelligence

(05:55) Category management in-store vs. online

(09:23) AI’s impact on search and consumer behavior

(13:04) Dude Wipes’ growth and success

(15:35) Leveraging data to understand consumer needs

(19:17) Power of data analytics in product development

(20:48) Top strategies for maximizing growth

(27:37) The future of agencies and AI in business

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Chris Barnes on LinkedIn
Check out Jungle Scout

Monday, 09. June 2025

Velocity Network

Reinventing How Career Records Are Shared

The post Reinventing How Career Records Are Shared appeared first on Velocity.

Friday, 06. June 2025

FIDO Alliance

MobileIDWorld: Mastercard Launches Passkey Authentication in Europe, Achieves 50% E-commerce Adoption


Mastercard has launched advanced payment passkeys across Europe as part of its initiative to enhance online transaction security and replace traditional passwords. The company reports that its tokenization and passkey implementation has achieved nearly 50 percent adoption in European e-commerce transactions, building on its successful passkey deployment in Latin America earlier this year.

The payment technology company’s new security measures arrive at a critical time, as data shows that one in four business owners in Europe has been targeted by scammers. A quarter of these businesses express concern about their ability to recover from potential cyber attacks. The expansion follows the broader industry trend toward passwordless authentication, with the FIDO Alliance reporting significant growth in enterprise passkey adoption.


PYMNTS: OneSpan Acquires Passwordless Authentication Specialist Nok Nok Labs


OneSpan announced Thursday (June 5) its acquisition of Nok Nok Labs, a provider of FIDO passwordless software authentication solutions.

OneSpan said joining forces with Nok Nok enables the company to provide customers worldwide with a comprehensive authentication portfolio, available on-premises or in the cloud. This combined offering now includes support for OTP, FIDO, software, and hardware solutions, such as Digipass, FIDO2 protocols, and Cronto solutions for transaction signing.

Victor Limongelli, CEO at OneSpan, described the acquisition as a “bold step toward providing customers with maximum choice in authentication.” He added that the company is evolving its entire authentication platform to include FIDO standards, believing that passwordless authentication is an important part of the future. With Nok Nok’s technology and FIDO expertise, OneSpan aims to offer a comprehensive and versatile customer authentication solution.

Phillip Dunkelberger, president and CEO at Nok Nok, noted that joining OneSpan allows them to bring their vision, rooted in open standards like FIDO, to a broader audience via OneSpan’s global reach. Andrew Shikiar, executive director and CEO of the FIDO Alliance, said Nok Nok has been a “trailblazer” in the FIDO ecosystem.


PCMag: This Password Manager Now Lets You Create an Account Without a Password


Dashlane lets you open an account with a FIDO2-spec USB security key as your authentication.

One of the better-known password managers is now inviting people to try it without having to create yet another password. Instead, Dashlane is now inviting people to try opening a new account secured only by a USB security key compliant with the “FIDO2” authentication standard; FIDO being short for Fast Identity Online.

Emphasize “try.” The company’s support page for this “early access” program notes that it supports only Google Chrome and Microsoft Edge, not Dashlane’s mobile apps. For now, it doesn’t let you create an account secured only by a passkey, the form of FIDO2 security more people use. 

Wednesday, 04. June 2025

Blockchain Commons

Blockchain Commons Signs No Phone Home Initiative


At Blockchain Commons, we design open infrastructure that prioritizes privacy, autonomy, and human dignity. That’s why I support and personally signed the No Phone Home Initiative. It is not just a position, it’s a call to preserve a foundational principle of decentralized identity: Credentials must be verifiable without enabling surveillance!

Why “No Phone Home” Matters

The problem is simple: when verifying a digital credential such as a mobile driver’s license or a diploma, many systems require contacting the original issuer. This creates a digital trail of who is verifying what, when, and why. That trail can be used to profile and surveil, allowing issuers to track credential holders without their knowledge.

Manu Sporny, in an email to the W3C Credentials Community Group (June 3, 2025), clarified the stakes:

“Retrieving a status list could be misinterpreted as ‘phoning home’ … but it’s not anywhere near the same level of “phoning home” that contacting the issuer and telling them ‘I’ve got Steve here at booze-hut.com, is he over 21?’ achieves.”

But the threat of direct identity leak is just the tip of the iceberg. That’s because credential presentation isn’t a one-off event. It’s recurrent. Even when identifiers or interactions are pseudonymous, repeated verifications leak sensitive metadata, allowing issuers or third parties to correlate time, location, and use-pattern metadata into behavioral profiles.

The Decentralized Identity Foundation talks about some of this in “Nearly 100 Experts Are Saying ‘No Phone Home’”:

“The risks multiply when applied across domains. Federated protocols developed for use within organizations become surveillance systems when used between different sectors or jurisdictions. Phone home capabilities that seem innocuous within a single domain can become tools for tracking and control when applied broadly without aggressive oversight and fine-tuning.”

Problematically, this is how these systems are designed to work! As Kim Hamilton Duffy says in the first of a series of articles on mDL privacy that she’s currently working on:

“This isn’t an unintended consequence—it’s an architectural feature that can trivially enable persistent record-keeping of when and where you use your credentials, creating patterns that can be analyzed long after the original transaction by unknown third parties.”

This also reveals yet another danger: how “normal” it feels to let credential issuers remain silently in the loop.

Revocation Without Surveillance

Some argue that checking for the revocation of a credential requires phoning home. But that’s a false dilemma. In the same W3C thread, Sporny noted:

“There are a few ways to retrieve a status list without directly contacting the issuer that use commonly deployed web technology.”

Technical mitigations discussed and developed by the community include:

Large, pseudonymous status lists (e.g., Bitstring Status List)
Use of CDNs or file mirrors to avoid direct issuer contact
Oblivious HTTP (OHTTP) for unlinkable status fetching
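
As a toy illustration of the first mitigation, the Python sketch below checks a credential's revocation bit entirely from a cached or mirrored copy of a status list, so the issuer never learns which credential was checked. It assumes a plain base64url-encoded, GZIP-compressed bitstring with most-significant-bit-first indexing, which approximates but does not exactly reproduce the W3C Bitstring Status List encoding:

    import base64
    import gzip

    def bit_is_set(encoded_list: str, index: int) -> bool:
        """Check one status bit in a base64url-encoded, GZIP-compressed
        bitstring fetched from a CDN or mirror rather than the issuer."""
        padding = "=" * (-len(encoded_list) % 4)
        bits = gzip.decompress(base64.urlsafe_b64decode(encoded_list + padding))
        byte_index, bit_offset = divmod(index, 8)
        return bool((bits[byte_index] >> (7 - bit_offset)) & 1)

    # Self-contained demo: build a 128-bit list with bit 3 set, then check it.
    raw = bytearray(16)
    raw[0] |= 1 << (7 - 3)
    encoded = base64.urlsafe_b64encode(gzip.compress(bytes(raw))).decode().rstrip("=")
    assert bit_is_set(encoded, 3) and not bit_is_set(encoded, 4)

Because every verifier downloads the same large list, the issuer cannot tell which index (and hence which holder) was of interest, which is the property these mitigations aim to preserve.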

There are still issues with these potential mitigations. For example, Kyle Den Hartog (Pryvit NZ) raised concerns about status list misuse:

“An issuer creates a bitstring list of sufficiently large size, but only includes 1 active credential index per bitstring list. All other indexes are spoofed. The rest of the list would look active to the holder/verifiers but could still be a tracking mechanism by the issuer.”

But these edge-case attacks reinforce why the core architecture must be surveillance-resistant by default.

Privacy isn’t the only issue with revocation checking: it’s also a structural risk. If we tie credential validity to live status checks, we quietly shift power from holders to issuers. It becomes a form of dependency injection, one that contradicts the goal of self-sovereign identity.

Not Just Technical: It’s Ethical

This isn’t just a technical issue. It goes to the ethical heart of self-sovereign identity design.

Daniel Hardman offered this framing in a related thread on edge cases:

“Verifiable credentials verify without issuer coordination; that is what the ‘verifiable’ in ‘verifiable credential’ means.”

Joe Andrieu argued in the same May 2025 thread:

“The identity system that wins is going to be the one we can use in any circumstance by anyone. … It’s my wallet. I expect it to serve me, as a user agent. I do not accept that it might also serve the state as a surveillance platform.”

At Blockchain Commons, we agree. The ability to verify credentials offline, without depending on a central service, is essential for resilience and civil liberties.

A report prepared for the American Civil Liberties Union (ACLU) put it clearly by requiring “No Issuer ability to track via phone home mechanism” and saying:

“One way a digital ID can differ from physical ID is that it can enable the issuers of the digital ID to track where, when, and to whom one shows their ID. This tracking can reveal very private and sensitive information about the digital ID holder — namely, when and where, online or off, they present their ID. Standards and technologies should be designed so that the issuer (or any of its agents or contractors) cannot engage in any of these forms of tracking.”

Emergencies Are Not an Excuse

Some use cases such as disaster response or first responder tracking have prompted discussions around consent-based “check-ins.” These are complex and worthy of consideration. But the VC Data Model 2.0 spec is clear:

“Credential status specifications MUST NOT enable tracking of individuals, such as an issuer being notified (either directly or indirectly) when a verifier is interested in a specific holder or subject. Unacceptable approaches include “phoning home …”

But compliance doesn’t equal safety. Even digital credentials that conform to existing standards—such as ISO 18013-5—can still include implementation choices that enable surveillance. Privacy must be baked into system design, not retrofitted through policy disclaimers.

As Alexis Hancock of the Electronic Frontier Foundation (EFF) warned (via State Scoop):

“We have to act now because governments are enthusiastic about digital ID, but if we don’t pin down these basic principles now, it’s going to be a problem later.”

We can build systems that are opt-in, purpose-specific, and out-of-band, without compromising the privacy baseline for everyone else.

Our Commitment

At Blockchain Commons, we believe decentralized identity should empower the individual, not quietly report on them. We are actively designing open standards such as Gordian Envelope and dCBOR to support truly private, verifiable, interoperable credentials.

We support “No Phone Home” because surveillance should never be the default. And we invite others to join us in making sure the future of identity remains decentralized, private, and just.

Tuesday, 03. June 2025

Digital Identity NZ

Introducing Our New Executive Director | May Newsletter

Welcome to the May edition of the Digital Identity NZ newsletter. This month, we're excited to introduce our new Executive Director, share insights from Techweek25’s Foundations for Tomorrow Event and the launch of Air New Zealand’s new passport capture service. The post Introducing Our New Executive Director | May Newsletter appeared first on Digital Identity New Zealand.

Kia ora,

Welcome to the May edition of the Digital Identity NZ newsletter. This month, we’re excited to introduce our new Executive Director and share insights from Techweek25’s Foundations for Tomorrow Event.

Andy Higgs Appointed as New Executive Director

We’re pleased to announce that Andy Higgs has joined Digital Identity NZ as our new Executive Director.

Andy brings over 20 years of experience across digital identity, AI strategy, and innovation in both public and private sectors. His background includes leadership roles at Futureverse and Centrality, where he focused on self-sovereign identity solutions and ecosystem partnerships.

His experience extends to policy development with the Department of Internal Affairs and Ministry of Business, Innovation and Employment, including work on the Digital Identity Services Trust Framework and the Consumer Data Right.

Andy’s collaborative approach will be valuable as DINZ continues to work alongside members to build a trusted digital identity ecosystem for everyone in Aotearoa.

Digital Public Infrastructure: Foundations for Tomorrow Event

During Techweek25, government, industry, and public sector leaders gathered at Parliament’s Legislative Chamber to discuss how digital public infrastructure (DPI) could transform service delivery across New Zealand.

Key takeaways for the digital identity community:

Ministerial vision: Hon Judith Collins KC announced plans for an all-of-government app allowing citizens to store digital credentials, receive notifications, and access services in one secure digital space.
  Economic benefits: Pete Herlihy from AWS highlighted that digital identity is one of four core components of DPI that can deliver significant economic growth—between 1-2% of GDP in developed nations.
  Human-centered approach: Deloitte’s Adithi Pandit emphasised how unified digital infrastructure could enable more joined-up social services and reduce fragmentation.
  Implementation plans: Public Service Commissioner Sir Brian Roche indicated a move toward greater centralisation with prescribed platforms and standards to make digital infrastructure a low-cost utility.
  Industry perspective: Xero founder Rod Drury called for greater urgency in digital identity implementation, suggesting New Zealand could leverage its small size to move quickly and “solve digital identity by Christmas.”

Read the full event recap here.

Member News

Our DINZ community continues to grow! We’re delighted to welcome POLipay as a member and look forward to featuring and engaging them in our ecosystem.

See all organisation members here.

Stay Connected

Thank you for being part of our community. We look forward to sharing more updates next month. 

Ngā mihi nui,

The team at Digital Identity NZ

Read full news here: Introducing Our New Executive Director | May Newsletter

SUBSCRIBE FOR MORE

The post Introducing Our New Executive Director | May Newsletter appeared first on Digital Identity New Zealand.


Blockchain Commons

Musings of a Trust Architect: Fair Witnessing in a Decentralized World


ABSTRACT: “Fair Witnessing” is a new approach for asserting and interpreting digital claims in a way that mirrors real-world human trust: through personal observation, contextual disclosure, and progressive validation. It can be implemented with the decentralized architecture of Gordian Envelopes to allow individuals to make verifiable statements while balancing privacy, accountability, and interpretability. At its core, fair witnessing is not about declaring truth, it’s about showing your work.

In the early days of decentralized identity, we referred to what we were working on as “Verifiable Claims.” The idea was simple: let people make cryptographically signed statements and allow others to verify them. But something unexpected happened. People assumed these claims would settle arguments or stop disinformation. They saw the term “verifiable” and equated it with “truth.”

The reality was more modest: we could verify the source of a claim but not its accuracy. We could assert that a claim came from a specific person or organization (or even camera or other object) but not whether that claim was unbiased, well-observed, or contextually complete.

This misunderstanding revealed a deeper problem: how do we represent what someone actually saw and how they saw it, in a way that honors the complexity of human trust?

A Heinleinian Inspiration

In Stranger in a Strange Land, Robert Heinlein described a special profession: the Fair Witness. A Fair Witness would be trained to observe carefully, report precisely, make no assumptions, and avoid bias. If asked what color a house was, a Fair Witness would respond, “It appears to be white on this side.”

It is this spirit we want to capture to fulfill the promise of the original verifiable claims.

A Fair Witness in our digital era is someone who not only asserts a claim but also shares the conditions under which it was made, including context, methodology, limitations, and bias:

What were the physical conditions of the observation?
Was the observer physically present?
Did they act independently?
What interests or leanings might have shaped their perception?
How did they minimize those biases?

These are not just nice-to-haves. They are necessary components of evaluating a claim’s credibility.

Beyond Binary Trust

Fair witnessing challenges binary notions of trust. Traditional systems ask a “yes” or “no” question: do you trust this certificate? This issuer?

But trust is rarely binary like this in the real world. It is layered, contextual, and progressive. The claim made by a pseudonymous environmental scientist might start out with low trust but could grow in credibility as:

They reveal their professional history.
Others endorse their work.
They disclose how they mitigated their potential biases.

Trust builds over time, not in a single transaction. That’s progressive trust.

Trust as a Nested Statement

To marry a fair-witness claim to the notion of progressive trust requires the nesting of information. As shown in the example of the environmental scientist, the witnessing of an observation gains weight as context is added: turning the scientist’s claims into a fair-witness statement requires collecting information about who the scientist is, what their training is, and what their peers think of them.

But as noted, progressive trust isn’t something that occurs in a single transaction: it’s revealed over time. We don’t want it to all be revealed at once, because that could result in information overload for someone consulting a claim and could have privacy implications for the witness.

A progressive trust model of fair witnessing requires that you show what you must and that you withhold what’s not needed—until it is.

Privacy and Accountability, Together

This model strikes a crucial balance. On one hand, it empowers individuals (fair witnesses) to speak from experience without needing permission from a centralized authority. On the other hand, it allows others to verify the integrity of the claim without requiring total exposure.

There are numerous use cases:

You can prove you were trained without revealing your name.
You can demonstrate personal observation without revealing your exact location.
You can commit to a fact today and prove you knew it later.

Fair Witnessing with Gordian Envelope

The demands of Fair Witnessing go beyond the capabilities of traditional verifiable credentials (VCs), primarily because VCs can’t remove signed information while maintaining its validation—an ability that is critically important if you want to nest information for revelation over time.

Fortunately, a technology already exists that provides this precise capability: Blockchain Commons’ Gordian Envelope, which allows for: the organized storage of information; the validation of that information through signatures; the elision of that information; the continued validation of the information after elision; and the provable restoration of that information.

Any subject, predicate, or object in Gordian Envelope can itself be a claim, optionally encrypted or elided. This enables a deeply contextual, inspectable form of expression.

For example:

Alice could make a fair-witness observation, which would be an envelope.
Information on the context of Alice’s assertion can be a sub-envelope.
A credential for fair witness training can be a sub-envelope.
Endorsements of Alice’s work as a fair witness can be sub-envelopes.
Endorsements, credentials, and even the entire envelope can be signed by the appropriate parties.
Any envelope or sub-envelope can be elided, without affecting these signatures and without impacting the ability to provably restore the data later.

It’s progressive trust appropriate for use with fair witnessing in an existing form!
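
To see why digest-preserving elision works, consider the toy sketch below. It is not the actual Gordian Envelope API (which is built on dCBOR and a richer subject/assertion model); it only illustrates the underlying idea: a parent’s digest is computed over its children’s digests, so a sub-envelope can be replaced by its digest without invalidating a signature over the root.

    import hashlib
    import json

    def node_digest(node):
        """An elided node carries only its precomputed digest; a branch
        hashes its children's digests; a leaf hashes its value."""
        if isinstance(node, dict) and "elided" in node:
            return bytes.fromhex(node["elided"])
        if isinstance(node, dict):
            # Hashing (key, child-digest) pairs means swapping a child for
            # its digest leaves the parent digest -- and any signature over
            # that digest -- intact.
            material = b"".join(
                k.encode() + node_digest(v) for k, v in sorted(node.items())
            )
            return hashlib.sha256(material).digest()
        return hashlib.sha256(json.dumps(node).encode()).digest()

    def elide(node, key):
        """Return a copy of `node` with one assertion reduced to its digest."""
        redacted = dict(node)
        redacted[key] = {"elided": node_digest(node[key]).hex()}
        return redacted

    # Hypothetical fair-witness observation with nested context.
    observation = {
        "claim": "The house appears to be white on this side.",
        "context": {"present": True, "method": "direct observation"},
        "training": "Fair Witness certification",  # withheld until needed
    }
    root = node_digest(observation)  # this digest is what Alice would sign
    assert node_digest(elide(observation, "training")) == root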

Toward a New Epistemology

Being a Fair Witness isn’t about declaring truth. It’s about saying what’s known, with context, so others can assess what’s true. Truth, in this model, is interpreted, not imposed. A verifier—or a jury—decides if a claim is credible, not because a central authority says so, but because the Fair Witness has provided information with sufficient context and endorsements.

In other words, fair witnessing is not about what is true, but about how we responsibly say what we believe to be true—and what others can do with that.

This is epistemology (the theory of knowledge) that’s structured as a graph. It’s cryptographically sound, privacy-respecting, and human-auditable. It reflects real-world trust: messy, contextual, and layered. By modeling that complexity rather than flattening it, we gain both rigor and realism.

Conclusion

In a world of machine-generated misinformation, ideological polarization, and institutional distrust, we must return to the foundations: observation, context, and human responsibility.

Fair witnessing offers a new path forward—one that is verifiable, privacy-respecting, and grounded in how humans actually trust.

Learn more: Progressive Trust and Gordian Envelope.

Monday, 02. June 2025

DIF Blog

Nearly 100 Experts Are Saying "No Phone Home"


After decades of cautiously watching from the sidelines, governments around the world have started investing in, rolling out, and regulating digital identity systems on aggressive timelines. These foundational changes to government infrastructure and the economy are happening largely outside public awareness, despite their generational consequences for privacy.

Digital identity systems implemented by governments today will shape privacy for decades. Whatever ecosystems and technical architectures are established in the coming years could ossify quickly, and it would take enormous political will to make changes at such a foundational level if society develops buyer's remorse once the ripple effects become clear.

That's why nearly 100 experts across technology, policy, and civil liberties have united around one principle: digital identity systems must be built without latent tracking capabilities that could enable ubiquitous surveillance. Thus, the nophonehome.com petition.

Who's Behind This

Civil society groups working on legal advocacy and industry oversight (ACLU, EFF), cybersecurity experts (including Bruce Schneier), privacy-by-design software companies of various sizes (Brave, many DIF members), and experts from university faculties (Brown, Columbia, Imperial College London) all signed on. The list includes authors of collaborative open standards, chief executives, state privacy officers, and other public servants. This is not a coalition of "activists" so much as a broad coalition of experts and policy-watchers sounding an alarm about consequential decisions passing largely unnoticed by the average citizen and end-user.

The breadth of this coalition reflects widespread concern about the technical and policy implications of embedded tracking capabilities.

What "Phone Home" Means

As a general rule, "phone-home" is a shorthand for architectural principles of tracking enablement (just as "no phone-home" refers to tracking mitigation, broadly speaking). When a verifier of credentials interacts directly with the credential's issuer—even if just to check validity or revocation status—they are "phoning" the credential's "home." This opens the subject and/or the holder of that credential to privacy risks, no matter how well the request is anonymized or handled. These API connections create data that can be combined, correlated, and abused, especially when verifiers share information or when issuers abuse their role.

The risks multiply when applied across domains. Federated protocols developed for use within organizations become surveillance systems when used between different sectors or jurisdictions. Phone home capabilities that seem innocuous within a single domain can become tools for tracking and control when applied broadly without aggressive oversight and fine-tuning. Over time, little mismatches and slippages in how these protocols work get exploited and stretched, amplifying glitches.

In the worst-case scenario, some systems enable real-time revocation decisions, giving issuers—potentially governments—immediate control over citizens' ability to access services, travel, or participate in society. A natural tendency to "over-request" foundational documents in situations where such strong identification is unjustified is amplified by familiarity, lack of friction, and other UX catnip; all the SHOULDs in the world won't stop verifiers from doing it. And verifiers over-asking without also providing a fallback or "slow lane" can make a sudden or temporary unavailability of foundational credentials painful or even exclusionary. The side-effects and externalities pile up dangerously in this industry!

Technologists see these kinds of capabilities (phone-home of any kind, remote revocation, low-friction foundational identity requests) like loaded guns in Act 1 of a Chekhov play: "If this capability exists within a digital identity system, even inactively, it will eventually be misused."

The Scale and Timing Problem

Most foundational identity systems being implemented for national-scale deployment include system-wide phone home tracking capabilities, either actively or latently. Many policymakers involved in these rollouts are not even aware of the tracking potential built into the standards they are adopting.

Four factors make this moment critical:

Scale of deployment: These systems will serve billions of users across developed nations, effectively replacing physical credentials.
Precedent-setting effects: When one jurisdiction adopts tracking-enabled systems, it influences global practices and standards.
Infrastructure persistence: Technical decisions made today will persist for decades, becoming prohibitively expensive to change once embedded.
Mission creep inevitability: Capabilities developed for legitimate purposes like fraud prevention naturally accrue new private-sector and/or public-sector use-cases over time due to natural market pressures. Today’s private-sector usage is tomorrow’s public-sector secondary data market.

The Fallacy of “Privacy by Policy”

The fundamental problem with latent tracking capabilities is that policies change, but technical architecture persists. If a system has surveillance capability—even if unused—it will eventually be activated. Emergencies, changing administrations, or shifting political priorities can quickly justify "pressing the button" to enable widespread tracking.

The solution is simple: they cannot press a button they do not have.

Consider AAMVA's recent decision to prohibit the "server retrieval" capability throughout the U.S.—a positive step that we welcome. However, most low-level implementations (e.g. core libraries) will likely implement the entire specification and leave it to the last-mile implementers to honor (or not) this policy. As an incubator of new specifications and prototypes, DIF feels strongly that jurisdiction-by-jurisdiction policies is just "turning off" what the specification still instructs software to implement for later policies to turn back on at the flick of a switch. We believe the underlying ISO specification needs to remove "server retrieval" completely, lest every authority in the U.S. remain one emergency away from activating broad, identity-based surveillance of all citizens.

Privacy-Preserving Alternatives Exist

The choice between security and privacy is false. Offline-first verification operates without server communication—the credential contains cryptographic proofs that can be validated independently. The ISO 18013-5 standard itself includes "device retrieval" mode, a privacy-preserving alternative that functions entirely offline.

Even credential revocation can be implemented without phone home capabilities. Privacy-preserving revocation systems are in production today, proving that security and privacy can coexist.
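
As a minimal sketch of what offline-first verification can look like, assume a JWT-encoded credential checked with PyJWT against an issuer key that was provisioned ahead of time (the issuer identifier and algorithm here are illustrative). Every check is local; nothing is sent to the issuer at presentation time.

    import jwt  # PyJWT, with the 'cryptography' extra for ES256

    def verify_offline(credential_jwt: str, pinned_issuer_key) -> dict:
        """Validate signature, expiry, and issuer entirely on-device."""
        return jwt.decode(
            credential_jwt,
            key=pinned_issuer_key,
            algorithms=["ES256"],
            issuer="did:example:issuer",        # hypothetical identifier
            options={"require": ["exp", "iss"]},
        )

Short validity windows or status proofs embedded in the credential can then stand in for live revocation checks, which is the design trade the paragraph above points to.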

The technology exists. The standards exist. What has been missing is commitment to prioritize privacy over the operational convenience of centralized tracking.

Moving Forward

Awareness is growing. We welcome developments like AAMVA's prohibition of server retrieval, but more work is needed across the broader digital identity ecosystem to eliminate latent surveillance capabilities entirely.

The Decentralized Identity Foundation develops standards that prioritize privacy, supports implementations that respect user autonomy, and advocates for technical architectures that prevent tracking and add friction to data misuse. Our membership includes many technologists and vendors designing tracking-free alternatives for these and other use cases.

We encourage you to read the full No Phone Home statement at https://nophonehome.com. Whether you are building, deploying, or using these systems, your voice matters at this critical juncture.

The question is not whether we can build privacy-preserving digital identity—it is whether we will choose to do so. Let's build it right.

The Decentralized Identity Foundation (DIF) is an engineering-driven organization focused on developing the foundational elements necessary to establish an open ecosystem for decentralized identity and ensure interoperability between all participants. Learn more at identity.foundation.

Thursday, 29. May 2025

Oasis Open

Data Model for Lexicography Approved as an OASIS Standard


OASIS Members Advance Digital Lexicography with an Interoperable Data Model for Dictionaries

Boston, MA – 29 May 2025 – Members of OASIS Open, the global open source and standards organization, have approved the Data Model for Lexicography (DMLex) Version 1.0 as an OASIS Standard, a status that signifies the highest level of ratification. Developed by the OASIS Lexicographic Infrastructure Data Model and API Technical Committee (LEXIDMA TC), DMLex v1.0 establishes a groundbreaking framework for internationally interoperable lexicographic work, advancing innovation in digital dictionaries, language services, and related industries. 

“We’re incredibly proud of what we’ve achieved with DMLex v1.0. This is a life-changing milestone for lexicography, paving the way for a new level of digitisation and for truly innovative applications,” said Michal Měchura, Chair of the OASIS LEXIDMA TC. “By providing a common framework for structuring and exchanging lexicographic resources, DMLex empowers language documentors around the world to manage their content more effectively, to collaborate and to build smarter language technologies.”

Lexicography has undergone a profound transformation in the digital age, with dictionaries now being compiled from language corpora and consumed through web platforms, mobile apps, and integrated into search engines, writing tools, and machine translation software. However, legacy data models have been hampering further innovation. DMLex v1.0 addresses these challenges by introducing a modular, IT-friendly, and content-rich data model designed to meet the needs of both lexicographers and technology developers. DMLex has been designed to be easily and straightforwardly implementable in XML, JSON, RDF, NVH, as a relational database, and as a Semantic Web triplestore.
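
To illustrate the “IT-friendly” shape of the model, here is a minimal sketch of a DMLex-style entry in a JSON serialization, written as a Python literal. The field names follow the model’s core concepts (entry, headword, part of speech, senses with definitions and examples), but the normative element names and serializations are the ones defined in the specification itself.

    # Hypothetical DMLex-style entry; consult the DMLex v1.0 spec for the
    # normative element names and JSON serialization.
    entry = {
        "headword": "run",
        "partOfSpeech": "verb",
        "senses": [
            {"definition": "to move at a speed faster than a walk",
             "examples": ["She runs every morning."]},
            {"definition": "to operate or function",
             "examples": ["The engine runs smoothly."]},
        ],
    }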

OASIS encourages widespread adoption of DMLex v1.0 and invites feedback from lexicographers, developers, and other stakeholders to further enhance its capabilities. Participation in the LEXIDMA TC is open to all through membership in OASIS. For more information, visit the TC homepage and read Michal Měchura’s blog post exploring the goals and impact of DMLex.

The post Data Model for Lexicography Approved as an OASIS Standard appeared first on OASIS Open.

Tuesday, 27. May 2025

FIDO Alliance

Biometric Update: 10 million passkeys registered for Mercari market app amid phishing crisis

Mercari, the Japanese e-commerce company behind the Mercari marketplace, has surpassed 10 million registered users of passkeys for authentication.



Biometric Update: Yubico simplifies passwordless


Yubico, a provider of hardware authentication security keys, has announced the expanded availability of YubiKey as a Service to all countries in the European Union.

This builds upon the company’s existing reach in markets such as the UK, U.S., India, Japan, Singapore, Australia and Canada. In addition, Yubico has expanded the availability of YubiEnterprise Delivery across 117 new locations around the world.

This brings the total to 199 locations (175 countries and 24 territories) and more than doubles existing delivery coverage of YubiKeys to both office and remote users in a fast and turnkey way. “Enterprises today are facing evolving cyber threats like AI-driven phishing attacks,” said Jeff Wallace, senior vice president of product at Yubico.


PYMNTS: Entersekt and Stanchion Team to Enhance Payment Integration


Authentication software company Entersekt has launched a partnership with South Africa-based PayTech solution provider Stanchion.

The partnership is aimed at “enhancing payment integration capabilities and delivering cutting-edge solutions to financial institutions worldwide,” the companies said in a Wednesday (May 21) news release.

The collaboration combines Stanchion’s tools for “modernizing, transforming, and accelerating innovation within payment systems” with Entersekt’s 3-D Secure payment authentication solution, which provides transaction authentication across all three domains: the merchant acquirer domain, the card issuer domain and the interoperability domain.


Info Security Buzz: From Passwords to Passkeys: The Future of Digital Identity Protection


Passwords have been used as the first line of defense in protecting one’s digital identity, but they are fast becoming obsolete due to rampant identity theft. With security breaches increasing across platforms, passwords on their own offer little protection, which calls for an easier way to prevent account theft.

It is equally important to recognize the rise of passkeys as they help a great deal in bolstering digital identity protection.

Thursday, 22. May 2025

We Are Open co-op

Core Values for AI Literacy

Embedding Ethics for Audiences aged 14–19

WAO is currently working with the Responsible Innovation Centre for Public Media Futures (RIC), which is hosted by the BBC. The project, which you can read about in our kick-off post, is focused on research and analysis to help the BBC create policies and content to help improve the AI Literacy skills of young people aged 14–19.

We’re now at the stage where we’ve reviewed academic articles and resources, scrutinised frameworks, and reviewed input from over 40 experts in the field. They are thanked in the acknowledgements section at the end of this post.

One of the things that has come up time and again is the need for an ethical basis for this kind of work. As a result, in this post we want to share the core values that inform the development of our (upcoming) gap analysis, framework, and recommendations.

Public Service Media Values

Public Service Media (PSM) organisations such as the BBC have a mission to “inform, educate, and entertain” the public. The Public Media Alliance lists seven PSM values underpinning organisations’ work as being:

Accountability: to the public who fund it and hold power to account
Accessibility: to the breadth of a national population across multiple platforms
Impartiality: in news and quality journalism and content that informs, educates, and entertains
Independence: both in terms of ownership and editorial values
Pluralism: PSM should exist as part of a diverse media landscape
Reliability: especially during crises and emergencies and tackling disinformation
Universalism: in their availability and representation of diversity

These values are helpful to frame core values for the development of AI Literacy in young people aged 14–19.

AI Literacy Core Values

Using the PSM values as a starting point, along with our input from experts and our desk research, we have identified the following core values. These are also summarised in the graphic at the top of this post.

1. Human Agency and Empowerment

AI Literacy should empower young people to make informed, independent choices about how, when, and whether to use AI. This means helping develop not just technical ability, but also confidence, curiosity, and a sense of agency in shaping technology, rather than being shaped by it (UNESCO, 2024a; Opened Culture, n.d.). Learners should be encouraged to question, critique, adapt, and even resist AI systems, supporting both individual and collective agency.

2. Equity, Diversity, and Inclusion

All young people, regardless of background, ability, or circumstance, should have meaningful access to AI Literacy education (Digital Promise, 2024; Good Things Foundation, 2024). Ensuring this in practice means addressing the digital divide, designing for accessibility, and valuing diverse perspectives and experiences. Resources and opportunities must be distributed fairly, with particular attention to those who are digitally disadvantaged or underrepresented.

3. Critical Thinking and Responsible Use

Young people should be equipped to think critically about AI, which means evaluating outputs, questioning claims, and understanding both the opportunities and risks presented by AI systems. In addition, young people should be encouraged to understand the importance of responsible use, including understanding bias, misinformation, and the ethical implications of AI in society (European Commission, 2022; Ng et al., 2021).

4. Upholding Human Rights and Wellbeing

Using a rights-based approach — including privacy, freedom of expression, and the right to participate fully in society — helps young people understand their rights, navigate issues of consent and data privacy, and recognise the broader impacts of AI on wellbeing, safety, and social justice (OECD, 2022; UNESCO, 2024a).

5. Creativity, Participation, and Lifelong Learning

AI should be presented as a tool for creativity, collaboration, and self-expression, not just as a subject to be learned for its own sake. PSM organisations should value and promote participatory approaches, encouraging young people to contribute to and shape the conversation about AI. This core value also recognises that AI Literacy is a lifelong process, requiring adaptability and a willingness to keep learning as technology evolves (UNESCO, 2024b).

Next Steps

We will be running a roundtable for invited experts and representatives of the BBC in early June to give feedback on the gap analysis and emerging framework. We will share a version of this after acting on their feedback.

If you are working in the area of AI Literacy and have comments on these values, please add them to this post, or get in touch: hello@weareopen.coop

Acknowledgements

The following people have willingly given up their time to provide invaluable input to this project:

Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox​, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager

References

Digital Promise (2024). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218
European Commission (2022). DigComp 2.2: The Digital Competence Framework for Citizens. Luxembourg: Publications Office of the European Union. https://doi.org/10.2760/115376
Good Things Foundation (2024). Developing AI Literacy With People Who Have Low Or No Digital Skills. https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/ai-literacy
Jia, X., Wang, Y., Lin, L., & Yang, X. (2025). Developing a Holistic AI Literacy Framework for Children. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM. https://doi.org/10.1145/3727986
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
OECD (2022). OECD Framework for the Classification of AI Systems. Paris: OECD Publishing. https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html
Opened Culture (n.d.). Dimensions of AI Literacies. https://openedculture.org/projects/dimensions-of-ai-literacies
Open University (2025). A Framework for the Learning and Teaching of Critical AI Literacy Skills. https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2025/01/OU-Critical-AI-Literacy-framework-2025-external-sharing.pdf
UNESCO (2024a). UNESCO AI Competency Framework for Students. https://unesdoc.unesco.org/ark:/48223/pf0000391105
UNESCO (2024b). UNESCO AI Competency Framework for Teachers. https://unesdoc.unesco.org/ark:/48223/pf0000391104

Wednesday, 21. May 2025

Internet Safety Labs (Me2B)

IEEE’s ConPro ’25: Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels


ISL presented at ConPro 2025’s 9th Workshop on Technology and Consumer Protection. The conference was a perfect opportunity to showcase our presentation on Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels.

Open PDF

The post IEEE’s ConPro ’25: Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels appeared first on Internet Safety Labs.


DIF Blog

Global Digital Collaboration (GDC): Your Questions Answered


Many in the DIF community are asking about the upcoming Global Digital Collaboration conference in Geneva. As the date is quickly arriving, we wanted to give a sneak preview of what's ahead.

Table of Contents:

About the GDC Conference
The Agenda
Learn more and participate

About the GDC Conference

Key Details

When: July 1-2, 2025
Where: Centre International de Conférences Genève (CICG), Switzerland
Cost: Free (registration required)
Register: https://lu.ma/gc25 (registration is available through any co-organizing partner)

What is the GDC?

Global Digital Collaboration is a landmark gathering bringing together 30+ global organizations to advance digital identity, wallets, and credentials - hosted by the Swiss Confederation.

What makes this conference truly unique is that, from the beginning, it's been co-organized by the participating organizations, who have worked with their communities, and with each other, to form an agenda that will help advance the most critical topics in digital identity.

Rather than being driven by a single organization's vision, the GDC represents a collaborative effort where international organizations, standards bodies, open-source foundations, and industry consortia have jointly defined priorities and sessions that address the most pressing challenges in digital trust infrastructure. This multi-stakeholder approach ensures broader perspectives are represented and creates unprecedented opportunities for alignment across traditionally separate communities.

Why Attend?

Unprecedented collaboration: This conference’s collaborative nature bridges organizations that rarely coordinate at this scale.
Connect: Connect with peers from government and private sectors to advance standards coordination, cross-border interoperability, and robust digital public infrastructure.
Network with experts: Engage directly with technical leaders, government officials, and industry pioneers shaping the future of digital trust.

Who Is Organizing?

The current list of co-organizers can be seen in the header image, with more to be added later this week. As a brief preview, this includes:

International & Government Organizations

European Commission (DG-CNECT)
International Telecommunication Union (ITU)
United Nations Economic Commission for Europe (UNECE)
World Health Organization (WHO)

Standards Development Organizations & Open Source Foundations

Decentralized Identity Foundation (DIF)
Eclipse Foundation
European Telecommunications Standards Institute (ETSI)
FIDO Alliance
International Electrotechnical Commission (IEC)
International Organization for Standardization (ISO)
Linux Foundation Decentralized Trust (LFDT)
OpenWallet Foundation (OWF)
Trust Over IP (TOIP)
World Wide Web Consortium (W3C)

Industry Consortia

Cloud Signature Consortium (CSC)
Digital Credentials Consortium (DCC)
Global Legal Entity Identifier Foundation (GLEIF)

Next, we'll look at the exciting conference agenda and highlight key sessions for the DIF community.

The Agenda

The conference is structured across two distinct days, each with a specific purpose. Day 1 features plenary sessions designed to provide comprehensive overviews of global initiatives and sector-specific developments in digital identity. This agenda is nearly finalized and a draft has been published.

Day 2 offers a more interactive format with parallel presentations, technical deep dives, workshops, and collaborative sessions. The preliminary Day 2 schedule will be published next week, but we can share an early preview of the key themes and sessions that should be of particular interest to the DIF community.

Day 1: Global Landscape & Sector Scan

Morning sessions feature updates from government and industry stakeholders worldwide
Afternoon sessions explore major use cases across sectors including travel, health, education, and finance

Morning: Opening & Global Landscape

Opening addresses by leaders from ITU, ISO, WHO, and more
Regional updates from: European Commission, Switzerland, United States, China/Singapore, Japan, India, Korea, Australia, Global South

Afternoon: Sector Updates

🚘 Driving Licenses
🧳 Travel Credentials
⚕️ Health Credentials
📚 Educational Credentials
📦 Trade
💸 Payments
🏢 Organizational Credentials
🪙 Digital Assets
🪪 Standards for ID and Wallets
🔏 Digital Signatures
🔑 Car Keys

Day 2: Technical Deep Dives and Working Sessions

Day 2 features parallel sessions where participants will be encouraged to follow their interests plus share their experience and expertise.

Parallel sessions across multiple tracks including:

Privacy & Security: Zero-knowledge proofs, unlinkability
Industry and Organizational Focus: Industry 4.0, Digital Product Passports, European Business Wallet
Implementation & Deployment: Real-world wallet applications
Standards & Interoperability: Cross-border credential exchange
Policy & Regulation: Governance frameworks
Emerging Technology: Emerging needs around AI and digital identity
Demo Hour: See wallet applications and more

Learn More and Participate

Get Updates

There will soon be a GDC web site to more easily access event information and schedule. For now, we recommend:

Follow Global Digital Collaboration on LinkedIn
And of course, subscribe to the DIF blog for additional updates focused on the DIF community

Ready to Register?

You can also register through any co-organizer available at https://lu.ma/gc25

👉 DIF community members are encouraged to use DIF's dedicated registration link: https://lu.ma/gc25-dif

Tickets are free of charge and grant full access to the entire conference, regardless of the organization used during registration.

Hotels & Discounts

The upcoming GDC web site will be updated with the latest information. For now, feel free to use the discount codes in this Google document.

Looking Forward

The Global Digital Collaboration conference represents a unique opportunity for advancing digital identity solutions that can work across borders while putting users in control. DIF is committed to ensuring privacy and agency remain front and center in these conversations.

For those in the DIF community and beyond, this is an unparalleled opportunity to shape the future of digital identity in collaboration with global decision-makers and implementers.


Next Level Supply Chain Podcast with GS1

Turning Wellness Rituals Into Retail Wins With Aisha Chottani


How do you build a beverage brand from scratch and land in over a thousand stores?

For Aisha Chottani, it started with stress and a few homemade “potions”.

In this episode, Aisha, Founder and CEO of Moment, joins hosts Reid Jackson and Liz Sertl to talk through what really goes into launching and scaling a functional drink brand. From labeling boxes by hand to managing relationships with co-packers and navigating supply chain failures, Aisha shares the behind-the-scenes story most startup founders keep to themselves.

She also gets real about what went wrong, like barcode mix-ups and Amazon returns gone sideways, and how those lessons became systems that power Moment’s growth today.

In this episode, you’ll learn:

Why small brands need relationships more than volume

How early mistakes can turn into long-term wins

What to watch out for when scaling distribution and operations

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:34) Building a global mindset from four continents

(03:07) From McKinsey burnout to homemade “potions”

(06:06) Barcode errors and the pain of early logistics

(08:21) Growing Moment to 1,000 stores and 30 DCs

(11:33) What small brands can leverage on

(14:06) Collaborating with Lululemon

(17:15) Why Moment leans into a subscription model 

(20:39) Operational failures to learn from

(27:36) Aisha’s favorite technology

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Aisha Chottani on LinkedIn
Check out Moment

Tuesday, 20. May 2025

Trust over IP

EGWG 2025-05-15: The C2PA Conformance Program, Scott Perry

The Coalition for Content Provenance and Authenticity (C2PA) has created an open technical standard providing publishers, creators, and consumers the ability to trace the origin of different types of media. Over the last year, the organization has been creating a governance framework and conformance program (based on the ToIP Metamodel) and is ready to launch it this quarter.

Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/EGWG-2025-05-15_-The-C2PA-Conformance-Program-Scott-Perry.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Briefing Document: Review of C2PA and its Governance

Date: May 15, 2025
Source: Excerpts from “GMT20250515-145218_Recording_2560x1440.mp4”
Presenter: Scott Perry, Co-chair of Trust over IP’s Foundations Steering Committee, Founder and CEO of the Digital Governance Institute, Co-chair of the Creator Assertions Working Group at the Decentralized Identity Foundation (DIF).
Topic: C2PA (Coalition for Content Provenance and Authenticity) and the Application of Trust over IP’s Governance Metamodel.

1. Executive Summary

This briefing summarizes a presentation by Scott Perry on the Coalition for Content Provenance and Authenticity (C2PA) and the application of the Trust over IP (ToIP) governance metamodel to its conformance program. The C2PA is an industry-wide initiative creating a technical standard to attach “truth signals” or provenance information to digital objects. Facing a critical need to operationalize and govern this specification to ensure market trust and adoption, the C2PA has adopted the ToIP governance metamodel. This framework provides the necessary structure to establish a conformance program, define roles and responsibilities, manage risks, and create trust lists for compliant products and certification authorities. The program is set to officially launch on June 4th, initially focusing on self-assertion for conformance and introducing two levels of implementation assurance, with plans for independent attestation and higher assurance levels in the future.

2. Key Themes and Ideas

The Problem of Trust in Digital Objects: The presentation highlights the growing challenge of establishing trust and authenticity for digital content in a world of easily manipulated or AI-generated media. This is particularly relevant for industries like telecommunications struggling with identity and verification, as noted by a participant’s observation about OTPs and SMS verification.

C2PA as a Standard for Provenance and Authenticity: The C2PA specification aims to provide a technical solution by creating a “content credential” or manifest that is cryptographically bound to a digital object. This manifest acts as a ledger of actions taken on the object, providing a history and “nutrition label” of its source and modifications. “basically, it’s all of the major tech companies except Apple… coming together to create a standard for provenence, authenticity, truth signals on digital objects that can be digitally attached to digital objects.”

Content Credential (Manifest): This is the core mechanism of the C2PA. It is a digitally attached ledger of actions taken on a digital object, such as “Camera took picture,” “edited picture,” or “an AI took this picture.” This manifest is “bound to it and linked to it” in a “cryptographically binding format,” providing tamper evidence.

Scope of C2PA Responsibility: The C2PA primarily focuses on “created assertions,” which are “product-driven,” documenting actions taken within a product (e.g., a camera generating a picture, Photoshop editing an image).

Distinction from “Gathered Assertions”: The C2PA does not take responsibility for “gathered assertions,” which are claims made by individuals or organizations outside of a product (e.g., “I Scott Perry took the picture” or industry-specific identifiers). These are the purview of other groups like CAWG (Content Authenticity Working Group) and related efforts like the Creator Assertions working group at DIF.

Binding Mechanism: The C2PA uses X.509 certificates to bind the generator product to the digital asset. “when a picture is taken, the X509 certificate will be used will be binding it will be used to bind it bind the product to the asset.” This requires camera manufacturers and other product vendors to obtain certificates from approved Certification Authorities (CAs).

The Need for Governance: While the C2PA created a technical specification, they recognized the critical need for a governance framework to operationalize and control the standard’s implementation and use in the market. “the key aspect is you have a spec out but you can’t control the use of the specification… they couldn’t get, you know, their arms around, you know, the on controlling its the specification use.”

Application of ToIP Governance Metamodel: Scott Perry highlights how the ToIP governance metamodel provided the necessary structure for the C2PA to build its conformance program. “I came in with my toolkit from the the trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves.”

Key Components of the Governance Program (based on ToIP):

Risk Assessment: Started with a “threats and harms task force” to identify major risks, particularly around the tampering of evidence and manifests.
Governance Requirements and Framework: Defined primary documents (specification, security requirements, legal agreements) and control documents (privacy, inclusion, equitability requirements). A key output is a glossary of terms for the new ecosystem.
Governance Roles and Processes: Identified key roles: the Governing Authority (C2PA Steering Committee), the Administering Party (Conformance Task Force), and Governed Parties (CAs, Generator Product companies, Validator Product companies).
Legal Agreements: Formal agreements are being established between the C2PA and governed parties outlining roles, responsibilities, conformance requirements, and dispute resolution mechanisms.
Conformance Criteria and Assurance: Defined based on the C2PA specification and implementation security requirements. The program includes “four levels of of assurance around the implementation of products,” though initially rolling out with two levels. These levels are tied to “security objectives” and assessed against the “target of evaluation” (the product and its supporting infrastructure).
Conformance Process: Involves an intake form, application review, assessment of infrastructure (initially self-assertion, moving towards independent attestation), legal agreement signing, and adding records to trust lists.
Residual Risk Assessment and Adaptation: The program includes a process to learn from the rollout, identify unmet requirements or issues, and adapt the program for continuous improvement.

Trust Lists (Registries): Central to the program are trust lists identifying approved Generator Products, Validator Products, and Certification Authorities. A timestamp authority trust list is also being added.

Levels of Assurance: The program is defining levels (initially rolling out two) to reflect different degrees of confidence in the implementation of the C2PA specification and associated security requirements. Achieving a higher level of assurance requires meeting all requirements for that level.

Self-Assertion (Initial Rollout): Due to the complexity of auditing and getting the program launched quickly, the initial phase requires participants to self-assert that they meet the specification and requirements.

Conformance Certificate: Upon successful conformance, products will receive a certificate tied to an OID (Object Identifier) denoting the assurance level they have achieved. This OID in the manifest’s certificate will identify the assurance level of the provenance information.

JPEG Trust and Copyright: While C2PA provides provenance information that can be used for copyright, it doesn’t define ownership or copyright laws. JPEG Trust is mentioned as an organization creating an ISO standard focused on copyrights in concert with the C2PA standard.

Relationship with W3C: The C2PA is actively engaged with the W3C, with discussions happening at the technical working group level regarding related standards like PROV (for provenance).

Future Directions: Plans include introducing higher levels of assurance, implementing independent attested conformance, developing quality control software for assessing product compliance, and establishing a fee structure for the conformance program.

CAWG (Content Authenticity Working Group) as a Broader Ecosystem: CAWG is viewed as a potentially larger ecosystem dealing with identity, metadata, endorsements, and AI learning process specifications, which will need to create their own applications and standards that can integrate with the C2PA foundation.

3. Important Ideas and Facts

The C2PA is the Coalition for Content Provenance and Authenticity. It includes major tech and product manufacturers, excluding Apple initially but aiming to include them.
The core technical output is the Content Credential (Manifest), a digitally attached ledger of actions on a digital object.
The manifest provides tamper evidence and binds the product to the asset using X.509 certificates.
C2PA focuses on “created assertions” (product-driven actions), leaving “gathered assertions” (individual/organizational claims) to other groups like CAWG.
The Trust over IP governance metamodel has been successfully applied to structure the C2PA conformance program.
The program addresses threats and harms related to tampering and requires adherence to implementation security requirements.
The C2PA conformance program will officially launch on June 4th at the Content Authenticity Initiative symposium in New York City.
The initial launch will include two levels of implementation assurance and a self-assertion confidence model.
Key outputs of the governance program are legal agreements and trust lists of conforming products and certification authorities.
The C2PA standard is becoming an ISO standard this year.
Timestamp authorities will play a crucial role in providing trust signals related to the time of claim assertion.
The program includes mediation and dispute resolution mechanisms in its legal agreements.
The governance program provides the structure for the C2PA to “operationalize the spec” and control its use.

4. Key Quotes

“basically, it’s all of the major tech companies except Apple… Coming together to create a standard for provenence, authenticity, truth signals on digital objects that can be digitally attached to digital objects.”
“what it what it’s proposed to do is to create a ledger of actions against a digital object that is bound to it.”
“It’s kind of the nutrition label on food… it’s really the nutrition label of all digital objects.”
“The C2PA did not want to get involved in all of the the potential root, you know, actions and and variances about those types of things. They wanted to create the platform.”
“They create the platform and they create the binding between the digital asset and the and the manifest using X509 certificates.”
“The key aspect is you have a spec out but you can’t control the use of the specification… they couldn’t get, you know, their arms around, you know, the on controlling its the specification use.”
“the governance program was needed to operationalize the spec. The spec was had, you know, a limitation in its usefulness without a governance program around it.”
“I came in with my toolkit from the the trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves.”
“we’re creating a program which will hold generator and validator products accountable to the specific ification that’s already been published.”
“We are creating two levels of implement implementation assurance and we are are using a self assertion confidence model we don’t have the mechanisms in place to hold organizations accountable for meeting the specification we don’t have an you know an assurance mechanism in place yet to do that.”
“It is the hope that you know copyright laws can use the trust signals that are coming from the CTBA specification and conformance program in use for defining ownership and copyright.”
“The conformance criteria is the spec and the spec is now at at level 2.2.”
“we are looking at levels of assurance around the implementation of a product. Now it’s not just the product but it’s also its infrastructure.”
“These are the kinds of records that were that are in the schema for the trust list.”

5. Next Steps

Official launch of the C2PA conformance program on June 4th.
Continued work on independent attestation and higher levels of assurance for the conformance program.
Development of quality control software or processes for assessing product compliance.
Ongoing collaboration with W3C and other relevant standards bodies.
Further exploration of the broader CAWG ecosystem and its integration with C2PA.

This briefing provides a foundational understanding of the C2PA, its technical specification, and the crucial role of the newly established governance program, structured using the Trust over IP metamodel, in driving its adoption and ensuring trust in the digital content landscape.
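
As an illustration of how a verifier might consume the conformance trust lists described above, the sketch below looks up a manifest’s signing product and maps certificate policy OIDs to assurance levels. The schema fields and OID values are hypothetical stand-ins; the real ones are defined by the C2PA conformance program.

    # Illustrative only: trust-list schema and assurance-level OIDs are
    # hypothetical placeholders for those defined by the conformance program.
    ASSURANCE_LEVEL_BY_OID = {
        "2.999.1.1": "Assurance Level 1",  # hypothetical OID
        "2.999.1.2": "Assurance Level 2",  # hypothetical OID
    }

    def evaluate_signer(trust_list, signer_fingerprint, signer_policy_oids):
        """Find the signing product on the Generator Product trust list and
        report the assurance level its conformance certificate asserts."""
        entry = next(
            (e for e in trust_list if e["certFingerprint"] == signer_fingerprint),
            None,
        )
        if entry is None:
            return None  # not a conforming generator product
        levels = [ASSURANCE_LEVEL_BY_OID[oid] for oid in signer_policy_oids
                  if oid in ASSURANCE_LEVEL_BY_OID]
        return {"product": entry["productName"], "assurance": levels or ["unknown"]}

    # trust_list = [{"productName": "ExampleCam", "certFingerprint": "ab:cd:..."}]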

For more details, including the meeting transcript, please see our wiki: 2025-05-15 Scott Perry & The C2PA Conformance Program – Home – Confluence

https://www.linkedin.com/in/scott-perry-1b7a254/ https://digitalgovernanceinstitute.com/

The post EGWG 2025-05-15: The C2PA Conformance Program, Scott Perry appeared first on Trust Over IP.


FIDO Alliance

Business Review: NETOPIA Payments launches Click to Pay: a simpler, faster, and more secure online payment experience


NETOPIA Payments becomes the first online payment processor in the world to implement Click to Pay with Passkey FIDO (Fast Identity Online) – a modern online checkout solution built on EMV® global standards, designed to redefine the digital payment experience: faster, safer, and without manual card data entry.


White Paper: DBSC/DPOP as Complementary Technologies to FIDO Authentication

Editors

Shane Weeden, IBM
An Ho, IBM

Abstract

Session hijacking is a growing initial attack vector for online fraud and account takeover. Because FIDO authentication reduces the effectiveness of other simpler forms of compromise, such as credential stuffing and phishing, cybercriminals turn to theft and re-use of bearer tokens. Bearer tokens are a form of credential which include session cookies used by browsers connecting to websites and OAuth access tokens used by other thick client application types such as native mobile applications. When these credentials are long-lived and can be “lifted and shifted” from the machine where they were created to be usable by a bad actor from another machine, their tradable value is significant. Emerging technologies such as Device Bound Session Credentials (DBSC) for browsers and Demonstrating Proof of Possession (DPoP) for OAuth applications seek to reduce the threat of session hijacking. This article describes how these technologies address the problem of session hijacking and how they complement strong phishing resistant authentication in online ecosystems.

Audience

This white paper is for chief information security officers (CISOs) and technical staff whose responsibility it is to protect the security and life cycle of online identity and access management from online fraud. 

Download the White Paper

1. Introduction

Authentication and authorization are integral parts of an identity lifecycle, especially for online credential ecosystems. The growing threat of online identity fraud, with costly security incidents and breaches, has enterprises looking for ways to protect their workforces from account takeover through attack vectors such as phishing, credential stuffing, and session hijacking. For authentication, FIDO authentication with passkeys provides users with “safer, more secure, and faster online experiences”, and growing adoption of passkeys has reduced the success of credential phishing, credential stuffing, and session hijacking accomplished via man-in-the-middle (MITM) phishing attacks. However, what happens after the authentication ceremony?

After authentication, browsers and application clients are typically issued other credentials. Enterprise applications generally fall into two primary categories: those that are web browser based and use session cookies for state management and those that are thick client applications using OAuth access tokens (this includes some browser-based single page applications and most native mobile applications). Both types of credentials (session cookies and access tokens) are considered, in their basic use, as “bearer” tokens. If you have the token (the session cookie or the access token), then you can continue to transact for the lifetime of that token as the user who authenticated and owned it.

This whitepaper explores adjacent technologies that address the “lift and shift” attack vector for bearer tokens and how these technologies complement FIDO-based authentication mechanisms. In particular, this paper focuses on the proposed web standard Device Bound Session Credentials (DBSC) for protecting browser session cookies and OAuth 2.0 Demonstrating Proof of Possession (DPoP) for protecting OAuth grants.

2. Terminology

session hijacking: An exploitation of the web session control mechanism that is normally managed with a session cookie.

credential stuffing: An automated injection of stolen username and password pairs (credentials) into website login forms to fraudulently gain access to user accounts.

1. Passkeys – https://fidoalliance.org/passkeys/
2. Device Bound Session Credentials – https://github.com/w3c/webappsec-dbsc
3. OAuth 2.0 Demonstrating Proof of Possession (DPoP) – RFC9449: https://datatracker.ietf.org/doc/html/rfc9449
4. Session hijacking attack https://owasp.org/www-community/attacks/Session_hijacking_attack
5. Credential stuffing https://owasp.org/www-community/attacks/Credential_stuffing

access token: A credential used by a client-side application to invoke API calls on behalf of the user.

session cookie: A credential managed by browsers to maintain session state between a browser and a website.

bearer token: A token (in the context of this whitepaper may refer to either an access token or a session cookie) so called because whoever holds the token can use it to access resources. A bearer token on its own can be “lifted and shifted” for use on another computing device.

sender-constrained token: A token protected by a mechanism designed to minimize the risk that anything other than the client which established the token during an authentication process could use that token in subsequent requests for server-side resources.

Device Bound Session Credential (DBSC): A proposal for a W3C web standard defining a protocol and browser behavior to establish and maintain sender-constrained cookies. The mechanism uses proof of possession of an asymmetric cryptographic key to help mitigate session cookie hijacking.

OAuth 2.0 Demonstrating Proof of Possession (DPoP): A mechanism for implementing sender-constrained access tokens that requires clients to demonstrate possession of an asymmetric cryptographic key when using the token.

3. Adjacent/complementary technologies for a secure ecosystem

While FIDO authentication technology can effectively eliminate phishing and credential stuffing attacks that occur during the login process, the addition of solutions to mitigate threats associated with bearer token theft is equally important. Bad actors whose attacks are thwarted during the login process will go after the next weakest link in the chain and try to steal post-authentication bearer tokens. This section explores two of these technologies for protecting bearer tokens: Device Bound Session Credentials (DBSC) protect browser-based session cookies and Demonstrating Proof of Possession (DPoP) protects OAuth grants. Alternative approaches to protect bearer tokens are also discussed.

Because no single piece of technology can protect against all threats, a combination of multiple techniques is required for adequate protection.

Table 1: Combination of technologies for increased security

| Technologies | Remote phishing (authentication threat) | Credential stuffing (authentication threat) | Token theft (post-authentication threat) |
| --- | --- | --- | --- |
| Passkeys | ✓ | ✓ | |
| DBSC/DPoP | | | ✓ |
| Passkeys + DBSC/DPoP | ✓ | ✓ | ✓ |

3.1 Browser session cookie security

Before discussing Device Bound Session Credentials (DBSC), it helps to understand the problem being addressed regarding browser session cookies. Session hijacking via cookie theft allows an attacker who possesses stolen cookies to bypass end-user authentication, including any strong or multi-factor authentication (MFA). This is particularly problematic when browsers hold long-lived session cookies (which are a type of bearer token), since these cookies can be traded as alternatives to a user’s primary authentication credentials and then used from the attacker’s machine. This can lead to unauthorized access to sensitive data, financial loss, and damage to an organization’s reputation.

Attackers perform cookie theft through various methods such as man-in-the-middle phishing of a user’s existing MFA login process (when phishing-resistant authentication such as FIDO is not used), client-side malware, and occasionally through vulnerabilities in server-side infrastructure or software. Regardless of how cookie theft is perpetrated, when successful, these attacks are not only dangerous, but also hard to isolate and detect. Complementary technologies, such as Device Bound Session Credentials (DBSC), minimize the risks associated with browser cookie theft by making stolen cookies impractical to use from any machine other than the machine to which they were issued during authentication.

3.2 Device Bound Session Credentials – DBSC

DBSC refers to a proposed web standard currently in development within the Web Application Security working group of the W3C[2]. The goal of DBSC is to combat and disrupt the stolen web session cookies market. This is achieved by defining an HTTP messaging protocol and required browser and server behaviors that bind the use of application session cookies to the user’s computing device. DBSC uses an asymmetric key pair, and in browser implementations the private key should be unextractable by an attacker – for example, stored within a Trusted Platform Module (TPM), secure element, or similar hardware-based cryptographic module.

At a high level, the API in conjunction with the user’s browser and secure key storage capabilities, allows for the following:

1. The server communicates to the browser a request to establish a new DBSC session. This includes a server-provided challenge.
2. The browser generates an asymmetric key pair, then sends the public key along with the signed challenge to the server. This process is referred to as DBSC registration. Browser implementations of DBSC should use operating system APIs that facilitate secure, hardware-bound storage and use of the private key.
3. The server binds the public key to the browser session by issuing a short-lived, refreshable auth_cookie, which is then required to be transmitted in subsequent browser requests to the web server.

As the auth_cookie regularly expires, a mechanism is required for the browser to refresh the auth_cookie asynchronously to primary application web traffic. The refresh process requires signing a new server-issued challenge with the same private key created during DBSC registration, thereby re-proving (regularly) that the client browser is still in possession of the same private key.

Limiting the lifetime of the auth_cookie to short periods of time (for example, a few minutes) disrupts the market for trading long-lived session cookies. An attacker can only use stolen session cookies (including the auth_cookie) for a brief period, and cannot perform a refresh operation, since the private key required to perform a refresh operation is not extractable from the client machine.
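To make those steps concrete, here is a minimal Python sketch of the server-side registration and refresh logic described above. It is illustrative only: the actual DBSC HTTP headers and endpoints are defined by the W3C draft, while the storage layer, function names, and cookie contents here are assumptions.

```python
# Illustrative server-side logic for DBSC-style registration and refresh.
# The real wire format (headers, endpoints) is defined by the W3C draft;
# storage, function names, and the cookie format here are assumptions.
import base64
import os
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

SESSIONS: dict = {}  # session_id -> {"public_key", "challenge", "auth_cookie"}

def register(session_id: str, public_key: ec.EllipticCurvePublicKey) -> dict:
    # DBSC registration: bind the browser-generated public key (whose
    # private half stays in hardware-backed storage) to this session.
    SESSIONS[session_id] = {"public_key": public_key}
    return issue_auth_cookie(session_id)

def new_challenge(session_id: str) -> str:
    # Server-provided challenge the browser must sign to refresh.
    challenge = base64.urlsafe_b64encode(os.urandom(32)).decode()
    SESSIONS[session_id]["challenge"] = challenge
    return challenge

def refresh(session_id: str, signature: bytes) -> dict:
    # The browser re-proves possession of the same private key created
    # at registration; only then is a fresh auth_cookie issued.
    session = SESSIONS[session_id]
    try:
        session["public_key"].verify(
            signature, session["challenge"].encode(),
            ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        raise PermissionError("DBSC refresh denied: key proof failed")
    return issue_auth_cookie(session_id)

def issue_auth_cookie(session_id: str) -> dict:
    # A lifetime of a few minutes means a stolen cookie is only briefly
    # useful, and it cannot be refreshed without the device-bound key.
    cookie = {"name": "auth_cookie", "value": os.urandom(16).hex(),
              "expires": int(time.time()) + 300}
    SESSIONS[session_id]["auth_cookie"] = cookie
    return cookie
```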

DBSC may be introduced into existing deployments with minimal changes to the application. This is important as DBSC could easily be incorporated as a web plugin module in existing server-side technology (for example, Apache module, Servlet Filter, or reverse proxy functionality). This permits enterprises to roll out deployment of DBSC in phases without a complete overhaul of all current infrastructure and companies can prioritize certain critical endpoints or resources first.

DBSC server-side implementations can also be written in a manner that permits semantics, for example: “If the browser supports DBSC, use it, otherwise fallback to regular session characteristics.” This allows users to gain the security advantages of DBSC when they use a browser that supports it without having to require all users to upgrade their browsers first.
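A rough sketch of that fallback semantic follows; the session fields, cookie names, and require_dbsc policy flag are illustrative assumptions rather than anything defined by the DBSC proposal.

```python
# Illustrative per-request policy: use DBSC when the browser registered
# a key, otherwise fall back to a regular (unbound) session cookie.
import time

def enforce_session_policy(session: dict, cookies: dict,
                           require_dbsc: bool = False) -> bool:
    """Return True if the request may proceed."""
    if session.get("dbsc_public_key"):
        # This browser completed DBSC registration: insist on a valid,
        # unexpired auth_cookie on every request.
        return (cookies.get("auth_cookie") == session.get("auth_cookie_value")
                and time.time() < session.get("auth_cookie_expires", 0))
    if require_dbsc:
        # Strict policy for highly controlled or regulated environments:
        # reject sessions that are not device-bound.
        return False
    # Fallback: tolerate the regular (unbound) session cookie.
    return cookies.get("session_id") == session.get("session_id")
```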

Refer to the Device Bound Session Credentials explainer for more details on the DBSC protocol and standard, including a proposal for enterprise-specific extensions that add attestation to DBSC keypairs.

3.2.1 What makes DBSC a technology complementary to FIDO?

The DBSC draft standard permits the login process to be closely integrated with the DBSC API. While FIDO is a mechanism that makes authentication safer and phishing resistant, DBSC is a mechanism that makes the bearer credential (session cookie) safer post-authentication. They complement each other by reducing the risk of account takeover and abuse, making the entire lifecycle of application sessions safer.

3.2.2 Alternative solutions

DBSC is not the first standard to propose binding session cookies to a client device. Token Binding is an alternative defined across IETF RFCs 8471, 8472, and 8473. Token Binding over HTTP is implemented via a Transport Layer Security (TLS) extension and uses cryptographic certificates to bind tokens to a TLS session. Token Binding has seen limited browser adoption and is complex to implement, as it requires changes at the application layer and in TLS security stacks. The Token Binding over HTTP standard has not been widely adopted, and only one major browser currently offers support.

3.2.3 Advice

The DBSC standard relies on local device security and operating system APIs for storage and use of the private key that is bound to the browser’s session. While these private keys cannot be exported to another device, the key is available on the local system and may be exercisable by malware residing on the user’s device. Similarly, in-browser malware still has complete visibility into both regular session cookies and short-lived auth_cookies. DBSC is not a replacement for client-side malware protection, and the threat model for DBSC does not provide protections from persistent client-side malware. Ultimately, the user must trust the browser.

As browsers start to support DBSC over time, it will be important for servers to be able to work with a mix of browsers that do and do not include support for this technology. Some enterprises may dictate that corporate issued machines include browsers known to support DBSC, but many will not. It will be necessary for server-side implementations to take this into consideration, using DBSC when the browser responds to registration requests, and tolerating unbound session cookies when the browser does not. When building or choosing a commercial solution, ensure you consider this scenario, and include the ability to implement access control policies that strictly require DBSC in highly controlled or regulated environments or for specific applications.

At the time of writing, DBSC is in early evolution. It remains to be seen whether or not it will be widely adopted by browser vendors. The hope is that incubating and developing this standard via the W3C will result in wider adoption than previous proposals, similar to the way that the WebAuthn API has been adopted to bring passkey authentication to all major browser implementations.

4. OAuth grants

The previous section introduced DBSC as a means to protect against session cookie theft in web browsers. Thick application clients, including mobile applications and single-page web applications, typically use stateless API calls leveraging OAuth grants instead of session cookies. An OAuth grant may be established in several ways, with the recommended pattern for thick clients being to initially use the system browser to authenticate a user and grant access for an application to act on their behalf. Conceptually this is remarkably similar to browser-based sessions, including the ability, and recommendation, to use FIDO authentication for end-user authentication when possible. At the conclusion of the browser-based authentication portion of this flow, control is returned to the thick client application or single-page web application where tokens are established for use in programmatic API calls.

The challenge that occurs from this point forward is almost identical to that described for browsers – the OAuth tokens are bearer tokens that if exposed to a bad actor can be used to call application APIs from a remote machine instead of from the legitimate application.

This section describes the use of DPoP, a technology for protecting against the “lift and shift” of credentials used in OAuth-protected API calls, which, just like DBSC, makes use of an asymmetric key pair and ongoing proof of possession of the private key.

4.1 Demonstrating Proof of Possession (DPoP)

OAuth 2.0 Demonstrating Proof of Possession (DPoP) is an extension of the existing OAuth 2.0 standard for implementing device-bound (or sender-constrained) OAuth access and refresh tokens. It is an application-level mechanism that allows the tokens associated with an OAuth grant (that is, refresh tokens and access tokens) to be bound to the requesting client using a public and private key pair. The client must prove possession of its private key to the authorization server when performing access token refresh operations and to resource servers when using access tokens to call APIs.

6. OAuth 2.0 for Native Apps https://datatracker.ietf.org/doc/html/rfc8252

High assurance OpenID specifications, such as Financial-grade API (FAPI 2.0), mandate the use of sender-constrained tokens and DPoP is the recommended method for implementing this requirement when Mutual TLS (mTLS) is not available.

At a high level, DPoP requires that:

1. The client generates a per-grant public/private key pair to be used for constructing DPoP proofs. Best practice implementations should use operating system APIs to ensure the private key is non-extractable.
2. On initial grant establishment (for example, exchanging an OAuth authorization code for the grant’s first access token and refresh token), a DPoP proof (a JWT signed by the client’s private key that contains, among other things, a copy of the public key) is used to bind a public key to the grant.
3. Requests to a resource server using an access token obtained in this manner must also include a DPoP proof header, continuously proving possession of the private key used during grant establishment. This is done for every API request.
4. Resource servers are required to check if an access token is sender-constrained, confirm the public key, and validate the DPoP proof header on each API call.
5. For public clients, subsequent refresh_token flows to the authorization server’s token endpoint must also contain a DPoP proof signed with the same key used during initial grant establishment. This is particularly important as refresh tokens are often long-lived and are also a type of bearer token (that is, if you have it you can use it). The authorization server must enforce the use of a DPoP proof for these refresh token flows and ensure signature validation occurs via the same public key registered during initial grant establishment.
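As an illustration of what a DPoP proof looks like in practice, the following Python sketch builds one. The header and claim names (typ, jwk, jti, htm, htu, iat, ath) come from RFC 9449; the PyJWT-based packaging and key handling are assumptions, and a production client would keep the private key in non-extractable platform storage.

```python
# A sketch of client-side DPoP proof construction per RFC 9449.
import base64
import hashlib
import time
import uuid

import jwt  # pip install pyjwt[crypto]
from cryptography.hazmat.primitives.asymmetric import ec

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Per-grant key pair, generated once when the grant is established.
private_key = ec.generate_private_key(ec.SECP256R1())
pub = private_key.public_key().public_numbers()
public_jwk = {  # public key, embedded in every proof's header
    "kty": "EC", "crv": "P-256",
    "x": b64url(pub.x.to_bytes(32, "big")),
    "y": b64url(pub.y.to_bytes(32, "big")),
}

def dpop_proof(method: str, url: str, access_token: str | None = None) -> str:
    claims = {
        "jti": str(uuid.uuid4()),  # unique id so servers can reject replays
        "htm": method,             # HTTP method of the request being made
        "htu": url,                # target URI, without query or fragment
        "iat": int(time.time()),
    }
    if access_token is not None:   # resource requests also bind the token
        claims["ath"] = b64url(hashlib.sha256(access_token.encode()).digest())
    return jwt.encode(claims, private_key, algorithm="ES256",
                      headers={"typ": "dpop+jwt", "jwk": public_jwk})

# Sent in the "DPoP" request header, alongside
# "Authorization: DPoP <access_token>" on resource requests.
proof = dpop_proof("GET", "https://api.example.com/resource", "example_token")
```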

Unlike a plain bearer access token which can be used by any holder, DPoP based access tokens are bound to the client that initially established the OAuth grant, since only that client can sign DPoP proofs with the private key. This approach minimizes the risks associated with malicious actors trading leaked access tokens.
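On the resource-server side, validation might look roughly like the sketch below. It assumes the access token’s cnf.jkt confirmation claim (the thumbprint of the client’s public key) has already been extracted, for example via token introspection; jti replay tracking and production-grade error handling are omitted.

```python
# A rough sketch of resource-server DPoP proof validation (RFC 9449).
import base64
import hashlib
import json
import time

import jwt  # pip install pyjwt[crypto]

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the required EC members in lexical order.
    required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

def validate_dpop(proof: str, method: str, url: str,
                  access_token: str, expected_jkt: str) -> dict:
    header = jwt.get_unverified_header(proof)
    if header.get("typ") != "dpop+jwt":
        raise ValueError("not a DPoP proof")
    # Verify the proof's signature with the public key it carries.
    key = jwt.PyJWK.from_dict(header["jwk"]).key
    claims = jwt.decode(proof, key=key, algorithms=["ES256"])
    if claims["htm"] != method or claims["htu"] != url:
        raise ValueError("proof bound to a different request")
    if abs(time.time() - claims["iat"]) > 60:  # small freshness window
        raise ValueError("stale proof")
    expected_ath = b64url(hashlib.sha256(access_token.encode()).digest())
    if claims.get("ath") != expected_ath:
        raise ValueError("proof not bound to this access token")
    # Sender constraint: key in the proof must match the token's cnf.jkt.
    if jwk_thumbprint(header["jwk"]) != expected_jkt:
        raise ValueError("key mismatch: token bound to a different client")
    # claims["jti"] should also be tracked server-side to reject replays.
    return claims
```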

Refer to DPoP RFC 9449 – OAuth 2.0 Demonstrating Proof of Possession (DPoP) for more information.

4.2 What makes DPoP a complementary technology to FIDO?

FIDO can be leveraged for phishing resistant end-user authentication during establishment of an OAuth grant. Refresh and access tokens obtained by a client following this authentication should be safeguarded against “lift and shift” attacks just like session cookies in browser-based apps. DPoP is a recommended solution for protecting these OAuth tokens from unauthorized post-authentication use. Together, FIDO for end user authentication and DPoP for binding OAuth tokens to a client device complement each other to improve the overall security posture for identities used in thick client applications.

4.2.1 DPoP alternative solutions

RFC8705 – OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens describes a mechanism that offers a transport-layer solution for binding access tokens to a client certificate. While it has been approved for use in FAPI 2.0 for open banking solutions, it is not particularly suitable for public clients such as native mobile applications.

RFC9421 – HTTP Message Signatures defines an application-level mechanism for signing portions of an HTTP message. Key establishment and sharing between the client and verifier are not defined by this specification, although this could be performed in a trust-on-first-use manner during initial grant establishment, similar to DPoP. There is no known public specification that maps the use of HTTP message signatures to the use case of sender-constrained bearer tokens in an OAuth client application. In the absence of such a public specification, widespread adoption for this use case is unlikely.

4.2.2 Advice

Sender-constrained tokens are a good idea, and, in some deployments, they are a regulatory requirement. For example, use of the FAPI profiles of OAuth is now mandated by many sovereign open banking initiatives. DPoP is a relatively simple way to achieve this requirement and is flexible enough to cover a wide range of application client types. That said, care must still be taken to adhere to the security considerations of DPoP. Pay close attention to section 11 of RFC9449, and apply other application security strategies for native or browser-based single-page applications as your scenario dictates. Remember that DPoP is focused solely on addressing the threats associated with token exfiltration, which include trading and use by malicious actors. It should be considered part of a defense-in-depth strategy for OAuth applications.

5. Conclusion

The intent of this paper is to inspire thinking around how different web security standards fit together and how those standards relate to the use of FIDO authentication for users. There are so many standards and standards bodies that it is often hard to understand which compete in the same space and which augment one another to form part of a comprehensive defense-in-depth strategy for identity fraud protection in online applications.

This paper tackled a specific, prevalent application security problem – the malicious trading and use of stolen cookies and access tokens. This paper also showed how technologies such as DBSC and DPoP mitigate the threats associated with token theft and how these technologies are complementary to FIDO authentication. Paired with FIDO, DBSC and DPoP provide greater overall identity fraud protection for your applications.


We Are Open co-op

Building Power and Making Change

Community strategies and tools

cross-posted on the Amnesty UK blog

Community-driven change is more important than ever. Whether we are advocating for social justice, environmental sustainability, or political reform, collective action is how we create lasting impact. But how do we build a movement that starts with individual curiosity and grows into sustained activism? That’s where the Amnesty International UK (AIUK) community platform project comes in — a digital hub designed to empower individuals, support collaboration, and drive meaningful change.

This blog post outlines how the platform and community strategy work together to guide people from discovery of the AIUK community to becoming activists within it.

1. Discovery

The journey of community-driven change starts with discovery. This is the stage where individuals first come into contact with AIUK. Maybe they learn about an issue, identify it as important, and begin to consider how they might want to get involved. Or maybe they meet someone at a demonstration, and discover the community first-hand.

AIUK’s social media, through broadcasting, is just one tool that helps people discover Amnesty International UK, and it makes complex issues accessible and relatable. We want to do the same as AIUK highlights grassroots efforts and community initiatives.

We want to encourage posts that show:

Our dedicated community and highlight key grassroots initiatives and campaigns.
Signposts to find local groups or events based on interests.
Digital actions, such as petitions or downloading campaign guides, to help users take their first steps.

Such content ensures that even people who are new can find relevant AIUK communities and take the first steps toward engagement.

2. Intention to Engage

Once someone discovers a cause they care about, the next step is forming an intention to engage. This stage is all about commitment — moving from passive interest to active participation.

By showcasing community on the AIUK website, we both invite people in and celebrate what the community is achieving. We want to present clear pathways for involvement and help community members inspire others to take steps towards action.

We need to figure out processes that help:

Goal-setting: Encouraging community members to set personal milestones, like committing to attend 100 meetings.
Sharing success: Telling success stories and finding testimonials that effectively attract new people while celebrating community achievements.
Balancing information: Showcasing static information about past successes with dynamic, real-time updates on current campaigns from the community.

By making it easy for people to express their intent and take small but meaningful steps, we build confidence and lay the groundwork for deeper engagement.

3. Taking Action: Turning Intent into Impact

With intention comes action, and this is where real change begins. At this stage, people start to feel a sense of belonging and are ready to contribute to a cause they care about.

A knowledge base can help equip users with actionable tools. We’ll need clear resources and learning pathways that:

Guide people to the right information: Whether it’s organising a protest, writing letters to policymakers, or starting a local campaign, the knowledge hub can provide step-by-step guidance tailored to issues we work on.
Help people collaborate: People should be able to connect with others who share their interests and work together on projects — whether virtually or in person. Best practices and community policies may also be at home in the knowledge hub.
Show them into the community: Make sure that people feel supported and seen as they take action. Create an architecture of participation that brings them into the community platform.

This stage is about turning isolated actions into collective power, with the support of the community ensuring that every contribution counts.

4. Sustaining Action: Building Lasting Commitment

Sustained action is the key to creating lasting change. Too often, movements fizzle out after an initial burst of energy, but with a strong community strategy and integrated platform, we can keep momentum going.

To sustain engagement, the community platform needs to help people align with others in the AIUK movement. We need to think about:

Feedback loops: Regular check-ins with the community to understand their needs and ensure that we are adapting the community strategy and platform accordingly.
A recognition ecosystem: Using digital badges and shoutouts for individuals or groups who demonstrate consistent commitment to help us make activism more visible.
Storytelling opportunities: Sharing success stories and lessons learned will inspire others and keep motivation high.

By encouraging a sense of belonging and purpose, we ensure that members find reasons to continue building collective power for human rights.

5. Becoming an Activist: Empowering Future Leaders

The final stage is becoming an activist. At this point, individuals understand that community isn’t one person, but rather all of us. They begin to work on behalf of others, coordinate together and lift people up with their leadership.

These leaders will use other coordination tools and processes and that’s great! We want to empower the development of activist and leadership skills. We’ll need:

Decentralised coordination best practices: For members who are ready to take on larger roles, such as leading groups or campaigns.
Mentorship programs: Connecting experienced activists with newcomers to share knowledge and build networks.
Advocacy training: Workshops, webinars, and resources focused on effective communication, policy advocacy, and community organising.

Through these efforts, we can go beyond nurturing individual leaders to continue building a movement.

The Power of Community Work in Driving Change

The journey from discovery to becoming an activist is a process of gradual engagement and empowerment. There is a system of platforms, processes and content that help AIUK move people towards becoming an activist. Although we use various digital tools, the journey is an emotional and social one.

We are working hard to make sure the community platform project harnesses the collective strength of our community and makes a difference that lasts.


Oasis Open

OASIS Common Security Advisory Framework v2.0 Approved as an ISO/IEC International Standard


Designation as ISO/IEC 20153 Solidifies CSAF's Global Role in Standardized Vulnerability Reporting

Boston, MA USA; 20 May 2025 — The Common Security Advisory Framework (CSAF) Version 2.0, an open standard developed by OASIS Open, has officially been approved for release by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Now designated as ‘ISO/IEC 20153,’ the framework was successfully balloted through the Joint Technical Committee on Information Technology (JTC 1), marking a significant step forward in global security advisory standardization.

“Congratulations to OASIS on the publication of ISO/IEC 20153,” said Philip Wennblom, Chair of ISO/IEC JTC 1. “OASIS has been a valued JTC 1 partner since 2004, and this milestone highlights the strength of our collaboration in addressing critical challenges, including in cybersecurity, and advancing standards that benefit consumers, industries, and governments worldwide.”

Developed by the OASIS CSAF Technical Committee (TC) through a collaborative, cross-industry effort, CSAF v2.0 provides a modern, machine-readable framework for enhancing vulnerability reporting and response. With support for the Vulnerability Exploitability Exchange (VEX) profile, CSAF v2.0 seamlessly integrates with Software Bill of Materials (SBOM) data, allowing organizations to efficiently assess vulnerabilities and take informed actions.

“Achieving ISO/IEC recognition for CSAF 2.0 is a tremendous milestone for the global cybersecurity community,” said Omar Santos, co-chair of the OASIS CSAF TC. “This international standardization will drive broader, more consistent adoption of machine-readable vulnerability disclosures and response processes—ultimately helping organizations around the world protect their assets more effectively and streamline their cybersecurity practices. Whether vulnerabilities emerge in legacy environments or in cutting-edge AI solutions, CSAF 2.0 provides a modern framework for effective vulnerability reporting in hardware and software. We look forward to continuing our work within the OASIS CSAF TC to ensure the standard remains at the forefront of global cybersecurity efforts.”

“CISA is pleased that CSAF 2.0 is now recognized as an ISO/IEC standard, a significant achievement in strengthening global cybersecurity resilience. We value our work and ongoing partnership with OASIS,” said Justin Murphy, CISA Vulnerability Disclosure Analyst and co-chair of the OASIS CSAF TC. “CSAF 2.0 enables organizations to respond more effectively to evolving cyber threats across complex environments including critical infrastructure. We encourage and look forward to broader, global adoption of machine-readable standards for vulnerability management efforts.”

CSAF v2.0 was ratified as an OASIS Open Standard in November 2022 and subsequently submitted by OASIS to the ISO/IEC JTC 1 Information Technology body. As ISO/IEC 20153, this International Standard will continue to be maintained and advanced by the OASIS CSAF Technical Committee, which includes representatives of Cisco, Cryptsoft, Cyware, Huawei, Microsoft, Oracle, Red Hat, and others. New members are welcome, and participation in the CSAF TC is open to all through membership in OASIS.

About ISO/IEC JTC 1
The ISO/IEC Joint Technical Committee (JTC 1) is a consensus-based, voluntary international standards group focusing on information technology (IT). Many hundreds of experts from over 100 countries represent their nation’s national standards body or national standards committee to mutually develop beneficial guidelines that enhance global trade while protecting intellectual property. As one of the largest and most prolific technical committees in the international standardization community, ISO/IEC JTC 1 has had direct responsibility for the development of more than 3,500 published ISO standards, with nearly 600 currently under development. Its work in standardization also encompasses 24 subcommittees that make a tremendous impact on the global ICT industry.

About OASIS Open
One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement.
www.oasis-open.org

Media Inquiries: communications@oasis-open.org

Support for CSAF  

Cyware:
“Cyware is committed to advancing security automation through the adoption of open, machine-readable standards like CSAF. Integrating CSAF into our threat intelligence and security orchestration platforms enables real-time ingestion, normalization, and automated dissemination of vulnerability advisories, enhancing our customers’ ability to rapidly correlate threat data and initiate timely response actions.”
– Avkash Kathiriya, Sr. VP, Research and Innovation, Cyware Labs

Microsoft:
“The designation of the Common Security Advisory Framework (CSAF) as an ISO/IEC 20153 standard marks a significant milestone for the global vulnerability management ecosystem. At Microsoft, we are proud to support this advancement by publishing advisories conforming to the CSAF specification and expanding its adoption across our security practices. CSAF enhances automation, improves interoperability, and accelerates vulnerability response—empowering organizations worldwide to better protect their ecosystems.”
– Bret Arsenault, Corporate Vice President and Chief Cybersecurity Advisor, Microsoft

Disclaimer: CISA does not endorse any commercial entity, product, company, or service, including any entities, products, or services linked or referenced within this press release. Any reference to specific commercial entities, products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply endorsement, recommendation, or favoring by CISA.

The post OASIS Common Security Advisory Framework v2.0 Approved as an ISO/IEC International Standard appeared first on OASIS Open.

Friday, 16. May 2025

Project VRM

Four Roads to The Intention Economy


Thirteen years after The Intention Economy was published by Harvard Business Review Press, there are now four clear paths toward making it come true.

1. IEEE P7012, aka MyTerms. This will make individuals first parties in their agreements with companies, completely flipping the status quo that has been with us since industry won the Industrial Revolution and manifests today in those insincere and annoying cookie notices that interrupt your experience every time you visit a new website or open a new app. MyTerms makes each of us first parties in agreements with sites and services, and in full charge of personal privacy online.
2. The First Person Project, or FPP (website pending). With help on the buy side from Customer Commons and on the sell side by Ayra, we can finally replace “show your ID” with verifiable credentials presented on an as-needed basis by independent and self-sovereign individuals operating inside their own webs of trust.
3. Visa Intelligent Commerce, which will make intentcasting happen in a big way. It will also elevate the roles of Inrupt and the open-source Solid Project.
4. Personal AI. This is AI that is as much yours as your shoes, your bike, and your PC. Personal, not personalized.

To explain how these will work together, start here:

Not long after The Intention Economy came out in May 2012, Robert Thomson, Managing Editor of The Wall Street Journal, wanted the book’s opening chapter to serve as the cover essay for the Marketplace section of an upcoming issue. Harvard Business Review Press didn’t like that idea, so I wrote an original piece based on one idea in the book: that shoppers will soon be able to tell the market what they’re looking for, in safe, secure and anonymous ways—a kind of advertising in reverse that the book called “personal RFPs” and has since come to be called “intentcasting.” This became The Customer as a God. The image above was the whole cover of the Marketplace section on Monday, July 23, 2012. The essay opened with these prophetic words: “It’s a Saturday morning in 2022…”

It is now a Friday morning in 2025, and that godly future for customers is still not here. Yes, we have more market power than in 2012, but we are digital serfs whose powers are limited to those granted by Amazon, Apple, Facebook, Google, Microsoft, and other feudal overlords. This system is a free market only to the degree that you can choose your captor. This has led to—

The IONBA (Internet Of Nothing But Accounts) is based on a premise: that the best customers are captive ones. In this relic of the industrial age, customers are captive to every entity that requires logins and passwords. Customers also have no ways of their own to globally control what data is collected about them, or how. Or to limit how that data is used. This is why our digital lives are infected by privacy-killing data-collection viruses living inside our computers, phones, TVs, and cars.

If you didn’t know about those last two, dig:

Consumer Reports says “All smart TVs—from Samsung, LG, you name it—collect personal data.” They also come with lame “privacy” controls, typically buried deep in a settings menu. (Good luck exhuming them. The ones in our TCL and Samsung TVs have all but disappeared.)
Mozilla calls new cars “the Worst Product Category We Have Ever Reviewed for Privacy.” There is also nothing you can do to stop your car from reporting on everything your car does—and everything you do, including sexual activity—to the carmaker, insurance companies, law enforcement, and who knows who else. This data goes out through your car’s cell phone, misleadingly called a telematics control unit. The antenna is hidden in the shark fin on your car’s roof or in an outside mirror.

Businesses are also starting to lose faith in surveillance, for at least eight reasons:

People hate it.
They also fight it. By 2015, ad blocking and tracking protection were the biggest boycott in world history.
It tarnishes brands.
Ad fraud is a gigantic problem, and built into the system.
It commits Chrysoogocide (killing golden geese, most notably publishers). Bonus link.
Regulatory pressure against it is getting bigger all the time.
Advertisers are finally remembering that brands are made by ads aimed at populations, while personalized ads are just digital junk mail.
Customers are using AI tools for guidance toward a final purchase, bypassing marketing schemes to bias purchasing decisions along the way. For more on that, see Tom Fishburne’s cartoon, and Bain’s report about it.

So our four roads to The Intention Economy start with the final failings of the systems built to prevent it. Now let’s look at those roads.

1—IEEE P7012 “MyTerms”

MyTerms, the most important standard in development today, will be a keystone service of Customer Commons, the nonprofit spinoff of ProjectVRM. It will do for contract what Creative Commons did for copyright: give individuals a new form of control. With MyTerms, agreements between customers and companies will be far more genuinely mutual, and open to new forms of innovation not based on the kind of corporate control that typifies the IONBA. For example, it can open Visa Intelligent Commerce to conversations and relationships that go far past transaction. Take, for example, market intelligence that flows both ways. While this has been thinkable for a decade or more (that last link is from 2016), it’s far more do-able when customers and companies have real relationships based on equal power and mutual interests. These are best framed up in agreements that start on the customer’s side and give customers scale across all the companies with which they have genuine relationships.

2—First Person Project (FPP)

To me, FPP begins with the vision “Big Davy” Sallis came up with while he was working for VISA Europe in 2012 and read The Intention Economy. At the time, he wanted Visa to make VRM a real category, but assumed that would take too long. So he decided to create a VRM startup called Qredo. Joyce and I consulted Qredo until Davy died (far too young) in 2015. Qredo went into a different business, but a draft I created for Qredo’s original website survives, and it outlines much of what the FPP will make possible. That effort is led by Drummond Reed, another friend and collaborator of Davy’s and a participant in ProjectVRM from the start. Drummond says the FPP is inspired by Why We Need First Person Technologies on the Net, a post published here in 2014. That post begins,

We need first person technologies for the same reason we need first person voices: because there are some things only a person can say and do.

Only a person can use the pronouns “I,” “me,” “my” and “mine.” Likewise, only a person can use tools such as screwdrivers, eyeglasses and pencils. Those things are all first person technologies. They were invented for individual persons to use.

We use first person technologies the same unique ways we use our voices.

Among other things, the First Person Project will fix how identity works on the Internet. With FPI—First Person Identity—interactions with relying parties (the ones wanting “your ID”) don’t need your driver’s license, passport, birth certificate, credit card, or account information. You just give them what’s required, on an as-needed basis, in the form of verifiable credentials. The credentials you provide can verify that you are a citizen of a country, licensed to drive, have a ticket to a game, or whatever. In other words, they do what Kim Cameron outlined in his Laws of Identity: disclose minimum information for constrained uses (Law 2) to justifiable parties (Law 3) under your control and consent (Law 1). The identifier you present them with is a DID: a Decentralized Identifier. No account is required.

Trust in FPI also expands from individual to community. Here is how Phil Windley explains it in Establishing First Person Digital Trust:

When Alice and Bob met at IIW, they didn’t rely on a platform to create their connection. They didn’t upload keys to a server or wait for some central authority to vouch for them. They exchanged DIDs, authenticated each other directly, and established a secure, private communication channel.

That moment wasn’t just a technical handshake—it was a statement of first-person identity. Alice told Bob, “This is who I am, on my terms.” Bob responded in kind. And when they each issued a verifiable relationship credential, they gave that relationship form: a mutual, portable, cryptographically signed artifact of trust. This is the essence of first-person identity—not something granted by an institution, but something expressed and constructed in the context of relationships. It’s identity as narrative, not authority; as connection, not classification.

And because these credentials are issued peer-to-peer, scoped to real interactions, and managed by personal agents, they resist commodification and exploitation. They are not profile pages or social graphs owned by a company to be monetized. They are artifacts of human connection, held and controlled by the people who made them. In this world, Alice and Bob aren’t just users—they’re participants.

This also expands outward into community, and webs of trust. You get personal agency plus community agency.

The FPP covers a lot more ground than identity alone, but that’s where it starts. Also, Customer Commons is a funding source for the FPP, and I’m involved there as well.

3—Visa Intelligent Commerce

The press release is Find and Buy with AI: Visa Unveils New Era of Commerce. Less blah is Enabling AI agents to buy securely and seamlessly. Here’s the opening copy.

Imagine a future where an AI agent can shop and buy for you. AI commerce — commerce powered by an AI agent — is going to transform the way consumers around the world shop.

Introducing Visa Intelligent Commerce, an initiative that will empower AI agents to deliver personalized and secure shopping experiences for consumers – at scale.

From browsing and selection to purchase and post-purchase management, this program will equip AI agents to seamlessly manage key phases of the shopping process.

Visa CEO Ryan McInerney says a lot more in a 1:22 talk at Visa Product Drop 2025. The most relevant part starts about 26 minutes in, with a demo starting at about 31:30. Please watch it. Much of what you see there owes to Inrupt and Solid, which Sir Tim Berners-Lee says were inspired by The Intention Economy. For more about where Inrupt and Solid fit in Visa Intelligent Commerce, see Standards for Agentic Commerce: Visa’s Bold Move and What It Means: Visa’s investment in safe Intelligent Commerce points to a future of standards-forward personal AI, by John Bruce, Inrupt’s CEO. John briefed Joyce and me over Zoom the other day. Very encouraging, with lots to develop on and talk about.

More links:

A tweet appreciative of Inrupt by Visa’s @JackForestell
Privacy for Agentic AI, by Bruce Schneier, Inrupt’s CISO (as well as the world’s leading security expert, and an old pal through Harvard’s Berkman Klein Center). Also from Bruce: What Magic Johnson and Bruce Schneier taught us at RSAC 2025 and RSAC 2025: The Pioneers of the Web Want to Give You Back Control of Your Data
Visa announces AI Agent Payment APIs – and a pathway to Empowerment Tech, by Jamie Smith, who writes Customer Futures, the most VRooMy newsletter out there.

Some news being made about Visa Intelligent Commerce:

Visa partners with AI giants to streamline online shopping
Visa Gives AI Shopping Agents ‘Intelligent Commerce’ Superpowers
Visa launches ‘Intelligent Commerce’ platform, letting AI agents swipe your card—safely, it says
How major payment companies could soon let AI spend your money for you
Visa, Mastercard offer support for AI agents
Visa wants to give artificial intelligence ‘agents’ your credit card
Visa adds ‘unknown concept’ where AI makes purchases for you – but shoppers suspect more ‘sinister purpose’
Visa Unveils Intelligent Commerce to Power AI-Driven Payments

4—Personal AI

Reza Rassool was also inspired by The Intention Economy when he started Kwaai.ai, a nonprofit community developing open-source personal AI. I now serve Kwaai as its volunteer Chief Intention Officer.

Let’s look at what personal AI will do for this woman:

It looks great, but because we’re stuck in the IONBA, she has little control over her personal data in all those spaces. For example,

She doesn’t have the digital version of what George Carlin called “a place for my stuff.” (Watch that video. It’s brilliant—and correct.)
She has few records of where she’s been, who she’s been with and when—even though apps on her phone know that stuff and are keeping it inside the records of her giant overlords and/or selling it to parties unknown, with no way yet for getting it back for her own use.
Her finances are possibly organized, but scattered between the folders she keeps for taxes, plus the ones that live with banks, brokers, and other entities she hardly thinks about. It would be mighty handy to have a place of her own where she could easily see all her obligations, recurring payments, subscriptions, and other stuff her counterparties would rather she not know completely.
Her schedules are in Apple, Google, and/or Microsoft calendars, which are well app’d and searchable, but not integrated. She has no digital calendar that is independent and truly her own.
Her business and personal relationship records are scattered across her contact apps, her LinkedIn page, and piles of notes and business cards. She has no place or way of her own to manage all of them.
Her health care records (at least here in the U.S.) are a total mess. Some of them are inside the MyCharts and patient portals provided by separate (and mostly unconnected) health care specialists and medical systems. Some of it is in piles of printouts she has accumulated (if she’s kept them) from all the different providers she has seen. Some of it is in fitness and wellness apps, all with exclusive ways of dealing with users. None of it is in a unified and coherent form.

So the challenge for personal AI is pulling all that data out of all her accounts, and putting it into forms that give her full agency, with the help of her personal AIs. Personalized AIs from giants can’t do that. We need our own personal AIs.

And there we have it: Four roads to a world where free customers prove more valuable than captive ones. And we’re making it happen. Now.


FIDO Alliance

Techradar Pro: Millions of Brits to be impacted by UK Gov decision to move away from passwords, 2FA and the replacement is far from perfect


The UK government has said it will roll out passkey technology across its digital services later in 2025, aiming to phase out SMS-based verification in favour of a more secure, user-friendly alternative.

Passkeys are unique digital credentials tied to a user’s personal device and offer a way to authenticate identity without the need for traditional passwords or one-time text codes.

Passkeys never leave the device and so cannot be reused across websites, which makes them resistant to phishing and other common attacks.


DIF Blog

DIF at EIC 2025: Advancing Digital Identity at the Crossroads of AI and Self-Sovereignty


DIF members showcased their vision at this year's European Identity and Cloud Conference (EIC 2025), bringing together experts who are defining the future of human-centric digital identity. As AI capabilities accelerate, DIF members are tackling both the architectural foundations and philosophical implications of self-sovereign identity, and EIC provided an excellent forum to share how they are solving digital identity's most complex challenges.

The Philosophy Behind Standards: Values in Digital Identity

Markus Sabadello, CEO of Danube Tech and DIF Steering Committee member, delivered a compelling talk examining the philosophical underpinnings of digital identity standards. His presentation, "The Worldviews behind Digital Identity Standards," argued that technical choices in standards like OID4VC, DIDComm, SD-JWT-VC, and the W3C verifiable credential data model reflect deeper philosophical trajectories variously aligned with European values like Liberty, Equality, and Fraternity.

Markus Sabadello presents "The Worldviews behind Digital Identity Standards"

Sabadello illustrated how technologies like DIDComm prioritize fraternity through peer-to-peer connections, while JSON-LD enables innovation and liberty through permissionless semantic flexibility and self-publishing. As the industry standardizes wallets and verifiable credentials, he emphasized that these standards should be evaluated not only on technical merits but also on how they impact human values like sovereignty and equitable participation.

The talk concluded with an important reminder that technology is never value-neutral, highlighting the need to align digital identity standards with humanistic values while avoiding the pitfalls of fragmentation from competing, politically and commercially driven standards.

AI and Identity: A New Frontier

Another highlight of the conference was the Verifiable AI talk and panel series. In his talk "Private Personal AI and Verified Identity for AI Agents", Alastair Johnson (CEO of Nuggets) explored the challenges of implementing truly private personal AI that protects user sovereignty while creating verifiable identities for AI agents. Johnson explored how privacy-preserving technologies and self-sovereign identity frameworks can enable secure AI agent operations while maintaining individual control over personal data.

Alastair Johnson presents "Private Personal AI and Verified Identity for AI Agents"

The subsequent panel, "Verifiable AI: The New Frontier" was moderated by Ankur Banerjee, CTO of Cheqd and DIF TSC Co-chair. The panel brought together Matthew Berzinski (Ping Identity), Sebastian Rodriguez (Advisor to Privado.ID), and Alastair Johnson to explore the intersection of AI and digital identity.

The panel addressed critical questions about how private personal AI agents can securely interact with identity systems, approaches to verifying AI agent identities, and frameworks for establishing trust in AI-human interactions.

As Ankur described in his following LinkedIn post, key takeaways included:

The need for both decentralized and centralized/hybrid approaches for different scenarios, including “AI employees” like the Devin software engineering assistant
The challenge of allowing “good bots” into systems designed to keep malicious automation out
The emerging consensus that AI agents will need their own wallets (or at least high-stakes delegation capabilities to and from wallets, or operate inside of wallets), and what kind of unique identifiers can power these interfaces
The vulnerability of AI agents to bribing, threats, and “social” engineering attacks despite (or due to) their primarily rule-based constraints
The agentic “Ship of Theseus” problem: at what point is an AI agent sufficiently changed that it invalidates prior attestations?

The Personhood Challenge: Humans in a World of AI

Another significant focus at the conference was the development of personhood credentials as a defense against AI-generated deepfakes. Drummond Reed, Director of Trust Services at Gen Digital, presented "First-Person Credentials: A Case Study," discussing a collaborative effort between the Ayra Association, Customer Commons, Trust Over IP, and DIF to create a people-powered, privacy-preserving proof of personhood.


This work built on a 2024 paper titled "Personhood Credentials," which proposed using a decentralized architecture based on verifiable credentials and zero-knowledge proofs. Reed's presentation covered design goals, trust models, user experience considerations, and go-to-market strategies for this emerging approach.

Personhood Credentials: From Theory to Practice

The subsequent panel, "Personhood Credentials: From Theory to Practice," brought together Ankur Banerjee, Drummond Reed, Steven McCown (Chief Architect of Anonyome Labs), and Sebastian Rodriguez to examine real-world implementations and practical challenges in creating personhood credentials. The panel explored how technologies like zero-knowledge proofs and selective disclosure can preserve individual privacy while meeting legitimate verification requirements.

Technical Innovations in Identity Infrastructure

The conference also featured several technical presentations on practical implementations of verifiable credentials and digital identity wallets:

Richard Esplin, Head of Product at Dock, presented "Biometrics and Verifiable Credentials: Balancing Security and Privacy," addressing the challenges biometric providers face as regulations become stricter. Esplin shared best practices for integrating biometrics with verifiable credentials without undermining privacy and flexibility.


Dr. Paul Ashley, CTO of Anonyome Labs, discussed the implementation of Hardware Security Modules (HSMs) in digital identity wallets in his talk "Digital Identity Wallet Utilizing a Hardware Security Module." The presentation explored how digital identity wallets can be enhanced through HSM integration to fulfill the requirements of the EU Digital Identity Wallet framework, with analysis of each credential standard's compatibility with various HSMs' cryptographic capabilities.

Dr. Paul Ashley presenting "Digital Identity Wallet Utilizing a Hardware Security Module"

Looking Forward

The DIF community remains the leading forum for innovation in decentralized identity standards and implementations. The frameworks, protocols, and approaches discussed at the conference provide a clear architectural roadmap for solutions that protect individual autonomy while enabling secure, verified interactions between humans and AI systems. Through continued collaboration across our working groups, DIF remains committed to developing open standards that address both current and emerging identity challenges.

To learn more about these topics or to get involved with the Decentralized Identity Foundation's work, visit DIF's website.

Thursday, 15. May 2025

Internet Safety Labs (Me2B)

Me2B or Me2Who Knows: App Stores Fail to Provide Clear Legally Responsible Party


ISL began its life as the Me2B Alliance, striving to create standards to enable greater power symmetry in the digitally facilitated relationship between consumers (“Me-s”) and the companies whose technology they use (“B-s”). We called this the M2B relationship. For mobile apps, all too often it’s a case of “Me2 Who Knows?!” People have a right to know who’s legally responsible for the apps they use, and it is anything but clear in mobile app stores today. App stores are failing to make clear the legal entity who is responsible for apps. ISL has filed responsible disclosures with Apple starting in late 2024 but our repeated attempts have been dismissed.

Anatomy of Responsible Party Info in the App Stores

Both Google and Apple allow for two kinds of developer accounts: individual and organization. The creation of either type of account requires identity validation, but it's a lower bar for individuals than for organizations. Individuals must provide a government issued ID credential before being allowed to open a developer account. This validates the individual's name and address. Organizations, however, must provide a DUNS number to validate the legal existence of the organization.[1][2]

▶ Problem 1: How effective is this level of identity validation? ISL recently found an app developer with 15 apps in the Google Play store with no verifiable legal existence whatsoever. Thus, the process is imperfect at best.

In both stores, the “Account Holder” (to use Apple’s language) is the individual/entity who is in a legal relationship with the app store [owner].

Figures 1a and 1b show two parts of an Apple App store listing. Note that the name in blue under the app name appears to be the Account Holder (Figure 1a). Note that the Information section of the app listing shows five other places where we expect to see the same Account Holder name and websites.

Figure 1a: Apple App Store Example – App Header

 

Figure 1b: Apple App Store Example – App Information


Figures 2a and 2b show a similar annotated view of the Google Play Store app listing. Between Figures 2a and 2b, there are six instances where the Account Holder name appears.

Figure 2a: Google Play Store Example – Part 1

 

Figure 2b: Google Play Store Example – Part 2


This all seems fine. What we see in practice, though, is that the various links and names presented in the app store listing, which should definitively show the name of the party legally responsible for the app, are often inconsistent. Which brings us to additional problems.

▶ Problem 2: Account Holders can create additional user accounts within their account, including users with permissions to submit/delete apps.[3] There's seemingly no governance over this capability, which is left strictly in the hands of the Account Holder.

▶ Problem 3: The app store app listing doesn't indicate if the developer of the app is an individual or a company. This information matters. People deserve to know if they're using an app developed by an individual developer or by a company. No matter what, so long as apps are collecting personal information, people have an unconditional right to know who gets their data and what they're doing with it.

▶ Problem 4: The Apple app store doesn't disclose the location of the responsible app developer, but the Google Play store does.[4] The great thing about app ecosystems is that they foster worldwide participation. The problem is that the responsible developer can be oceans away from consumers, making it difficult or impossible to hold the developer accountable if there are issues.

▶ Problem 5: Apps have broken developer links. It's wildly confusing when the name in blue or green font under the app name differs from the name that appears when you click on the developer link. Imagine going to a grocery store and finding a loaf of bread with no brand or company information; you wouldn't want to eat it. When you click on the Developer Website link for the app shown in Figure 1b, you land not on a site for Kepler47 but on a non-functional page for audiojoy.com (Figure 3).

Figure 3: Developer website URL for 12 Step AA NA Daily Meditation from the Apple App Store: https://audiojoy.com/cgi-sys/suspendedpage.cgi


▶ Problem 6: The Account Holder name from the listing header doesn’t match the name in the privacy policy OR in the App Support link. Figures 4a and 4b illustrate a case where the listed developer in the listing header is Will Aitchison (Figure 4a), but the privacy policy fails to indicate a legally responsible data controller entirely (Figure 4b).

 

Figure 4a: Account Holder name

 

Figure 4b: Privacy policy link for app by Will Aitchison: https://www.firststeporegon.org/docs/PrivacyPolicy_25-05-2018.pdf

 

Figure 4c: Privacy Policy from “Developer’s Website”

Note that there's another layer of confusion for the First Step Oregon app: the privacy policy found on the App Support page differs from the privacy policy linked in the app store (Figure 4c). This is likely a case where the app developer was an individual affiliated with the organization who wrote and submitted the app on the company's behalf. Still, it leaves a question for users: who is responsible? Who does the user contact in the case of issues?

The Boggle: Arcade Edition app in the Apple store shows a similar situation. The Account Holder appears to be Zynga Inc. from the app store listing header (Figure 5a). But when you click on the App Support link you see the Take-Two Terms of Service (Figure 5b). Similarly, the linked privacy policy is also Take-Two's. Finally, this app includes a copyright showing Zynga Inc. in the information section (Figure 5c). In this instance, the original Account Holder (Zynga Inc.) was acquired by another company (Take-Two). Zynga appears to be a wholly owned subsidiary of Take-Two based on its California business registration status, but the "hybrid" information in the app store is confusing.

Figure 5a: Boggle App store listing header – Account Holder: Zynga Inc.

 

Figure 5b: Boggle App Support Link

 

Figure 5c: Boggle App store listing – Information Section

Interestingly, not all Zynga games in the app store show Take-Two info at the App Support link. Figure 6b shows the App Support link for FreeCell, another Zynga game.

Figure 6a: FreeCell App store listing header

 

Figure 6b: FreeCell App Support link


▶ Problem 7: App Information shows two different names. Figures 7a and 7b show elements of the Apple app store listing for the app, Count Money and Coins – Photo Touch Game. In the Information section of the app store listing, Innovative Investments Limited is shown as the Seller, but Grasshopper Apps is the copyright registrant.

Figure 7a: Count Money and Coins App store listing header

 

Figure 7b: Count Money and Coins app – Information section


▶ Problem 8: App store listings with broken privacy policy links. It’s relatively easy to find apps in the app stores whose privacy links are simply broken, non-functional. This is what we found with most of the Innovative Investments Limited apps (Figure 7c).

Figure 7c: http://www.grasshopperapps.com/privacy-policy

Conclusions

NONE of this should be happening today. App stores receive 30% of all app revenues and thus have ample resources to programmatically monitor these situations. Consumers should never have to conduct forensic research in order to figure out who they’re entering into a business relationship with. Here’s a recap of what the app store owners should do:

1. Make it crystal clear on your label who the legal entity responsible for the app is (I'll call this the "responsible developer").
2. Make sure ALL instances in and related to the app store listing consistently show the same responsible developer name.
3. Make sure there is valid, working contact information for the responsible developer.
4. Indicate whether the developer is an individual or an organization.
   a. Ideally, we'd like to know the organization type, as this affects their legal obligations. For instance, for-profit vs. non-profit vs. government entities.
   b. It's also important to indicate coarse location information for the developer—i.e., city, state, and country.
5. Make sure privacy policy links are functional at all times.

Here are recommendations for app consumers:

- If there isn't a privacy policy, don't install the app.
- If there are no privacy details provided in the Apple store, don't install the app.
- If there's no developer contact information provided, don't install the app.
- Contact us if you find these or other problems with app store entries.

Final Thoughts

We are well past the point of understanding the risks of these things, yet we see no systemic changes under development on the part of Apple and Google to put safety measures in place. Perhaps shining this light on some of the issues can help spur action.

Footnotes:

1. https://support.google.com/googleplay/android-developer/answer/13628312?sjid=9574226792909682372-NC
2. https://developer.apple.com/programs/enroll/
3. Summary of roles and permissions for Apple developer accounts: https://developer.apple.com/help/account/access/roles/
4. Location of the developer was met with some warranted and some dubious pushback from Android developers, as shown in this Reddit thread: https://www.reddit.com/r/androiddev/comments/17w3pgz/google_started_displaying_full_legal_name_and/?rdt=50889. Mandatory disclosure of an individual developer's location presents some risks. That said, the developer is capable of getting every user's location information, so it seems a reasonable requirement.

The post Me2B or Me2Who Knows: App Stores Fail to Provide Clear Legally Responsible Party appeared first on Internet Safety Labs.


The Engine Room

Introducing CATio Spaces: A Learning Space to Talk Cybersecurity


We’re excited to introduce CATio Spaces, a new way for civil society organizations to connect with our team and talk about cybersecurity in a low pressure, friendly environment.

The post Introducing CATio Spaces: A Learning Space to Talk Cybersecurity appeared first on The Engine Room.


Blockchain Commons

Blockchain Commons Receives 2025 FROST Grant from Human Rights Foundation


Blockchain Commons was pleased to receive another grant this year from Human Rights Foundation (HRF) and their Bitcoin Development Fund grant to support our work with developers and implementers to expand the usage of FROST.

To quote HRF’s press release:

“For nonprofits operating under authoritarian rule, securing Bitcoin is critical for survival. If private keys (which control access to bitcoin) are compromised, funds can be seized and movements dismantled. Blockchain Commons is a [not-for-profit] supporting the development of FROST (Flexible Round-Optimized Schnorr Threshold Signature), a protocol that strengthens multisignature wallets (bitcoin wallets with multiple private keys) by making them more secure, private, and flexible for shared custody. With this grant, Blockchain Commons will help build critical infrastructure to keep civil society groups operational and financially resilient under dictatorships.”

Blockchain Commons has previously held and documented four meetings supporting FROST in 2023 and 2024, and we plan to continue that work later in 2025. Thanks to HRF, we are also expanding it this year with the development of a FROST signing tool and the creation of a brief "Learning FROST from the Command Line" course. Our goal, as ever, is to help support the implementers who are creating FROST and to make it easier for developers to incorporate FROST into their wallets.

For more on FROST, see our FROST developer pages.

Wednesday, 14. May 2025

Origin Trail

OriginTrail powers the future of ethical AI in healthcare with ELSA

A decentralized repository for secure, scalable genomic data sharing & AI-driven personalized healthcare insights — powered by OriginTrail Decentralized Knowledge Graph (DKG).

We’re excited to announce that OriginTrail is joining forces with the ELSA (European Lighthouse on Secure and Safe AI) initiative to shape the future of decentralized, privacy-preserving artificial intelligence (AI) in healthcare. Digital healthcare today faces three pressing challenges: safeguarding patient privacy, bridging fragmented data silos for seamless interoperability, and meeting strict regulatory requirements without stifling innovation.

At the heart of this collaboration lies DeReGenAI — a decentralized repository for secure, scalable genomic data sharing and AI-driven personalized healthcare, powered by the OriginTrail Decentralized Knowledge Graph (DKG). This initiative tackles the most pressing challenges in digital health: enabling secure, compliant, and user-sovereign sharing of sensitive genomic data while unlocking the full potential of AI-driven personalized healthcare.

Trustworthy AI needs trustworthy infrastructure

AI is transforming healthcare — but for it to do so responsibly, it must be built on a foundation of trust, transparency, and ethics. That’s exactly what OriginTrail brings to the table within the ELSA consortium: an open-source, decentralized infrastructure that ensures data privacy, ownership, and interoperability at scale.

By integrating OriginTrail DKG, DeReGenAI becomes a decentralized repository that puts patients in control of their most personal asset — their genomic data. This enables:

- User-managed permissions: Patients decide who can access their data, when, and for what purpose.
- Privacy-preserving monetization: Individuals can opt to share their data with research institutions or health providers on their own terms.
- AI-ready interoperability: Seamless interaction with AI systems while maintaining the integrity and provenance of the data.
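As a rough sketch of what user-managed permissions could look like in code, consider the example below. The `Grant` model, the DIDs, and the purposes are illustrative assumptions, not DeReGenAI's actual design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Grant:
    grantee: str       # who may access (e.g., a research institute's DID)
    purpose: str       # what the access is for
    expires: datetime  # when the permission lapses

# Grants the patient has chosen to issue.
grants = [Grant("did:example:research-lab", "diabetes-study",
                datetime(2026, 1, 1, tzinfo=timezone.utc))]

def may_access(grantee: str, purpose: str) -> bool:
    # Access is allowed only for a matching, unexpired grant.
    now = datetime.now(timezone.utc)
    return any(g.grantee == grantee and g.purpose == purpose and g.expires > now
               for g in grants)

print(may_access("did:example:research-lab", "diabetes-study"))  # True
print(may_access("did:example:advertiser", "marketing"))         # False
```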

At its core, the OriginTrail DKG acts as a knowledge graph of knowledge graphs — a globally distributed network where each participant maintains control over their own knowledge node. These nodes interact in a fully decentralized manner, eliminating the risks of centralized data silos and single points of failure.

Here’s why this matters:

- Global scale: Access data from diverse sources without compromising security.
- Privacy-first architecture: Data sovereignty is seamlessly integrated into the infrastructure.
- Compliance-ready: Designed with GDPR and other regulatory frameworks in mind.
- Interoperable: Built for seamless integration with AI technologies and healthcare systems.

How does DeReGenAI work?

To power the next generation of personalized healthcare, DeReGenAI employs decentralized Retrieval-Augmented Generation (dRAG) — an evolution of how Large Language Models (LLMs) interact with external data.

Instead of querying a centralized source, the LLMs in DeReGenAI leverage the OriginTrail DKG to retrieve verified, decentralized knowledge. This unlocks:

- More accurate AI insights
- Context-aware healthcare recommendations
- Trustworthy and verifiable AI behavior
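As a rough illustration of that flow, here is a minimal Python sketch that retrieves verified assertions before prompting a model. `Assertion`, `retrieve`, and the example data are hypothetical stand-ins, not the OriginTrail SDK or DeReGenAI's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    source: str   # which DKG node published the knowledge
    content: str  # the statement itself
    proof: str    # cryptographic anchor that lets a client verify it

def retrieve(question: str) -> list[Assertion]:
    # A real implementation would query DKG nodes and verify each
    # assertion's proof before accepting it into the context.
    return [Assertion("node-a", "Variant X is associated with condition Y.", "0xabc...")]

def build_prompt(question: str) -> str:
    # Ground the LLM in retrieved, verifiable knowledge rather than
    # a single centralized source.
    context = "\n".join(f"- {a.content} (source: {a.source})" for a in retrieve(question))
    return (f"Answer using only the verified context below.\n"
            f"{context}\n\nQuestion: {question}")

print(build_prompt("What does variant X indicate for this patient?"))
```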

The ELSA initiative brings together top-tier European academic, industrial, and technology partners, such as University of Oxford, The Alan Turing Institute, NVIDIA, and others, to build a future where AI is both effective and ethical. As part of the ELSA initiative, OriginTrail is used to build a trusted data ecosystem for the AI age — one where people, not platforms, control their data, and where innovation never comes at the cost of ethics.

We’re proud to be driving this change, and even prouder to be doing it alongside an incredible group of partners.

Learn how OriginTrail is powering the shift to human-centric AI at https://origintrail.io/.

Trust the source.

OriginTrail powers the future of ethical AI in healthcare with ELSA was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 13. May 2025

Digital ID for Canadians

Advancing Digital Trust to Strengthen Public Safety

Current Landscape

The public safety sector is undergoing rapid digital transformation, embracing new technologies to enhance emergency response, law enforcement, and disaster management. However, this shift also brings challenges such as data security risks, privacy concerns, and the need for reliable information verification in critical situations.

When DIACC was established in 2012, its goal of creating a secure digital ecosystem extended to all sectors, including public safety. As a trusted authority in digital identity and authentication, DIACC’s mission is more crucial than ever as public safety agencies increasingly rely on digital systems and data sharing to protect communities.

By prioritizing digital trust, Canada can strengthen its public safety infrastructure, improve emergency response times, and enhance collaboration between various agencies. Interoperable frameworks, such as the DIACC Pan-Canadian Trust Framework (PCTF), ensure that public safety systems remain secure, adaptable, and trusted.

Advancing Digital Trust in Public Safety

1. Enhancing Emergency Response and Coordination

Implementing robust digital trust solutions can significantly improve emergency response by:

- Enabling secure, real-time data sharing between agencies
- Verifying the authenticity of emergency communications
- Facilitating rapid and accurate identification of individuals in crises

2. Leveraging the DIACC PCTF for Public Safety

DIACC encourages public safety agencies to adopt the PCTF as a tool to:

- Implement secure identity verification for first responders and emergency personnel
- Enhance the integrity of emergency alert systems
- Improve interagency collaboration through trusted data exchange

3. Addressing Privacy Concerns in Surveillance and Data Collection

To balance public safety needs with individual privacy rights, DIACC recommends:

- Implementing transparent data collection and usage policies
- Adopting privacy-preserving technologies in surveillance systems
- Ensuring proper authentication and authorization for access to sensitive data

4. Combating Misinformation in Crisis Situations

Digital trust frameworks can help public safety agencies:

- Verify the authenticity of information during emergencies
- Establish trusted channels for disseminating critical updates
- Collaborate with social media platforms to combat the spread of false information

5. Enhancing Cybersecurity in Critical Infrastructure

Public safety agencies can use digital trust solutions to:

- Secure communication networks used in emergency response
- Protect critical infrastructure from cyber threats
- Implement robust identity and access management systems for sensitive facilities

Best Practices and the Way Forward

1. Adopt Emerging Technologies

Public safety agencies should leverage technologies like Artificial Intelligence (AI) and Distributed Ledger Technologies (DLT) to enhance their digital trust capabilities and improve emergency response. AI can be used for real-time data analysis and decision-making, while DLT can ensure the integrity and immutability of critical information.

2. Foster Cross-Sector Collaboration

DIACC encourages collaboration between public safety agencies, technology providers, and other stakeholders to develop standardized digital trust practices.

3. Educate and Train

DIACC is committed to educating public safety personnel and the public about digital trust through:

- Specialized training programs for emergency responders
- Public awareness campaigns on the importance of verified information during crises
- Advocacy for regulations that support the implementation of digital trust solutions in public safety

Conclusion

The public safety sector urgently needs robust digital trust solutions to protect Canadians and communities in an increasingly digital world. By adopting frameworks like the PCTF, public safety agencies can enhance their operational efficiency, build public trust, and improve their ability to respond to emergencies, providing a reassuring path forward.

Together, as public safety agency leaders, policymakers, and stakeholders in emergency management, we can create a public safety ecosystem that leverages digital trust to protect citizens, respects their privacy, and solidifies Canada’s position as a secure and effective leader in emergency management.

Download the paper here.


Monday, 12. May 2025

Trust over IP

EGWG 2025-05-01 Agri-food Data Canada – Carly Huitema


Watch full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/2025-05-01-Agri-food-Data-Canada-Carly-Huitema-1.wav

Summary

This briefing document summarizes a presentation about Agri-food Data Canada’s Semantic Engine, a suite of tools designed to enhance research data management in the agri-food sector by making data Findable, Accessible, Interoperable, and Reusable (FAIR). A central focus is the use of machine-readable data schemas authored with the Overlays Capture Architecture (OCA) standard, which is highlighted for its use of derived identifiers (digests) over traditional assigned identifiers for improved reproducibility and authenticity. The document also details the Semantic Engine’s practical tools and ongoing efforts to integrate these standards into existing research infrastructure, addressing challenges like data context and decentralized ecosystems.

Briefing Document: Agri-food Data Canada and the Semantic Engine

Date: May 1, 2025

Subject: Review of Agri-food Data Canada (ADC) project and its Semantic Engine, with discussion on data schemas, identifiers, and integration into research infrastructure.

Sources:

- Excerpts from "2025.05.01-ToIP_EGWG.pdf" (Slides)
- Excerpts from "GMT20250501-145814_Recording.transcript.txt" (Transcript)
- Excerpts from "GMT20250501-145814_Recording_1920x1080.mp4" (Video – used for verification of content and speakers)
- Excerpts from "GMT20250501-145814_RecordingnewChat.txt" (Chat Log)

Attendees/Speakers: Carly Huitema (University of Guelph, ADC), Michelle Edwards (ADC, mentioned), Eric Drury (Forth Consulting), Scott Perry (Digital Governance Institute), Neil Thomson (QueryVision), Steven Milstein (Collab.Ventures), Donald Sheppard.

Overview

This briefing summarizes a presentation by Carly Huitema from Agri-food Data Canada (ADC) to the Trust over IP (ToIP) Ecosystem and Governance Working Group. The presentation focuses on ADC’s efforts to improve research data management in the agri-food sector, specifically through the development of the “Semantic Engine” suite of tools. A central theme is the importance of “FAIR” data (Findable, Accessible, Interoperable, Reusable) and how machine-readable data schemas, particularly using the Overlays Capture Architecture (OCA) standard, contribute to achieving this goal. The discussion also highlights the advantages of derived identifiers (digests) over assigned identifiers for reproducibility, authenticity, and decentralization, and ADC’s ongoing work to integrate their tools and schemas into existing research infrastructure.

Key Themes and Important Ideas

Improving Research Data Management in a Decentralized Ecosystem:

- The research data ecosystem is described as highly decentralized, with independent research groups. While guided by best practices, mandates for standardized approaches are often slow to adopt.
- Incentives can be conflicting, particularly the "publish or perish" culture versus the time needed for thorough data documentation.
- Long-term planning (e.g., 50-year repository funding) is a crucial consideration.
- ADC, a project at the University of Guelph funded by multiple Canadian sources (CFREF, Genome Canada, UoG, OMAFA, Compute Ontario, etc.), aims to address these challenges by working directly with researchers.

Making Agri-food Data FAIR: A core objective of ADC is to make agri-food data FAIR:

- Findable: Ability to identify and locate data resources and their context.
- Accessible: Ability to access (with permission) and use data once found, often requiring open protocols.
- Interoperable: Using standards for data to ensure compatibility, including standard vocabularies.
- Reusable: Data with sufficient context (licenses, provenance, descriptions) can be reused by others for replication or new research.

Carly Huitema states: "Findable is the ability to identify and find resources as well as their context. If it's accessible that once found you can access it with permission and use it interoperable. Certainly lots of our work at Trust Over IP is about how to ensure interoperability of standards including vocabularies and reusable that that there are licenses and provenance and other things that help make this data reusable."

Data Requires Context: Data alone is insufficient; it needs context to be useful. This context includes details like sample source, analysis methods, data schemas, catalogue information, data licenses, data governance agreements, associated publications, methodologies, scripts, and contributors.

The Semantic Engine: ADC is developing the Semantic Engine, a suite of tools designed to help researchers create "rich contextual and machine-readable data schemas."

- The Semantic Engine aims to make the process of documenting data less daunting for researchers.
- It functions as a self-teaching web app, providing guidance and tutorials.
- The engine uses the Overlays Capture Architecture (OCA) standard for writing schemas.

Data Schemas and Overlays Capture Architecture (OCA):

- A data schema describes the attributes of a dataset (e.g., columns in a table) and provides detailed information about them (type, units, description, format, etc.).
- OCA is highlighted as an international and open standard for documenting schemas, developed by the Human Colossus Foundation.
- Two key advantages of OCA:
  - Embeds Digests: OCA schemas can embed derived identifiers (digests/fingerprints) for the schema itself and for its constituent parts. This is crucial for reproducibility and authenticity of digital artifacts.
  - Organized by Features: OCA structures schemas by features (e.g., all descriptions, all units) rather than attribute by attribute (e.g., JSON-LD, XML Schema). This organization offers advantages:
    - Task-based Governance: Allows for governance at the feature level (e.g., assigning responsibility for translation features).
    - Optimized for Feature Management: Facilitates adding or removing features (like languages or units) without altering the identifiers of other features.
    - Mix-and-Match: Enables easier combination and reuse of different schema components.
ADC has developed an "OCA Package" which wraps the core OCA standard with extensions for community-specific features and developing standards, allowing for gradual migration of these features to the core standard as they become accepted.

Assigned vs. Derived Identifiers:

- Assigned Identifiers (Names): Created by a governance body and linked to an object via a lookup table. Resolution requires trusting the authoritative governance body and their lookup table. "If you find an object, you cannot figure out the identifier – you must go to the authoritative body and look it up in their table." Resolution services can only be hosted or delegated by the governance body.
- Derived Identifiers (Digests/Fingerprints): Calculated directly from a digital object using a hashing function; they are unique fingerprints for a specific version of the object. Key for reproducibility and authenticity: "You can identify the resource originally used. You can verify the resource is the same one that was originally used." Anyone can calculate a derived identifier, build a resolution service, and verify the resolution service is pointing to the correct object. Derived identifiers enable objects to be hosted in multiple locations. Carly Huitema humorously quotes, "If you liked it, then you should have put a digest on it." Derived identifiers are excellent for snapshots but do not handle dynamic content or versioning directly; versioning requires a governing authority or a decentralized identifier (DID) system where subsequent versions are linked and controlled.

Tools Provided by the Semantic Engine:

- Schema Authoring Web App: Guides researchers through creating machine-readable schemas.
- Data Entry Excel Generator: Creates an Excel spreadsheet with headers and schema descriptions based on the authored schema, helping standardize data collection. Includes the schema's derived identifier.
- Data Entry on the Web / Data Verification Engine: A tool to verify data sets against the rules defined in a schema. Allows researchers to quickly check for inconsistencies before combining data from multiple sources.

Integration into Research Infrastructure:

- ADC is working to integrate their schemas and tools into existing Canadian and international research infrastructure.
- Schemas can be deposited into long-term research data repositories (e.g., Borealis in Canada), often receiving assigned identifiers like DOIs.
- These schemas, with their embedded derived identifiers, can then be found through federated search engines that index multiple repositories (e.g., Lunaris in Canada, OpenAIRE in Europe).
- This allows researchers to publish papers referencing schemas by their identifiers, enabling others to find and verify the schema used.

Addressing IP and Sensitive Data:

- The Semantic Engine itself does not store user data or schemas, reducing IP concerns related to the platform.
- Schemas are generally less sensitive than the actual data, allowing them to be more openly shared. This enables discovery of datasets and potential collaborations without exposing proprietary information.
- Schemas can include flags for sensitive data attributes (e.g., farm location). While ADC's tools don't currently enforce access controls based on these flags, this information in the machine-readable schema can be used by internal pipelines or other systems to manage sensitive data appropriately (e.g., triggering anonymization).
Future Directions: ADC plans to continue integrating with research infrastructure, add digests as identifiers to more objects, and develop tools for other machine-readable standards (e.g., cataloging metadata, policy rules). They also aim to increase the number of features supported in the schema description process (e.g., range rules, ontology framing).

Key Takeaways for ToIP

- The FAIR data principles are highly relevant to decentralized ecosystems and align well with ToIP goals.
- Derived identifiers (digests) offer significant advantages for reproducibility, authenticity, and decentralized resolution, making them a powerful tool for digital objects within a trust framework (see the sketch after this list).
- The architecture of data schemas (feature-by-feature vs. attribute-by-attribute) has implications for governance, versioning, and the application of derived identifiers to schema components.
- Integrating decentralized identity and verifiable credentials concepts (like OCA) into existing research infrastructure can enhance discoverability, interoperability, and trust in scientific data.
- The Semantic Engine provides a practical example of building user-friendly tools to generate machine-readable metadata, addressing the challenge of widespread adoption of such standards.
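To make the derived-identifier idea concrete, here is a minimal Python sketch using only the standard library. The schema bytes are invented for the example, but the mechanism is the hash-then-compare flow described above.

```python
import hashlib

def derived_identifier(artifact: bytes) -> str:
    # The identifier is calculated from the object itself, so anyone can
    # recompute it without consulting an authoritative lookup table.
    return "sha256:" + hashlib.sha256(artifact).hexdigest()

schema = b'{"attribute": "yield_kg", "units": "kg/ha"}'  # stand-in schema bytes
identifier = derived_identifier(schema)

# Verification: a copy fetched from any host resolves correctly only if it
# is byte-identical to the version originally identified.
retrieved = b'{"attribute": "yield_kg", "units": "kg/ha"}'
assert derived_identifier(retrieved) == identifier
```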

For more details, including the meeting transcript, please see our wiki 2025-05-01 Agri-food Data Canada – Carly Huitema – Home – Confluence

https://www.linkedin.com/in/carly-huitema-27727011/

https://www.semanticengine.org/

The post EGWG 2025-05-01 Agri-food Data Canada – Carly Huitema appeared first on Trust Over IP.


EGWG 2025-03-20: Richard Whitt, GliaNET


Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/ToIP-EGWG-2025-03-20_-Richard-Whitt-GliaNET-1.wav

This document and podcast were generated by Google’s NotebookLM. They provide information about Richard Whitt‘s vision for a more human-centric internet built on trust, as presented to the Ecosystem & Governance Working Group (EGWG) of the Trust over IP Foundation on March 20, 2025. It also draws from materials related to the GLIA Foundation and the GliaNet Alliance, founded by Richard Whitt.

The current state of the web is characterized by surveillance capitalism, where companies prioritize data extraction and manipulation, leading to a lack of trust. Richard Whitt argues that this undermines human agency and necessitates a shift towards a web built on trustworthy intermediaries.

His proposed solution centers around the concept of Net Fiduciaries, a new category of entities that would prioritize users’ interests through the application of fiduciary duties like care and loyalty, similar to professionals in medicine and law. This would be a voluntary approach, driven by ethical considerations and good business practices, rather than imposed regulations on existing platforms.

The GliaNet Alliance is a coalition of companies and organizations committed to this vision. Its goal is to build a “web worthy of trust” by fostering ethical technology practices and using transformative governance principles. The alliance operates as a community of practice, with working groups focusing on areas like business models, policies, practices, and standards.

Key concepts discussed include:

- The SEAMS cycle (Surveillance, Extraction, Analysis, Manipulation), which describes the problematic data practices prevalent on the web.
- Glia, the ancient Greek word for glue, symbolizing trust as the social glue.
- A multi-layered approach to change, encompassing governance, markets, technology (edge tech), and public policy.
- The importance of authentic personal AI agents that operate on behalf of the user, contrasting with the "double agent" nature of current AI assistants that primarily serve platform interests.
- The distinction between agenticity (capability) and agentiality (acting on behalf of) in AI systems.
- The potential for interoperability between AI agents across different platforms.

The alliance is exploring ways to demonstrate trust to the public, potentially through analogies (like a “doctor for your web life”), marketing that emphasizes fiduciary duties, and clear communication about data handling practices. They are also considering mechanisms for recourse in case of breaches.

Richard Whitt’s book, “Reweaving the Web,” further elaborates on these ideas, outlining the problems with the current web and proposing practical steps to create a more user-centric digital future. The book has received positive testimonials from prominent figures in the tech and policy fields.

The GliaNet Alliance is in its early stages, focused on building its community, establishing governance structures, and exploring business models that align with its ethical principles. They are also engaging with policymakers and exploring potential collaborations, including with the Trust over IP Foundation. Consumer Reports is an anchor member and is exploring branding its AI agent as a GliaNet project. Kwaai.ai, an open-source LLM project, is also part of the alliance, aiming to build a platform with fiduciary duties to developers and agents.

For more details, including the meeting transcript, please see our wiki 2025-03-20 GliaNet – Home – Confluence.

https://www.linkedin.com/in/richardwhitt/

https://www.glia.net/

The post EGWG 2025-03-20: Richard Whitt, GliaNET appeared first on Trust Over IP.


Protected: EGWG 2025-04-03: Stephan Wolf, Verifiable Trade Foundation


This content is password protected.

The post Protected: EGWG 2025-04-03: Stephan Wolf, Verifiable Trade Foundation appeared first on Trust Over IP.

Friday, 09. May 2025

FIDO Alliance

Independent: Government to roll out passwords replacement on Gov.UK to boost cyber security


The National Cyber Security Centre said moving to digital passkeys to log on to Gov.UK was a vital step in making the tech more ubiquitous.

The Government has announced plans to replace passwords as the way to access Gov.UK, its digital services platform for the public.

In contrast to using a password and then an additional text message or code sent to a user's trusted device – known as two-factor authentication – passkeys are unique digital keys tied to a specific device that prove the user's identity when they log in, without requiring them to input any further codes.
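For readers curious how that works under the hood, the sketch below shows the challenge-response signature at the core of passkey logins, using the well-known Python `cryptography` package. It is a conceptual illustration only; real deployments use the WebAuthn protocol, and nothing here represents Gov.UK's actual implementation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a keypair; only the public key
# is sent to the server, so there is no shared secret to phish or leak.
device_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local biometric or PIN check...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature (raises InvalidSignature on failure).
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("login verified")
```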


Expert Insights: What’s Next For Cybersecurity? 19+ Key Predictions From Security Experts


At the 2025 RSAC Conference in San Francisco, our team met with dozens of industry experts, cybersecurity professionals, and investors to find out more about the biggest security technologies and trends that are impacting your business. 

Tech-Specific Innovation

While AI was one of the hottest topics at the show, it wasn’t the only topic of discussion; we also heard a lot about the evolving ransomware ecosystem and what organizations need to be doing today to prepare for the arrival of “Q-Day”. 

But perhaps the second-biggest discussion piece was around identity and access security. 

With the rise of AI-powered deepfakes and fraud attempts, we’re seeing more need than ever before for organizations to make the switch from passwords to more secure methods of authentication, such as Passkeys—and many experts were optimistic that this space will see a lot of adoption over the next year. 

Key Insights:

Andrew Shikiar, Executive Director and CEO of the FIDO Alliance: “We’re going to see Passkey deployment continue to grow in regulated industries. That’s really important, because addressing the higher assurance use cases and taking passwords out of play there will give greater confidence for more and more companies to deploy Passkeys at scale, which will further accelerate our journey towards putting passwords fully in the rear-view mirror.”


SC Media: Microsoft moves to default passkey sign-ins


Microsoft has officially shifted to passkeys, such as facial recognition, fingerprint scans, and PINs, as the default sign-in method for all new accounts beginning this month, marking its most significant step yet toward a password-free future, according to TechRepublic.

The move coincides with World Password Day and aligns with the tech giant’s broader commitment to the Passkey Pledge, an industry initiative to eliminate passwords in favor of more secure, phishing-resistant login methods. In a blog post, Microsoft executives Joy Chik and Vasu Jakkal emphasized that passkey users are three times more likely to log in successfully than those using passwords. Although existing account holders can still use passwords, Microsoft is nudging them toward using biometrics or PINs by default. Nearly all Windows users already rely on Windows Hello, and the shift is backed by support from industry partners, including Apple and Google, who are also rolling out FIDO-compliant passkey systems across their platforms. The change promises to streamline security and user experience across the board.

Thursday, 08. May 2025

FIDO Alliance

Gov Info Security: UK Government to Roll Out Passkeys Late This Year


FIDO-Based Authentication to Replace SMS-Based Verification, Says UK NCSC

The U.K. government is set to replace SMS-based verification systems for digital services with passkeys this year in a bid to shore up cyber defenses.

The initiative will be rolled out by the U.K. National Cybersecurity Center using the open authentication standard Fast IDentity Online, or FIDO, as a more “secure and cost-effective solution.”

“The NCSC considers passkey adoption as vital for transforming cyber resilience at a national scale,” the NCSC said. “In addition to enhanced security and cost savings, passkeys offer users a faster login experience, saving approximately one minute per login when compared to entering a username, password and SMS code,” the agency said.


Origin Trail

umanitek launches umanitek Guardian AI agent


Zug, Switzerland (May 6, 2025) — Umanitek AG, a Swiss-based AI company combating harmful content and the risks of artificial intelligence, today announces the launch of their first product umanitek Guardian.

Umanitek’s mission is to fight against harmful content and the risks of AI by developing and deploying technology that serves the greater good of humanity.

Umanitek's first product is an AI agent, umanitek Guardian, built on the Decentralized Knowledge Graph (DKG), a decentralized, trusted network for organizing and tracking immutable data. The DKG allows participating organizations to keep ownership and control of their data while supporting database queries on a need-to-know basis, enabling collaboration without compromising privacy.

The first user of umanitek Guardian will be Aylo, which will leverage the agent to let law enforcement query 7 million hashes of its verified content using natural language.

“Umanitek acts as the bridge. Through Decentralized Knowledge Graph (DKG) decentralized infrastructure, we can integrate advanced Internet safety technologies directly with data. Umanitek Guardian will enable companies, law enforcement, NGOs and individuals to collaborate by uploading and querying “fingerprints” of images and videos to a decentralized directory. This system will help large technology platforms track, identify and prevent the distribution of harmful content. We are committed to developing human-centric AI solutions that promote trust, protect privacy and help make internet safety the standard in the age of AI.”

– Chris Rynning, umanitek Chairman
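As a loose illustration of how such a fingerprint directory might be queried, consider the sketch below. The helper names and data are hypothetical, and production systems generally use perceptual hashes that survive re-encoding rather than the cryptographic hash shown here for brevity.

```python
import hashlib

def fingerprint(media: bytes) -> str:
    # Cryptographic hash used for simplicity; real systems favor
    # perceptual hashes so edited copies of an image still match.
    return hashlib.sha256(media).hexdigest()

# Directory of fingerprints submitted by participating organizations.
directory: set[str] = {fingerprint(b"<verified media bytes>")}

def is_known(media: bytes) -> bool:
    # Platforms check membership without ever sharing the media itself.
    return fingerprint(media) in directory

print(is_known(b"<verified media bytes>"))  # True
print(is_known(b"<some other content>"))    # False
```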

About umanitek

Making internet safety the standard in the age of AI.

Umanitek AG is a Swiss-based AI company combating harmful content and the risks of artificial intelligence. We develop human-centric AI solutions that promote trust, protect privacy and make internet safety the standard in the age of AI.

Our founders bring deep expertise in building reliable, trusted AI systems, are connected to global networks working to reduce internet harm, and are committed to raising awareness about the importance of education and digital responsibility in the age of AI.

Umanitek’s AI infrastructure is safe by design, open by principle and trustworthy by default. With a focus on ethical innovation, umanitek is setting the standards for transparency, accountability and harm reduction in artificial intelligence.

For more information about umanitek, umanitek’s founders and products, visit www.umanitek.ai.

Contacts

For media inquiries, please contact:

Umanitek Communication

media@umanitek.ai

umanitek launches umanitek Guardian AI agent was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 07. May 2025

OwnYourData

MyData Award 2025


We are thrilled to announce that OwnYourData has been honored with the MyData Award 2025 in the Technology category by MyData Global. This recognition celebrates our commitment to developing human-centric data solutions that empower individuals and organisations with greater control over their information, enabling them to manage, share, and benefit from their data in transparent and ethical ways.

The MyData Awards acknowledge organizations and individuals making significant strides in ethical personal data practices across various domains, including technology, business, governance, and thought leadership. This year, over 400 nominations were evaluated, highlighting the growing global emphasis on data rights and individual empowerment.

Our award in the Technology category underscores our efforts in creating interoperable and privacy-focused tools that align with the principles of the MyData Declaration. We are especially honored that both the organisation OwnYourData and our chairperson, Christoph Fabianek, were recognized with MyData Awards. This dual recognition highlights not only our collective impact as a team but also Christoph’s individual leadership and long-standing commitment to building a fair, sustainable, and prosperous digital society.

For more details on the MyData Awards and the list of 2025 recipients, visit the official MyData Awards page.

 

The post MyData Award 2025 appeared first on www.ownyourdata.eu.


FIDO Alliance

NCSC: UK pioneering global move away from passwords


The UK government is set to roll out passkey technology for its digital services later this year as an alternative to the current SMS-based verification system, offering a more secure and cost-effective solution that could save several million pounds annually.


Next Level Supply Chain Podcast with GS1

Making the Amazon Algorithm Work for Your Brand with Jason Boyce


What does it take to grow a brand on Amazon today?

The rules have changed, and Amazon is now less of a storefront and more of a dynamic ad and data platform.

In this episode, Jason Boyce, Founder and CEO of Avenue7Media, joins host Reid Jackson to share what he's learned over 22 years in the e-commerce world. From selling as a reseller to building a brand and leading an agency, Jason explains why success now depends on feeding the algorithm, not pitching to buyers.

He breaks down how streaming TV is driving direct Amazon sales, the problem with locked attributes and UPCs, and why attribution is finally catching up to the promise of connected TV.

In this episode, you’ll learn:

Why traditional retail rules don’t apply to Amazon

How Connected TV can accelerate brand growth

What makes Amazon profitable when compared to DTC

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:53) From failed consulting to full-service agency

(06:26) Fixing locked attributes and bad UPC data

(09:59) Streaming ads that convert like e-commerce

(13:56) Selling to algorithms, not people

(16:34) Comparing Amazon to direct-to-consumer

(20:44) Why CTV is outperforming display ads

(26:48) How AI will change the way we shop

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Jason Boyce on LinkedIn

Check out Avenue7Media

Tuesday, 06. May 2025

Internet Safety Labs (Me2B)

Internet Safety Labs Provides Testimony in Support of LD 1822, An Act to Enact the Maine Online Data Privacy Act


Internet Safety Labs’ Executive Director Lisa LeVasseur testified before the Maine Judiciary Committee in support of LD 1822, the Maine Online Data Privacy Act, while highlighting concerns. Informed by ISL’s 2022 K-12 Edtech safety benchmark and ongoing research, our testimony underscores the need to curb widespread commercial surveillance and risky data practices. LD 1822’s restrictions on sensitive data collection and sales are vital, yet we advocate for stronger protections, particularly for FERPA-covered data, non-profits, and medical apps. The written testimony is available to view in the pdf below:

Open PDF

 

The post Internet Safety Labs Provides Testimony in Support of LD 1822, An Act to Enact the Maine Online Data Privacy Act appeared first on Internet Safety Labs.


FIDO Alliance

ID Tech Wire: FIDO Alliance Launches Payment Authentication Working Group


The FIDO Alliance announced today the launch of a new Payments Working Group (PWG) focused on developing and implementing FIDO authentication solutions specifically for payment use cases. This initiative marks a significant expansion of the organization’s efforts to eliminate password dependencies in critical digital transactions.

The new working group emerges at a time of growing momentum for passwordless authentication in the payments sector. Last year, Visa implemented passkeys for online payments, allowing customers to authorize transactions using biometric authentication rather than traditional passwords.

The Alliance, which now comprises over 250 members, has been steadily expanding its influence across various sectors of digital authentication.


Biometric Update: It’s World Passkey Day, actually: trust, adoption grows for FIDO credential

World Password Day is no longer. The annual day to promote secure password practices has run its course, and the FIDO Alliance (whose stated mission, to be fair, is to bring the world beyond passwords) has rebranded World Password Day as World Passkey Day – an occasion to celebrate the encrypted FIDO credentials that combine data you possess (a digital key or credential) with a biometric trait (something you are, usually a face or fingerprint).
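
For the technically curious, the possession-plus-biometric pairing described above is exactly what the standard WebAuthn browser API captures when a passkey is created. Below is a minimal TypeScript sketch of passkey registration; the relying party, user details, and challenge are illustrative placeholders (a real deployment issues the challenge server-side), so treat this as a sketch rather than any vendor’s implementation.

```typescript
// Minimal sketch of passkey registration via the WebAuthn API.
// example.com, alice@example.com, and user-1234 are placeholders.
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // In production the challenge is generated and verified by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Service", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      residentKey: "required",      // a discoverable credential, i.e. a passkey
      userVerification: "required", // "something you are": biometric or PIN
    },
  };
  // The private key never leaves the authenticator; only the public key
  // (plus attestation) is returned for the service to store.
  return navigator.credentials.create({ publicKey });
}
```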


PC Mag: RIP Passwords: Microsoft Moves to Passkeys as the Default on New Accounts

Anyone setting up a new Microsoft account will soon find they’re encouraged to use a passkey during the sign-up process.

Microsoft introduced passkey support across most of its consumer apps last year, allowing users to sign into their accounts without the need for 2FA methods or remembering long passwords. A year later, it’s removing passwords as the default and encouraging all new signups to use passkeys.

PCMag attempted to sign up for a new Microsoft account on May 2, but it still asked for a password at the time of publication. Microsoft hasn’t shared an exact timeframe for when the change will take place, but you should expect it to happen in the coming days.

This is the first time a new account can be entirely passwordless. Previously, an account had to have a password alongside any passkey.

In a blog post, Microsoft says 98% of passkey attempts to log in are successful, while passwords are only at 32%. Microsoft is also introducing what it calls a “streamlined” sign-in experience for all accounts that “prioritizes passwordless methods for sign-in and sign-up.” It means some UX design changes to highlight passkey functionality.


The Engine Room

Empowering narratives, strengthening ecosystems: A partnership for digital resilience

We’re excited to announce a new partnership with Puentes, an organization dedicated to strengthening the narrative power of social justice movements in Latin America.

The post Empowering narratives, strengthening ecosystems: A partnership for digital resilience appeared first on The Engine Room.


We Are Open co-op

What makes for a good AI Literacy framework?

Reviewing the landscape and sharing our approach

WAO is currently working on a project with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. Our focus is on AI Literacy for 14–19 year olds and you can read more context in our project kick-off blog post.

One of the deliverables for this project is to review AI Literacy frameworks, with a view to either making a recommendation, or coming up with a new one. It’s not just a case of choosing one with a pretty diagram!

Frameworks are a way in which an individual or organisation can indicate what is worth paying attention to in a given situation. Just as the definition of ‘AI Literacy’ varies by context, the usefulness of a framework depends on the situation. In this post, we share the judgements we made using criteria we developed and share our process in case it is useful for your own work.

Narrowing down the list

While there can be some commonality and overlaps between frameworks for different contexts, the diversity of possible situations is huge. There can never be a single ‘perfect’ framework suitable for every situation. For example, just imagine what ‘AI Literacy’ might look like for (adult) engineers and developers compared with children of primary school age. As with our work at Mozilla, you can define what a ‘map’ of new literacies might look like, but it can only ever be one of many that describe the overall ‘territory’.

With our work on this project, we had to bear in mind our audience (14–19 year olds) and the mission of the BBC. There is a long history of Critical Media Literacy which is particularly relevant to our research here, and which was one of the factors we considered when reviewing frameworks.

With a relatively short project timeline of three months, we needed a way to quickly classify the approximately forty frameworks and related documents we had collected. We shared relevant details with Perplexity AI (using the Claude 3.7 Sonnet model) over multiple conversations. This helped us reduce our initial list of around 40 frameworks to a more manageable 25.

Coming up with criteria

Next, we came up with some criteria by which to judge them. These criteria were informed by our own work in the area for 15+ years, along with either interviews or surveys with over 35 experts in the field. While these criteria are meant as a heuristic for this project, they are also a useful starting point for asking questions about any project relating to new literacies.

Definition of AI — ensures everyone has the same starting point

Development process — adds transparency and credibility

Target audience — helps match the framework to its users

Real-world relevance — shows how ideas work in practice

AI safety and ethics — addresses both risks and responsible use

Skills and competencies listed — clarifies what learners should be able to do

Reputable source — increases trust in the framework

We included both safety and ethics because both are needed for using AI in a responsible and trustworthy way.

Categorising the most relevant frameworks

We used a traffic light (red/yellow/green) categorisation system to score each framework on the above criteria. Only one of the frameworks we reviewed, the OECD Framework for Classifying AI Systems, meets all criteria with a ‘green’ rating.

There are several other frameworks which we judge as meeting the criteria as ‘green’ except for one criterion (‘yellow’). Listed alphabetically by organisation, these are:

Artificial Intelligence in Education (Digital Promise)

AI Literacy in Teaching and Learning: A Durable Framework for Higher Education (EDUCAUSE)

Digital Competence Framework for Citizens (European Commission)

Developing AI Literacy With People Who Have Low Or No Digital Skills (Good Things Foundation)

AI competency framework for students (UNESCO)

There are other frameworks which we have decided to include that scored ‘yellow’ on two or more criteria. For example, the Open University’s Critical AI Literacy Framework, Ng et al.’s article, and Prof. Maha Bali’s blog post linked from her framework all do a good job of defining Critical AI Literacies. We would also note that the Digital Education Council’s list of skills and competencies relating to AI Literacy is useful to pair with those from EDUCAUSE, UNESCO, and the European Commission.

Next steps

As mentioned earlier, our brief for this project involves either making an informed recommendation of a framework, or coming up with our own. We’re currently leaning toward the latter, but either choice will be the subject of a future blog post.

If you have questions, concerns, comments, or indeed a particularly useful resource for this project, please do get in touch. You can leave a comment here, or use the contact details on our main website!

Monday, 05. May 2025

FIDO Alliance

The Verge: Microsoft goes passwordless by default on new accounts

After supporting passwordless Windows logins for years and even allowing users to delete passwords from their accounts, Microsoft is making its biggest move yet toward a future with no passwords. Now it will ask people signing up for new accounts to only use more secure methods like passkeys, push notifications, and security keys instead, by default.

The new no-password initiative by Microsoft is accompanied by its recently launched, optimized sign-in window design with reordered steps that flow better for a passwordless and passkey-first experience.

Although current accounts won’t have to shed their passwords, new ones will try and leave them behind by not prompting you to create a password at all:

As part of this simplified UX, we’re changing the default behavior for new accounts. Brand new Microsoft accounts will now be “passwordless by default.” New users will have several passwordless options for signing into their account and they’ll never need to enroll a password. Existing users can visit their account settings to delete their password.

With today’s changes, Microsoft is renaming “World Password Day” to “World Passkey Day” instead and pledges to continue its work implementing passkeys over the coming year. This time last year, the company implemented passkeys into consumer accounts. Microsoft says it’s seeing “nearly a million passkeys registered every day,” and that passkey users have a 98 percent success rate of signing in versus 32 percent for password-based accounts.


BetaNews: Research confirms consumers are turning to passkeys to protect their accounts

As you’ll already know, today is World Passkey Day and the FIDO Alliance has released an independent study of over 1,300 consumers across the US, UK, China, South Korea, and Japan to understand how passkey usage and consumer attitudes towards authentication have evolved.

The results are encouraging: they find 74 percent of consumers are aware of passkeys and 69 percent have enabled passkeys on at least one of their accounts.

Among those who have used passkeys, 38 percent report enabling them whenever possible. More than half of consumers now believe passkeys are both more secure (53 percent) and more convenient (54 percent) than passwords.

This increase in passkey adoption is likely driven by the shortcomings of passwords. Last year, over 35 percent of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47 percent of consumers will abandon purchases if they have forgotten their password for that particular account.

To further encourage organizations to embrace the shift to passkeys, the FIDO Alliance has also launched the Passkey Pledge, a voluntary pledge for online service providers and authentication product and service vendors committed to embracing passkeys.

“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods that have led to decades of data breaches, account takeovers and user frustration, which imperil the very foundations of our connected society,” says Andrew Shikiar, executive director and CEO of the FIDO Alliance. “We’re thrilled by the fact that over 100 organizations around the world signed our Passkey Pledge, and we are pleased to support the market in their march towards passkeys through a variety of freely available assets, including our market-leading Passkey Central resource center.”

The full report is available from the FIDO Alliance site.


Cyber Security News: 15 Billion User Gain Passwordless Access to Microsoft Account Using Passkeys

As the first-ever World Passkey Day replaces the traditional World Password Day, Microsoft joins the FIDO Alliance in celebrating a milestone achievement: over 15 billion online accounts now have access to passwordless authentication through passkeys.

This significant shift marks a turning point in digital security as the tech industry moves decisively away from vulnerable password-based systems.

“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods,” said Andrew Shikiar, Executive Director and CEO of the FIDO Alliance. 


Forbes: Microsoft Warns All Windows Users—Delete Your Password

Microsoft is on a mission to delete passwords for a billion users, given that “the password era is ending.” The Windows-maker warns users that “bad actors know it, which is why they’re desperately accelerating password-related attacks while they still can.” And those attacks are now making headlines weekly.

The answer is passkeys, which link your account security to your physical device security. That means unless an attacker has access to both your hardware and your unlock method (biometric or PIN), they cannot sign in to your account.
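
To make that concrete, here is a minimal sketch of what a passkey sign-in looks like through the standard WebAuthn browser API. The challenge and relying-party ID shown are placeholders a real server would supply and verify; this is an illustration of the flow, not Microsoft’s implementation.

```typescript
// Minimal sketch of a passkey sign-in with WebAuthn. The authenticator
// signs a server-issued challenge only after the local unlock
// (biometric or device PIN) succeeds.
async function signInWithPasskey(challenge: Uint8Array): Promise<Credential | null> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,                    // supplied and later verified by the server
      rpId: "example.com",          // placeholder relying-party ID
      userVerification: "required", // enforce biometric/PIN on the device
    },
  });
  // The server checks the returned signature against the stored public
  // key; there is no shared secret for a phisher to capture or replay.
  return assertion;
}
```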

More than most, Microsoft is not just promoting passkeys but also password deletion: “If a user has both a passkey and a password, and both grant access to an account, the account is still at risk for phishing. Our ultimate goal is to remove passwords completely and have accounts that only support phishing-resistant credentials.”

The FIDO Alliance, the organization charged with promoting passkeys, has taken to the internet airwaves this time around to “launch a Passkey Pledge to further accelerate [the] global movement away from passwords.”

Its latest research found that “over 35% of people had at least one of their accounts compromised due to password vulnerabilities, [and] 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security.”

FIDO has welcomed Microsoft’s password deletion as industry leading. “This is an exciting and seminal milestone as Microsoft is taking passwords out of play for over a billion user accounts,” its CEO Andrew Shikiar told me, “who can now instead leverage user-friendly, phishing-resistant passkeys. Microsoft’s leadership in doing so today will help encourage more service providers to do the same, which moves us collectively closer to the day when passwords are fully in our rear-view mirror.”


TechRadar: World Password Day 2025: All the news, updates and advice from our experts as it happened

Moving past passwords is improving brand trust

The FIDO Alliance has also recently invited companies to participate in the World Passkey Pledge to create a more secure future, and move past the vulnerability and hassle of passwords.

Simon McNally, Cybersecurity Expert at Thales, said: “Passwords have long been a weak link in digital security, forcing consumers and businesses into a frustrating cycle of password resets and potential breaches. We welcome the FIDO Alliance’s commitment to World Passkey Day and its push for a passwordless future. Passkeys provide a seamless and secure authentication experience, eliminating the risks and frustrations associated with traditional passwords.

Passkeys are automatically generated and securely stored, removing the burden of creating and managing complex passwords. They also enhance privacy by allowing authentication without sharing sensitive data, reducing the risk of breaches. As trust in digital security becomes more critical, businesses must prioritise passwordless solutions to protect users and build brand confidence.”

The Passkey Pledge for a Passwordless Future

To commemorate World Password Day (or, as it will henceforth be known, World Passkey Day), the FIDO Alliance has released a survey on the usage of passkeys which found that 74% of consumers are aware of passkeys, meaning that consumers are aware of the potential value a passkey login experience can bring. To support this, the survey also found that 69% of consumers have enabled passkeys on at least one of their accounts.

Furthermore, of those who have used passkeys, 38% report enabling them whenever possible, suggesting that some consumers already see the added user experience and security benefits passkeys bring. In fact, more than half of consumers believe passkeys are both more secure (53%) and more convenient (54%) than passwords. Many businesses and organizations have already signed the Passkey Pledge, including Amazon, Apple, Google, Microsoft, Samsung, and many more!

A pivotal moment

Andrew Shikiar, executive director and CEO of the FIDO Alliance, commented on both the recent survey, and the Passkey Pledge:

“This year’s World Passkey Day comes at a pivotal moment for user authentication around the world – with a rapidly growing number of service providers (including nearly half of the world’s top 100 websites) offering billions of user accounts the option to sign in with passkeys instead of passwords. Well over 100 organizations have taken the Passkey Pledge, indicating their commitment towards a future free from the risk and burdens of passwords.

Consumers are not only increasingly aware of passkeys, they’re using them more frequently: 69% of respondents to our recent survey are enabling them on at least one account, and 38% are now enabling them whenever possible.

Passkeys are so intuitive to use that once users integrate passkeys, they rarely go back. This is good for consumers who are frustrated by password reliant sign-in processes — 35% of whom said they experienced account compromises as a result of password vulnerabilities last year — and e-commerce retailers alike.

This shift isn’t just about innovation or bottom lines; it’s about rebuilding digital trust and creating a safer, more efficient internet for everyone.”

Friday, 02. May 2025

MyData

MyData Town Hall: Updating the Declaration for 2025 and Beyond

We recently updated the MyData White Paper. Unlike previous editions that stated “this is the way it should be,” this fourth edition asks questions about how MyData’s vision, principles, and […]

Thursday, 01. May 2025

FIDO Alliance

Celebrating World Passkey Day 2025: Showcase of Real-World Passkey Deployments

May 1, 2025

In recognition of World Passkey Day (formerly World Password Day), the FIDO Alliance is putting the spotlight on real-world passkey deployments from leading organizations around the globe. Read on for highlights of the successes companies in various industries are seeing from delivering faster, easier, and more secure sign-ins with passkeys—showcasing the global commitment to move away from passwords.

The FIDO Alliance also released today an independent study of consumers to understand how passkey usage and consumer attitudes towards authentication have evolved. The research found that in the last year, over 35% of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security. The full report is available here.

ABANCA’s mobile app serves over 1,200,000 customers a month, making it the bank’s largest branch. Today, more than 42% of its mobile customers are using passkeys via the bank’s ABANCA Key product. As a result, more than 11,000,000 high-risk transactions have been protected without technical or service incidents, and thanks to the prioritization of UX, ABANCA has achieved a Customer Effort Score (CES) of 4.7.

Aflac was the first major insurance company to adopt passkeys in the U.S. Aflac partnered with Transmit Security to launch its passkey authentication initiative. Today, only the first phase of the project is complete, and yet more than 500,000 Aflac customers have enrolled a passkey, resulting in a 32% reduction in password recovery requests. This has yielded 30,000 fewer calls per month to the call center for identity issues. Aflac reports that the highest enrollment rates occur at the point of registration, reinforcing the FIDO Alliance’s Design Guidelines recommendation to prompt customers during account-related tasks. The steady, organic adoption of passkeys by Aflac customers continues to grow daily and directly contributes to measurable improvements in cost reduction and customer experience.

KDDI now has more than 13.6 million au ID customers that use FIDO and has seen a dramatic decrease (down nearly 35%) in calls to their customer support center as a result. KDDI manages FIDO adoption carefully for both subscribers and non-subscribers. 

LY Corporation property Yahoo! JAPAN ID now has 28 million active passkey users. Approximately 50% of user authentication on smartphones now uses passkeys. LY Corporation said that passkeys have a higher success rate and are 2.6 times faster than SMS OTP.

Mercari has seen 9 million users enroll with passkeys, and is enforcing passkey login for users who have enrolled with synced passkeys. Notably, there have been zero phishing incidents at Mercari Shop and Mercoin (a Mercari subsidiary) since March 2023.

Microsoft began rolling out passkeys for Microsoft consumer accounts in 2024. It now sees nearly one million passkeys registered every day. Microsoft has found that users signing in with passkeys are three times more successful at getting into their account than password users (about 98% versus 32%), passkey sign-ins are eight times faster than traditional password + MFA flows, and passwordless-preferred UX has reduced password use by over 20%.

Nikkei rolled out passkeys in February and is already seeing thousands of customers using passkeys. Additionally, they are seeing almost no inquiries about how to use passkeys at the support desk.

NTT DOCOMO has increased its passkey enrollments, and passkeys are now used for more than 50% of authentication by d ACCOUNT users. NTT DOCOMO notably reports significant decreases in successful phishing attempts, and there have been no unrecognized payments at the docomo Online Shop since September 2022, a period in which NTT DOCOMO has continuously improved UX, including increasing the number of passkey-enabled services.

Samsung Electronics’ Galaxy smartphones support fast and convenient logins through biometric authentication and FIDO protocols. Ease of use, speed, compatibility across services, and status as an industry standard made passkeys a compelling choice for Samsung Electronics.

VicRoads is the vehicle registration and driver licensing authority in Victoria, Australia. It registers over six million vehicles annually and licenses more than five million drivers. Within the first weeks of deployment with its passkey vendor Corbado, passkey adoption significantly exceeded VicRoads’ expectations. Users embraced the phishing-resistant authentication method, benefiting from a frictionless login experience optimized for speed and security. The exceptionally high passkey activation rate – peaking at 80% on mobile devices and over 50% across all platforms – led to a 30% passkey login rate within the first seven weeks. Uptake continues to rise steadily, translating into measurable operational benefits, including reduced authentication-related support tickets, lower SMS OTP costs, and improved user experience and security.

Zoho Corporation has rolled out passkeys to its 100+ million zoho.com customers and has seen a resulting 30% increase month over month in passkey adoption and a 10% drop in password reset queries. As a next step, the company will begin its rollout to Zoho Vault customers in May.

Read the full case studies from ABANCA, Microsoft, Nikkei, Samsung Electronics, VicRoads and Zoho Corporation to learn more about how these companies are discovering the benefits of passkey adoption. To learn more about passkey implementation through other documented case studies, visit the FIDO Alliance’s resource library. Have a case study to share? Contact us!


FIDO Alliance Champions Widespread Passkey Adoption and a Passwordless Future on World Passkey Day 2025

New global survey: More than two thirds of users familiar with passkeys turn to them for simpler, safer sign-ins as password pain persists

MOUNTAIN VIEW, Calif., May 1, 2025 – With digital security more critical than ever, the FIDO Alliance is commemorating World Passkey Day 2025 with the release of an independent study of consumers across the U.S., U.K., China, South Korea, and Japan to understand how passkey usage and consumer attitudes towards authentication have evolved. 

The research found that in the last year, over 35% of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security. 

World Passkey Day serves as the FIDO Alliance’s annual call to action for individuals and organizations to adopt passkey sign-ins, making the web safer and more accessible.

Highlights from the research show consumer passkey awareness is on the rise and outlines several key trends in adoption among those who are aware of passkeys, including:

74% of consumers are aware of passkeys.

69% of consumers have enabled passkeys on at least one of their accounts.

Among those who have used passkeys, 38% report enabling them whenever possible.

More than half of consumers believe passkeys are both more secure (53%) and more convenient (54%) than passwords.

The survey report is available at https://fidoalliance.org/wpd-report-2025-consumer-password-passkey-trends/, which includes additional insights on how passkey adoption is trending with consumers and organizations to improve global digital access, authentication, and security.

“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods that have led to decades of data breaches, account takeovers and user frustration, which imperil the very foundations of our connected society,” said Andrew Shikiar, Executive Director and CEO of the FIDO Alliance. “We’re thrilled by the fact that over 100 organizations around the world signed our Passkey Pledge, and we are pleased to support the market in their march towards passkeys through a variety of freely available assets, including our market-leading Passkey Central resource center.”

To further encourage organizations to embrace the shift toward passkeys, the FIDO Alliance also launched the Passkey Pledge, a voluntary pledge for online service providers and authentication product and service vendors committed to embracing passkeys. The Passkey Pledge has received commitments from over 100 organizations in just over 20 days. A full list of companies that have taken the pledge can be found here.

The availability of passkeys has steadily increased with implementation reaching 48% of the world’s top 100 websites as enterprises and service providers collectively seek to embrace a new era of faster sign-ins, higher success rates, fewer account takeovers, lower support costs, and reduced cart abandonment.

To learn how to start your organization’s passwordless journey, or to begin using passkeys today, visit: https://www.passkeycentral.org/home

Notes to editors: This SurveyMonkey online poll was conducted from April 13-14, 2025, among a global sample of 1,389 adults ages 18 and up. Respondents for this survey were selected from the nearly 3 million people who take surveys on the SurveyMonkey platform each day. Data for this survey has been weighted for age, race, sex, education, and geography to adequately reflect the demographic composition of the United States, United Kingdom, China, South Korea and Japan. The modeled error estimate for this survey is plus or minus 3.5 percentage points. To calculate the proportion of the world’s top websites and services that support passkeys, the FIDO Alliance combined publicly available information with its own data on passkey deployments.

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.

Contact

press@fidoalliance.org


We Are Open co-op

We Are Nine

It’s our birthday, so we took the day off!

It feels fitting, and yes, admittedly somewhat contrived, that our birthday falls on May 1st. It’s a moment when people around the world celebrate the power of collective action and the achievements of workers everywhere.

This year, our anniversary feels even more special. We’ve had our share of triumphs and certainly challenges, but nine years in (which is a long time in internet years!) we’re still here.

2025 has been named the United Nations International Year of Co-operatives, with the theme “Co-operatives Build a Better World.” The global co-operative movement deserves this spotlight, showing how co-ops like ours are making a difference by putting people and planet before profit.

https://ica.coop/en/cooperatives/facts-and-figures

At We Are Open, we believe in doing business differently. We’re a worker-owned co-op, which means we make decisions together, share responsibility, and support each other to do meaningful work. Over the past nine years, we’ve learned (and re-learned!) that openness, collaboration, and trust are at the heart of what makes co-ops so important.

So today, we’ll down tools and raise a glass to our members, clients, friends, comrades in CoTech and workers.coop and the wider co-op community. Thanks for being part of our journey so far.

Solidarity and celebration from all of us at We Are Open! If you’d like to wish us happy birthday, or have a problem we may be able to help with, email us without delay at hello@weareopen.coop


DIF Blog

DIF Newsletter #51

May 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Group Updates
4. User Group Updates
5. Announcements
6. Community Events
7. DIF Member Spotlights
8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Labs Beta Cohort 2 RFP is here!

Exciting news for identity innovators! DIF Labs has just announced their request for proposals for Beta Cohort 2, focused on driving high-leverage work at the intersection of identity, trust, and emerging technologies. The program seeks proposals from builders and researchers in five key focus areas: Personhood Credentials, Content Authenticity and Assertions, Applied Cryptography, Verifiable AI, and Industry-Aligned Applications. Project leads must be DIF members, with proposals due by May 20, 2025. Selected projects will receive mentorship, ecosystem support, and collaboration opportunities over a 2-3 month development period. For complete details on submission requirements, evaluation criteria, and the application process, visit the full announcement on the DIF Labs website.

DIF Hospitality & Travel Working Group is Live

The Decentralized Identity Foundation (DIF) has launched a new Hospitality & Travel Working Group focused on developing standards for self-sovereign data exchange in the travel industry. This initiative will create frameworks allowing travelers to control their personal information while enabling seamless interactions with service providers. Led by chair Douglas Rice, the group will address traveler profiles, data portability, and interoperability across the travel ecosystem. For more details, see DIF's announcement:

DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group: The Decentralized Identity Foundation (DIF) has officially launched its Hospitality & Travel Working Group, evolving from the ongoing H&T Special Interest Group (SIG). This new working group will focus on developing standards, schemas, processes, and documentation to support the self-sovereign exchange of data between travelers, services, and intermediaries in the hospitality and travel industry. (Decentralized Identity Foundation Blog)

Algorand Releases the Universal Chess Passport

Algorand has partnered with World Chess to develop a "Universal Chess Passport" using blockchain-based digital identity and verifiable credentials technology. This innovative system will allow chess players to maintain portable digital identities and credentials across platforms, enabling them to seamlessly transfer their achievements, rankings, and records between online platforms and in-person tournaments. The proposal aims to address fragmentation in the chess ecosystem while maintaining privacy and security through decentralized identifiers (DIDs). A live discussion about the initiative will be held on May 6th featuring representatives from Algorand, World Chess, and the Decentralized Identity Foundation. For more details, visit Algorand's announcement page:
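
To make the idea concrete, a chess achievement in such a system would plausibly travel as a W3C-style verifiable credential bound to the player’s DID. The sketch below is purely illustrative: the credential type, DIDs, and fields are invented for this example and are not taken from the Algorand or World Chess design.

```typescript
// Hypothetical shape of a portable chess credential, loosely following
// the W3C Verifiable Credentials data model. Every value is invented.
const chessRatingCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "ChessRatingCredential"], // invented type
  issuer: "did:example:world-chess",            // the issuing body's DID
  issuanceDate: "2025-05-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:player-4581",              // the player's own DID
    rating: 2415,
    title: "International Master",
  },
  // A real credential carries a cryptographic proof from the issuer, so
  // any platform can verify it without contacting the original site.
  proof: { /* issuer signature omitted in this sketch */ },
};
```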

Algorand x World Chess - Universal Chess Passport: Tired of rebuilding your chess identity across every platform? A new blockchain-based system from Algorand and World Chess proposes portable, verifiable credentials for chess players, bringing fairness, reputation, and rewards into one unified ecosystem. (Algorand Foundation)

🛠️ Working Group Updates

Browse our working groups here

Hospitality & Travel Working Group

The newly-launched Hospitality & Travel Working Group started strong. They discussed their draft implementation guide and schema development, exploring the complexities of travel stages and verifiable data. They refined key terms in their glossary, including standardizing on the term "HATPro" (hospitality and travel profile). The team aims to have a substantial portion of the schema ready to announce at the HITEC Conference in mid-June.

👉 Learn more and get involved

Creator Assertions Working Group

The Creator Assertions Working Group held productive meetings in both Americas/EU and APAC time zones this month. Discussions focused on identity claims, trust profiles, and collaboration with external groups including the JPEG Trust Group and Project Origin. The group is preparing three assertion specs for approval and considering integrating first-person credentials. The group will soon hold an election for co-chairs, with Scott Perry and Alex Tweeddale as candidates.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group finalized the DIF recommended methods process, agreeing to use "DIF Recommended" consistently throughout their documentation. They heard a detailed presentation on DID Peer version 4, exploring its features and advantages over other methods. The group also decided to dedicate full sessions to DID method deep dives as practice runs for their evaluation process.

👉 Learn more and get involved

Identifiers and Discovery Working Group

The DID Traits team is preparing to release version 1.0 of their specification, focusing on making traits easier to assess. They’re adding new traits including key validation and long-term availability, while improving terminology for clarity. The team removed software trade references due to complexity and expressed satisfaction with their progress toward the 1.0 milestone.

The did:webvh team is finalizing their 1.0 specification, including updates to examples and clarification of terminology. They addressed concerns about performance with large logs, cryptographic agility, and improving DID availability through watchers. The team is developing a JSON schema for data model validation and planning a test suite, while seeking acknowledgement from the DIF Technical Steering Committee for the 1.0 release.

👉 Learn more and get involved

🪪 Claims & Credentials Working Group

The Credential Schemas team developed a new schema directory structure with community schemas, draft schemas, and recommended schemas folders to better organize their work. They refined the proof of age schema, including improvements to boolean threshold indicators and simplifying schema names for clarity and reuse. The team welcomed new members and discussed potential synergy with the Oasis working group.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ team focused on pseudonym implementations and designing approaches for everlasting unlinkability in credential systems. They considered trade-offs between post-quantum security and privacy, concluding that their baseline approach with pseudonyms offers preferable privacy protections. Team members are exploring the potential of using Rust instead of C++ for implementation and plan to interact more with the CFRG to advance the project.

👉 Learn more and get involved

DIF Labs Working Group

DIF Labs has launched its RFP for Beta Cohort 2. Read more here.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm team discussed implementing binary encoding in the next version, proposing either a DIDComm version 3 with built-in binary encoding or adding optional CBOR envelopes in version 2.2. They welcomed new member SwissSafe, who expressed interest in contributing to standardizing CBOR for DIDComm messaging. The team agreed to implement a flag designator in the header for different encodings.
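
As a rough illustration of the option under discussion, the snippet below encodes a DIDComm-style message with CBOR and a header flag naming the encoding. It assumes the `cbor` npm package (its `encode`/`decodeFirstSync` helpers) purely for demonstration; the envelope fields and media type are hypothetical, not decisions the working group has made.

```typescript
import { encode, decodeFirstSync } from "cbor"; // assumed library choice

// Hypothetical DIDComm-style envelope: a "typ" flag in the header tells
// the recipient which encoding the message body uses.
const message = {
  typ: "application/didcomm-plain+cbor", // invented media type for this sketch
  id: "f3c1a2b4-0001",
  type: "https://didcomm.org/basicmessage/2.0/message",
  body: { content: "hello over binary" },
};

const wire = encode(message);           // compact binary for the wire
const decoded = decodeFirstSync(wire);  // round-trip back to an object
console.log(decoded.typ);               // "application/didcomm-plain+cbor"
```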

👉 Learn more and get involved

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.

🌎 DIF Special Interest Group Updates

Browse our special interest groups here


DIF Hospitality & Travel SIG

The H&T SIG held multiple productive sessions in April. One meeting featured representatives from the European Digital Identity Wallet Consortium who presented on the EUDI wallet's development and potential applications in travel and hospitality. Another session discussed food model development, trip profile components, and context-specific preferences for travelers. The team continues refining their implementation guide and developing a comprehensive strategy for travel wallets that support personalization, with plans to showcase their schema at the HITEC Conference in mid-June.

👉 Learn more and get involved

DIF China SIG

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN call featured a presentation on the Verifiable Legal Entity Identifier (vLEI) and its relation to the Global Legal Entity Identifier Foundation. Steering Committee member Catherine Nabbala of Finema, a qualified vLEI issuer, detailed the vLEI ecosystem governance framework and verification processes. The group discussed implementation challenges including key management, wallet technologies, and potential applications in various industries. Participants showed particular interest in how vLEI could be integrated with traditional SSI architecture using did:webs.

👉 Learn more and get involved

DIF Africa SIG

The DIF Africa SIG hosted a presentation by Karla McKenna on the Global Legal Entity Identifier Foundation (GLEIF) and Verifiable Legal Entity Identifier (vLEI). Karla discussed organizational identity in the digital world and potential applications of vLEI in various sectors. The meeting explored the process of becoming a qualified vLEI issuer and how such technology could benefit government and public services. Participants expressed interest in applying these concepts to streamline business processes, particularly in Africa.

👉 Learn more and get involved

DIF Japan SIG

The Japan SIG discussed NodeX's migration from Sidetree to the did:webvh method due to concerns about content-addressable storage stability, Bitcoin network performance, and Sidetree maintenance status. Technical discussions included DID structure, the resolve process, did:webvh method advantages, and IoT device authentication and trust. The team also explored global deployment strategies and trust framework implementation, particularly for IoT devices in critical infrastructure and energy sectors.

👉 Learn more and get involved

DIF Korea SIG

👉 Learn more and get involved

📖 DIF User Group Updates
DIDComm User Group

The DIDComm User Group explored Colton's WebRTC project that enabled browser-to-browser voice communication over DIDComm. Members discussed binary encoding implementation options for DIDComm messages, focusing on CBOR while keeping the door open to other encodings. The group also demonstrated a chat system using DIDComm's mediator capabilities for seamless communication between users, even when offline. Future discussions will focus on use case documentation templates and integrating with AI-related protocols like the Model Context Protocol.

👉 Learn more and get involved

Veramo User Group

👉 Learn more and get involved

📢 Announcements at DIF

Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.

🗓️ DIF Members

👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Find more information on DIF’s slack or contact us at community@identity.foundation if you need more information.


FIDO Alliance

Consumer Password and Passkey Trends: World Passkey Day 2025

Passkeys are no longer just a concept: The future of sign-in is here and consumers are ready. Built on the open authentication standards developed by the FIDO Alliance, passkeys are quickly gaining momentum among global service providers. Why? Because they offer a frictionless, phishing-resistant, passwordless sign-in experience that is redefining digital security and user convenience.

Ahead of World Passkey Day 2025, the FIDO Alliance commissioned an independent survey of 1,389 people across the U.S., U.K., China, South Korea, and Japan to provide additional insights into how authentication preferences are evolving in real time.

The research shows people continue to struggle with traditional passwords:

36% of respondents said they’ve had at least one account compromised due to weak or stolen passwords.

48% admitted they’ve abandoned an online purchase simply because they forgot their password.

Read the full results of the survey in this eBook.

Download the Report

Read the Press Release

Tuesday, 29. April 2025

FIDO Alliance

FIDO Alliance Launches Payments Working Group

April 29, 2025 – The FIDO Alliance has launched a Payments Working Group (PWG) to define and drive FIDO solutions for payment use cases. The PWG will also act as subject matter experts and internal advisors within the FIDO Alliance on issues affecting the use of FIDO solutions for payment use cases. The PWG is co-chaired by Henna Kapur of Visa and Jonathan Grossar of Mastercard, with other FIDO Alliance member company participants including American Express; Cartes Bancaires; Discover; Futurae; Infineon; OneSpan; PayPal; Royal Bank of Canada – Solution Acceleration & Innovation; and Thales.

The PWG will focus on three areas:

1. Identify and evaluate specific requirements for payment authentication. Requirements will include those in the areas of UX, security, and regulation unique to payments;

2. Identify and evaluate existing and emerging solutions to address payment authentication requirements; and

3. Develop guidelines for use of passkeys and/or proposed FIDO solutions along with existing payment technologies such as EMV® 3-D Secure or EMV® SRC.

The PWG will also work on associated projects relating to the use of FIDO solutions for payments including: collecting and publishing deployment case studies, documenting challenges and potential solutions to issues; and working with FIDO Alliance liaison partners to drive education and adoption.

Join the Payments Working Group

Organizations interested in taking part in the PWG and driving the adoption of passkeys for payments can inquire today. Participation in the PWG is open to all Board, Sponsor, and Government level members of the FIDO Alliance. Non-member organizations interested in participating should contact the FIDO Alliance to become a member; learn more by visiting https://fidoalliance.org/members/become-a-member/.


Digital Identity NZ

Moving Forward Together

Kia ora,

Welcome to the April edition of the DINZ newsletter. As we navigate this period of transition, our commitment to advancing digital trust in Aotearoa remains unwavering. This month brings exciting updates on upcoming events, new members, and important industry developments.

Last Chance: DISTF Survey Closing Soon!

Don’t miss your opportunity to help shape the future of Digital Identity in New Zealand. Our Digital Identity Services Trust Framework (DISTF) survey closes next Monday, 5 May. Your insights will guide the DINZ DISTF Working Group’s priorities and advocacy efforts. This survey is open to both DINZ members and non-members. Your input is important as we work to maximise the benefits of this framework for all New Zealanders.

Take five minutes to complete the survey.


Digital Trust Hui Taumata Update

Planning for our landmark event is progressing well, with speakers now confirmed including Ministers Judith Collins and Scott Simpson, along with Deputy Privacy Commissioner Liz MacPherson. Additionally, we are well advanced with two terrific international keynote speakers thanks to DINZ member sponsors. We’re pleased to report that sponsorship commitments are tracking positively, reflecting the growing importance of digital trust in our national conversation.

Mark your calendars (12 August 2025, Te Papa, Wellington) for this premier industry gathering that will bring together leaders and innovators in the digital trust space. Register your interest here.

Member News

Kudos to MyMahi’s Stefan Charsley for his prestigious Kim Cameron award, which enabled him to attend IIW 40 earlier this month, wonderfully reported on by co-founder Phil Windley here. As a regular attendee of DINZ’s Coffee Chats, he will, we hope, share the knowledge and perspectives he gained. While on the subject of awards, it was great to see MinterEllisonRuddWatts honoured in Best Lawyers.

RNZ reports that the Department of Internal Affairs has issued a tender for a “new genuine face capture solution”, and notably its Trust Framework Authority has issued an RFI for digital ID services accreditation infrastructure, writes Biometric Update.

Our DINZ community continues to grow! We’re delighted to welcome Cybercure, LuminPDF and, just in, OriginID and BeyondTrust.

See all organisation members here.

Policy and Submissions

The DISTF Working Group coordinated our response to PaymentsNZ’s Next Generation Payments consultation, highlighting the critical role of digital identity in future payment ecosystems. The working group also submitted feedback on the DISTF Rules revision last week, continuing our advocacy for pragmatic and effective trust frameworks.

DINZ’s reputation for thoughtful, high-quality submissions continues to grow, and the two above are no exception. View our submissions here.

International Engagement

We were fortunate to welcome back Zygma’s Richard Wilsher during his brief visit to New Zealand this month – two years since he delivered this highly thoughtful webinar. Richard met with both the DISTF Working Group and the Trust Framework Authority, sharing his valuable international perspectives and expertise. 

Ministerial Engagement

As part of a FinTechNZ delegation that included DINZ member Middleware, our outgoing Executive Director Colin Wallis met with Minister of Commerce and Consumer Affairs Scott Simpson. Smooth, accessible digital identification is a crucial enabler for fintech innovation and the upcoming Customer and Product Data Act.

International News

International ID Day (9 April) saw attendance from several New Zealand organisations. If you missed it, catch up here.

UK Digital Identity Developments: The saga currently playing out in the UK may prove the adage once again that ‘Trust Takes Years To Build, Seconds To Break And Forever To Repair’. First, industry stakeholders expressed concerns over the government’s digital wallet approach, with the Digital Identity and Attributes Trust Framework (DIATF) community advocating for alternative approaches. Then came claims of serious cyber security and data protection issues in the government’s One Login digital ID system, extended by The Telegraph with commentary from DINZ Coffee Chat attendee Mark King.

Executive Director Search Update

The DINZ Executive Council is currently interviewing the candidate shortlist with an announcement expected shortly. We appreciate your patience during this transition period. 

We value your continued support and engagement as we work together to advance digital trust in Aotearoa New Zealand.

Ngā mihi nui,

The team at Digital Identity NZ


The post Moving Forward Together appeared first on Digital Identity New Zealand.


Blockchain Commons

Hackernoon: Christopher Allen Interview

“Cypherpunks wouldn’t just critique the surveillance state—they’d also call out us technologists for enabling it. We were supposed to resist, not retrofit.”

Christopher Allen recently talked with Tereza Bízková in an interview that was published to the front page of Hackernoon. It was headlined “The Co-Writer of TLS Says We’ve Lost the Privacy Plot”. In it, Christopher talks about what privacy means to him, what he thinks about recent privacy efforts, how centralization has become a problem, and how all of this connects to work done by the cypherpunks in the ’90s.

Perhaps most importantly, Christopher answers the question: what would be non-negotiable for a new privacy-first system today? His answer unsurprisingly reflects our vision here at Blockchain Commons, built on data minimization, progressive trust, and limited scale.

Privacy is one of the fundamental Gordian Principles, but probably the one we talk about the least, as so much of our focus is on resilience (such as #SmartCustody) or on independence and openness (as reflected in our attention to interoperability). Read “The Co-Writer of TLS Says We’ve Lost the Privacy Plot” for much more on why privacy is important and what exactly it is!

The Path to Self-Sovereign Identity Links

Christopher Allen is a long-term advocate for self-sovereign identity. He popularized the term and laid out an initial set of principles in his foundational article, “The Path to Self-Sovereign Identity”, coauthored the DID spec for the W3C, and founded the Rebooting the Web of Trust workshops that advanced the technology for a decade. Following are articles by him and interviews with him on the topic.

The Co-Writer of TLS Says We’ve Lost the Privacy Plot (4/24/25): An interview with Tereza Bízková for Hackernoon on how things have gone wrong.

SSI Orbit Podcast: Christopher Allen Interview (1/31/25): Following up the “Has Our SSI Ecosystem Become Morally Bankrupt?” article, a discussion of the problems with the current SSI ecosystem.

Echoes from History (11/15/23): How identity went horribly wrong during WWII and the warnings that history offers for the future of self-sovereign identity.

The Origins of Self-Sovereign Identity (8/9/23): Living Systems Theory, Ostrom’s Principles, and other foundational ideas that led to SSI, and what they say about its intent.

Private Key Disclosure (8/16/22): Digital identities are secured by private keys, but are private keys secure from the government?

Principal Authority: A New Perspective on Self-Sovereign Identity (9/15/21): A new lens for looking at Self-Sovereign Identity, based on successes in the Wyoming legislature.

Self-Sovereign Identity: Five Years On (4/26/21): Five years after “The Path to Self-Sovereign Identity”, the successes and challenges of SSI to date.

The Path to Self-Sovereign Identity (4/26/16): The initial article on SSI, including the ten principles for Self-Sovereign Identity.

Friday, 25. April 2025

FIDO Alliance

Case Study: Microsoft

Describe your service/platform/product and how it’s using FIDO authentication.

Microsoft Account (MSA) powers consumer-facing experiences across services like Xbox, Microsoft 365, Copilot, and more. In 2023, Microsoft began rolling out passkey support across these services, allowing users to sign in with a face, fingerprint, or device PIN instead of a password. By integrating FIDO credentials and removing the need for passwords, we made sign-in easier, faster, and significantly more secure for over a billion users accessing their Microsoft accounts.

What were the challenges you were trying to overcome?

We set out to solve three major challenges:

Security: Passwords are inherently insecure and highly vulnerable to phishing and brute force attacks. In 2024, we observed more than 7,000 password attacks per second.

User experience: Passwords are frustrating—users forget them, reuse them, or mistype them. We wanted a sign-in experience that users could get right the first time, every time.

Adoption at scale: We needed a solution that could work across devices and platforms while meeting high usability expectations for a global user base.

Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

FIDO credentials offer the ideal combination of security, usability, and interoperability. They are resistant to phishing and credential theft, and they eliminate the need for shared secrets like passwords. FIDO credentials also enable seamless cross-device and cross-platform experiences—critical for consumer use cases. In testing, we found that passkeys delivered both improved security and a dramatically better user experience. 

Describe your roll out of FIDO authentication.

Microsoft took a phased approach. We started by enabling passkeys for MSA sign-ins across consumer services like Xbox and Copilot. From there, we made UX changes to prioritize passwordless options. New Microsoft Accounts are now passwordless by default, and existing users are guided to enroll a passkey during or after sign-in. Throughout this process, we have worked closely with platform partners like Apple and Google, and continued our long-standing collaboration with the FIDO Alliance to ensure our approach aligns with industry standards. For a more detailed look at our approach, refer to Convincing a billion users to love passkeys: UX design insights from Microsoft to boost adoption and security.

What data points can you share that show the impact FIDO authentication has had?

The impact has been significant:

We now see over one million passkeys registered every day.
Users signing in with passkeys are three times more successful (95% success rate vs. 30% for passwords).
Passkey sign-ins are eight times faster than traditional password + MFA flows.
Our passwordless-preferred UX has already reduced password use by over 20%.

These results confirm that FIDO authentication improves security, boosts user satisfaction, and reduces operational burdens like password resets and support calls.

Read more in the Microsoft blog.


Case Study: Nikkei 

Describe your service/platform/product and how it’s using FIDO authentication. Nikkei Inc. and the Nikkei Group pursues its mission “to be the most trusted, independent provider of quality journalism to a […]

Describe your service/platform/product and how it’s using FIDO authentication.

Nikkei Inc. and the Nikkei Group pursue their mission “to be the most trusted, independent provider of quality journalism to a global community, helping our customers make better decisions.” We offer various media services, including the Nikkei, which serves as the cornerstone of our role as a news organization. “Nikkei ID” is the integrated ID platform supporting the Nikkei Group’s digital services, including our core service, the Nikkei Online Edition.


Nikkei ID, which offers a wide range of services, has long faced the challenge of balancing security and usability. While we have implemented measures such as improving the login experience with OpenID Connect and introducing two-factor authentication and CAPTCHA (*1) to reduce the risk of unauthorized access, addressing security risks associated with password leaks and reuse, as well as countering increasingly sophisticated attacks, has been difficult.

(*1) A security authentication method to verify that a user is human.

In this context, as FIDO authentication has evolved and the barrier to introducing passkeys has lowered, Nikkei ID has moved forward with evaluation and implementation with high expectations. Currently, we are expanding functionality to support not only web services but also mobile apps, and we aim to promote passkey adoption by raising user awareness through internal and external blog posts, presentations, and guidance at the Nikkei ID Lounge Help Center.

What were the challenges you were trying to overcome?

The primary goal is to balance security and user experience. Many Nikkei ID users are not accustomed to digital services, so simply enhancing security is not enough. For example, while the introduction of CAPTCHA can prevent brute-force password attacks, it can also become a barrier for users who cannot pass the Turing test (*2), leading to increased support inquiries and added burden on customer service.

(*2) A test to determine whether something is ‘human-like’.

FIDO authentication (passkeys), by contrast, achieves both high security and a good user experience through standardized integration with operating systems and platforms. This allows us to replace security measures that reduce the risks of password authentication but hurt UX with passkeys.

Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

The following two options were considered as alternatives to FIDO authentication (passkeys):

Mandatory two-factor authentication, such as TOTP or email verification
Social login using other ID platforms

As a result of comparing these options, we believe FIDO authentication (passkeys) offers the following advantages:

It allows for a gradual transition by adding authentication on top of existing password authentication.
It enables higher-UX authentication methods such as biometric authentication.
It fundamentally resolves the risks associated with passwords.

When it came to actual implementation, the aspect of “additional authentication” was particularly significant. In other words, it allowed for implementation in a loosely coupled, highly cohesive manner without disrupting the existing ID model. The WebAuthn ecosystem provides simple interfaces, libraries, and APIs for both the backend and the frontend on each platform, making secure implementation easy. Additionally, since existing authentication methods could be retained, not significantly increasing the support workload was another substantial advantage.
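To make that “additional authentication” layering concrete, here is a minimal TypeScript sketch of the browser side of a WebAuthn registration ceremony, added on top of an existing password session. The endpoint paths, relying-party name, and response shapes are hypothetical, not Nikkei’s actual implementation.

```typescript
// Browser-side passkey registration (WebAuthn registration ceremony).
// Hypothetical endpoints and relying-party values.
async function registerPasskey(userId: string, userName: string): Promise<void> {
  const { challenge } = await (await fetch("/webauthn/register/options")).json();

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      rp: { name: "Example ID" },                          // relying party
      user: {
        id: new TextEncoder().encode(userId),              // stable user handle
        name: userName,
        displayName: userName,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "required" },
    },
  })) as PublicKeyCredential;

  // The backend (e.g., one built on a library like WebAuthn4J) validates
  // the attestation and stores the credential alongside the account,
  // leaving existing password authentication untouched.
  await fetch("/webauthn/register/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: credential.id }),
  });
}
```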

Describe your roll out of FIDO authentication.

We implemented our own solution using the open-source backend library WebAuthn4J for FIDO authentication. We chose WebAuthn4J not only for its clear data model but also because it passed the FIDO2 Test Tools provided by the FIDO Alliance. For the frontend, we developed our own implementation that directly interacts with the WebAuthn API. Additionally, we created a test library to emulate FIDO authentication, enabling 24-hour automated testing as a comprehensive test of these implementations.

The rollout of FIDO authentication (passkeys) was carried out in the following steps:

Internal beta testing to gather feedback and monitor usage
White-box and black-box testing by external security companies
Public release to all users

What data points can you share that show the impact FIDO authentication has had?

Since passkeys were only released this February, we cannot provide detailed numbers yet, but thousands of users are already using them. Additionally, the support desk has received almost no inquiries about how to use passkeys, so we believe they are being used smoothly.

Resources

The test library that emulates FIDO authentication, mentioned in the implementation section, is publicly available as Nikkei open-source software: https://github.com/Nikkei/nid-webauthn-emulator

For authorization after FIDO authentication (passkeys) completes, we use Authlete, an OpenID Connect platform. In the following presentation we expressed our enthusiasm for introducing FIDO authentication (passkeys); at the time of this 2023 presentation, passkeys were still under consideration: https://www.authlete.com/ja/resources/videos/20231212/02/

Technical blog article during the consideration stage of implementation: https://hack.nikkei.com/blog/advent20241221/


Case Study: VicRoads

VicRoads achieves up to 80% industry-leading passkey activation rate for nearly 5 million users with Corbado Background: VicRoads VicRoads is the vehicle registration and driver licensing authority in Victoria, Australia. […]
VicRoads achieves up to 80% industry-leading passkey activation rate for nearly 5 million users with Corbado

Background: VicRoads

VicRoads is the vehicle registration and driver licensing authority in Victoria, Australia. It registers over six million vehicles annually and licenses more than five million drivers. 

Operating as a joint venture between the Victorian State Government, Aware Super, Australian Retirement Trust, and Macquarie Asset Management, VicRoads is a critical provider of public services in the state.

Challenge: seamless and cost-effective authentication for government services 

VicRoads aims to become Australia’s most trusted digital government service provider by delivering secure, frictionless services to millions of people. 

Given the importance of the data that VicRoads holds on behalf of its customers, security has always been a primary consideration. 

In the past, to support protection of customer data, VicRoads mandated multi-factor authentication (MFA) for all user accounts via SMS one-time passwords (OTPs) and authenticator apps. 

Passkeys leverage biometrics such as fingerprint or facial recognition, a PIN, or a swipe pattern in the sign-in process. Unlike traditional MFA, passkeys require both the device storing the private key and local authentication, making them both phishing-resistant and cost-effective.
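As a minimal sketch, this is how a relying party might check that a device can perform that local verification before offering passkey enrollment; the calls are standard WebAuthn APIs, but the surrounding rollout logic is an assumption, not VicRoads’ actual code.

```typescript
// Feature detection before offering passkey enrollment.
async function canOfferPasskeys(): Promise<boolean> {
  if (!window.PublicKeyCredential) return false;

  // True when the device has a user-verifying platform authenticator
  // (biometric sensor or device PIN) available for local verification.
  return PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();
}
```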

Solution: Corbado provides a no-risk, passkey-first solution with minimal integration effort

VicRoads worked with passkey vendor Corbado, prioritizing a proven approach rather than building a solution from scratch.

Corbado’s deep understanding of both customer experience and the latest authentication technology gave VicRoads confidence that customers would find using passkeys easy.  

Corbado also provided in-depth technical guidance on passkey-specific challenges, including browser compatibility, recovery flows and user experience optimizations – further solidifying VicRoads’ confidence.

“We selected Corbado because it could integrate passkey functionality into our existing infrastructure without disruption to our customers and operations”, said Crispin Blackall, Chief Technology Officer, VicRoads.

Implementation: pre-built, passkey-optimized components & SDKs enable quick integration

Corbado Connect seamlessly integrated with VicRoads’ existing infrastructure and CIAM, which is deeply embedded within the organization’s enterprise stack. This passkey enablement was achieved without requiring a migration of user data or authentication methods, ensuring a smooth and efficient transition for millions of users.

By layering passkey functionality on top of VicRoads’ current authentication system, Corbado enabled a frictionless deployment while preserving all existing user credentials. This approach eliminated the disruption and risks often associated with introducing new technology.

To ensure a smooth transition, VicRoads implemented passkeys in a phased rollout, beginning with personal customers. This gradual deployment, supported by Corbado Connect’s rollout controls, enabled VicRoads to monitor performance, address potential issues and optimize the user experience before seamlessly extending passkey authentication to partner and business customers.

Results: customers love passkeys, with up to 80% passkey activation rate in the first weeks

Within the first weeks of deployment, passkey adoption significantly exceeded VicRoads’ expectations. Users embraced the phishing-resistant authentication method, benefiting from a frictionless login experience optimized for speed and security.

The exceptionally high passkey activation rate – peaking at 80% on mobile devices and over 50% across all platforms – led to a 30% passkey login rate within the first seven weeks. Uptake continues to rise steadily, translating into measurable operational benefits, including:

Reduced authentication-related support tickets
Lower SMS OTP costs
Improved user experience and security

So far, VicRoads has successfully rolled out passkeys on its web portal. The next step is to integrate passkeys into its native apps – myVicRoads and myLearners – allowing users to leverage their existing passkeys without additional setup. Ultimately, once passkeys are fully implemented across all digital platforms, VicRoads aims to eliminate passwords entirely, maximizing security and fully embracing a passwordless future.

“Passkeys are easy to use, without compromising on security. We’re excited to give our customers a simpler, more secure way to handle their registration and licensing services,” said Crispin Blackall, Chief Technology Officer, VicRoads.

Opportunity: setting a new standard for government authentication 

With one of the largest public sector passkey deployments globally, VicRoads has established itself as a digital leader in authentication modernization for government applications. 

Achieving high adoption rates without disruption, VicRoads has proven that large-scale organisations can enhance security and improve user experience simultaneously. This success positions VicRoads as a benchmark for other government agencies looking to modernize their authentication strategies.

“Passkeys represent a paradigm shift in how we authenticate users to digital identity services,” said Andrew Shikiar, Chief Executive Officer of the FIDO Alliance. “VicRoads’ adoption of passkeys showcases how government agencies can leverage this industry-wide innovation to protect people’s data while simplifying access to critical services. This is a significant step towards a more secure and efficient digital future for Victoria and beyond.”

Next Steps: developing next generation authentication

VicRoads’ ongoing partnership with Corbado ensures it remains at the forefront of authentication innovation while maintaining a seamless user experience for its expanding digital service base. A key advantage of Corbado’s managed passkey service is its built-in adoption-enhancing optimisations, ensuring continuous improvements and conformity with future WebAuthn updates.

With this initiative, VicRoads has paved the way for broader adoption of passkeys in the government and public sector, proving that secure, frictionless authentication at scale is achievable.

About Corbado

Corbado is a leading provider of passkey solutions, enabling enterprises and government agencies to deploy passkey authentication seamlessly, without user migration. Corbado’s focus is on maximizing adoption in large-scale deployments. As a FIDO Alliance member, Corbado’s solutions ensure high adoption rates, enhanced security, and a frictionless user experience. Visit https://www.corbado.com/.


Case Study: Zoho Corporation

Describe your service/platform/product and how it’s using FIDO authentication. With over 55 apps across nearly every major business category, Zoho Corporation is one of the world’s most prolific technology companies. […]

Describe your service/platform/product and how it’s using FIDO authentication.

With over 55 apps across nearly every major business category, Zoho Corporation is one of the world’s most prolific technology companies. Headquartered in Chennai, India, Zoho is privately held and profitable, employing more than 18,000 people worldwide. Zoho is committed to user privacy and does not rely on an ad-revenue business model. The company owns and operates its data centres, providing full oversight of customer data privacy and security. Over 100 million users globally—across hundreds of thousands of companies—trust Zoho to run their businesses, including Zoho itself. For more information, visit zoho.com.

What were the challenges you were trying to overcome? 

Providing secure and easy login in place of traditional authentication methods.

Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

Improved security, supporting documentation, and community.

Describe your roll out of FIDO authentication. 

We rolled it out ourselves via our IAM team.

We first rolled out passkey authentication for zoho.com (100M+ users).

We are rolling out passkey management in our password manager Zoho Vault in May 2025.

What data points can you share that show the impact FIDO authentication has had?  

30% month-over-month increase in passkey adoption

10% drop in password reset queries 

Resources

https://help.zoho.com/portal/en/kb/accounts/sign-in-za/articles/passkey
https://www.zoho.com/vault/features/passkeys.html

Case Study: Samsung Electronics

Describe your service/platform/product and how it’s using FIDO authentication. Samsung Electronics’ Galaxy smartphones support fast and convenient logins through biometric authentication and FIDO protocols. What were the challenges you were […]

Describe your service/platform/product and how it’s using FIDO authentication.

Samsung Electronics’ Galaxy smartphones support fast and convenient logins through biometric authentication and FIDO protocols.

What were the challenges you were trying to overcome?

FIDO-based passkeys are transforming the way users access websites and apps by eliminating the need for traditional usernames and passwords. Instead of being stored on a server where they could be exposed, passkeys are securely stored on the Galaxy device, enabling quick and secure sign-ins using biometric authentication.

Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

FIDO enables secure authentication without transmitting users’ biometric data outside the device. Its ease of use, speed, compatibility across services, and status as an industry standard made it a compelling choice for Samsung Electronics.

Describe your roll out of FIDO authentication.

We have integrated FIDO authentication directly into our devices, enabling users to access it out-of-the-box. We continue to expand FIDO support across more Galaxy models and software updates.

Resources 

https://www.samsung.com/levant/support/apps-services/how-to-create-and-use-a-passkey/
https://www.samsung.com/ca/support/apps-services/how-to-create-and-use-a-passkey/
https://news.samsung.com/in/the-knox-journals-the-passwordless-future-of-security

Case Study: ABANCA

Describe your service/platform/product and how it’s using FIDO authentication. Our mobile banking app is our bank’s largest branch, serving over 1,200,000 customers each month. These customers require the best protection […]

Describe your service/platform/product and how it’s using FIDO authentication.

Our mobile banking app is our bank’s largest branch, serving over 1,200,000 customers each month. These customers require the best protection against identity theft attacks, and we provide the most robust and innovative solutions, always prioritizing the best user experience. ABANCA Key is a new identity verification service based on FIDO standards. It was launched after years of research by leading players to prevent identity theft attacks. Using passkeys, ABANCA Key provides the highest level of protection. Passkeys are impossible to guess or reuse, so they protect our customers’ private information from attackers.

What were the challenges you were trying to overcome?

On one hand, there is the security challenge: the rise of phishing through calls and SMS messages in Spain has become a plague and a real problem for administrations, mobile operators, and financial institutions. On the other hand, there was the need to maintain the best user experience with the least friction. Passkeys give us a framework for interoperability and standardization, which eases implementation and deployment. Above all, and for the first time in the security industry, they provide a framework of homogeneity for achieving a frictionless user experience.

Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

We chose FIDO for many reasons: for its future strategy, as it allows us to pursue multiple paths, including MFA and passwordless; for trust, as the FIDO Alliance includes leading players in security, operating systems, infrastructure, and the mobile ecosystem; and for the standardization and homogenization it gives us, which reduces implementation, deployment, and rollout times.

Describe your roll out of FIDO authentication.

To deliver the best user experience, we’re committed to having the deepest possible understanding of the technology. This enables us to effectively identify and resolve issues, and to better understand user behavior. We became our own partner by developing our own platform based on the FIDO standard and certifying it as if it were a provider.

We rolled out the deployment in phases. In under five months, we had both the server and the front-end (iOS and Android) development ready, and we began the rollout in an initial phase with our employees, and subsequently with end customers in batches. In just seven months, we were in a general rollout to all customers.

What data points can you share that show the impact FIDO authentication has had?

More than 42% of our customers are already using ABANCA Key.
More than 11,000,000 high-risk transactions have been protected with ABANCA Key without technical or service incidents.
The customer rollout ran without technical or service incidents and, most importantly, the customer journey UX for enrolling in and using ABANCA Key achieved a Customer Effort Score (CES) of 4.7.

Please provide any links or resources that you feel would be useful in developing this case study.

https://comunicacion.abanca.com/es/noticias/abanca-primer-banco-espanol-en-implantar-la-tecnologia-de-llaves-de-acceso-en-la-banca-movil-para-reforzar-la-seguridad-de-sus-clientes/
https://www.abanca.com/es/banca-a-distancia/llave-abanca/

ZD NET: Why the road from passwords to passkeys is long, bumpy, and worth it – probably

Out of the blue, I received a text from my father asking me, “What’s the difference between a password and a passkey?”  Somewhere, in his daily online journey, he was prompted […]


Out of the blue, I received a text from my father asking me, “What’s the difference between a password and a passkey?” 

Somewhere, in his daily online journey, he was prompted by a website or application — a “relying party” in authentication lingo — to create a passkey. But the benefit wasn’t clear to him. Nor did there seem to be any urgency. He figured I’d know what passkeys are and what to do the next time he gets a nudge to set one up. I told him, “Let’s talk before you do anything.”


Forbes: Microsoft’s Password Deletion For 1 Billion Users—Do This Now

Your phone, computer and tablet is now at risk, as the nightmare of AI-powered attacks comes true. There are now multiple warnings into the use of mainstream AI platforms to […]

Your phone, computer, and tablet are now at risk, as the nightmare of AI-powered attacks comes true. There are now multiple warnings about the use of mainstream AI platforms to design, develop, and even execute attacks that are almost impossible to detect.

Adding to recent reports from Symantec and Cofense, Guardio also now warns that “with the rise of Generative AI, even total beginners can now launch sophisticated phishing scams — no coding skills needed. Just a few prompts and a few minutes.”

And Microsoft has just told users the same. “AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate… AI-powered fraud attacks are happening globally.”


MyData

Rethinking Consent: AesirX’s First-Party Framework for a Respectful Data Economy

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. By: Ronni K. Gothard […]

FIDO Alliance

Engadget: How to use the Apple Passwords app

Apple’s new Passwords app (introduced with iOS 18, iPadOS 18, and macOS Sequoia) is a big leap forward in making password management simple and user-friendly for Apple users, even if it’s not […]

Apple’s new Passwords app (introduced with iOS 18, iPadOS 18, and macOS Sequoia) is a big leap forward in making password management simple and user-friendly for Apple users, even if it’s not as robust as other password managers. If you’ve ever fumbled through Safari settings to find a saved login or toggled through iCloud Keychain menus to edit credentials, the Passwords app is for you. It’s designed to give you a dedicated home for all your saved login credentials, passkeys, Wi-Fi passwords and two-factor authentication codes, all in one secure, easy-to-navigate interface.

Thursday, 24. April 2025

FIDO Alliance

ABANCA News: ABANCA achieves international FIDO certification for its ABANCA Key service

ABANCA has achieved international FIDO certification for the Llave ABANCA service, the digital identity verification technology solution developed by the bank that allows customers to validate mobile banking transactions securely […]

ABANCA has achieved international FIDO certification for the Llave ABANCA service, the digital identity verification technology solution developed by the bank that allows customers to validate mobile banking transactions securely and quickly by unlocking their phone.

The FIDO (Fast IDentity Online) certification has been granted by the FIDO Alliance, the leading international association promoting digital authentication standards that are more robust and simpler than passwords or one-time codes. This consortium is made up of the world’s leading technology companies—Google, Apple, Microsoft, Amazon, and Visa, among others—who have joined forces to promote more robust and user-friendly digital identity methods. With the FIDO certification of Llave ABANCA, this global alliance accredits that the solution implemented by the bank meets the highest standards of online identity verification.

Wednesday, 23. April 2025

FIDO Alliance

IT Brew: Lessons learned by an RSA IT pro implementing passwordless

Going passwordless is difficult for a lot of companies, even the ones with “security” in the name. Jim Taylor, chief product and technology officer (and resident IT professional) at RSA […]

Going passwordless is difficult for a lot of companies, even the ones with “security” in the name.

Jim Taylor, chief product and technology officer (and resident IT professional) at RSA Security, spoke with IT Brew about lessons learned as he led the deployment of passkeys, biometrics, and other non-password implementations across the organization. Two major keys to passwordless success, he said, included having lots of options and lots of patience.

“There’s no big switch. I wish there was a big red button that you could just press and go, ‘Ta-da!’ with passwordless, right? It doesn’t work like that,” Taylor told IT Brew.


Biometric Update: Inverid joins FIDO Alliance to bring NFC expertise to DocAuth

Inverid has joined the FIDO Alliance. A release from the Dutch identity verification firm says it will bring expertise in document authenticity verification to FIDO users of DocAuth document authenticity specifications.

Inverid has joined the FIDO Alliance. A release from the Dutch identity verification firm says it will bring expertise in document authenticity verification to FIDO users of DocAuth document authenticity specifications.


The Engine Room

From memory to action: Radio as a tool for a healthier information ecosystem

We’re excited to welcome Oír Más as one of our new Matchbox partners. Oír Más is a radio collective dedicated to amplifying the voices of diverse communities through radio experimentation. The post From memory to action: Radio as a tool for a healthier information ecosystem appeared first on The Engine Room.

We’re excited to welcome Oír Más as one of our new Matchbox partners. Oír Más is a radio collective dedicated to amplifying the voices of diverse communities through radio experimentation.

The post From memory to action: Radio as a tool for a healthier information ecosystem appeared first on The Engine Room.


IDunion

German EUDI-Wallet Needs a Cooperative as Ecosystem Orchestrator

Ecosystem Orchestrator for the German EUDI-Wallet The introduction of the EUDI-Wallet in Germany, based on the amended eIDAS Regulation, is a central challenge on the way to providing citizens with a secure digital identity solution by 2026. The goal is to create a comprehensive ecosystem that integrates public institutions, private-sector actors, and regulatory authorities…
Ecosystem Orchestrator for the German EUDI-Wallet

The introduction of the EUDI-Wallet in Germany, based on the amended eIDAS Regulation, is a central challenge on the way to providing citizens with a secure digital identity solution by 2026. The goal is to create a comprehensive ecosystem that integrates public institutions, private-sector actors, and regulatory authorities. This ecosystem should not only establish technical standards and processes but also foster citizen acceptance and ensure the legal framework.

The authors of this whitepaper propose a public-private partnership in the form of a cooperative (Genossenschaft) as a neutral ecosystem orchestrator. This cooperative should:

Provide democratic governance, with transparent control and influence for all members.
Implement a cost-covering financing model based on membership fees and special contributions.
Take on central tasks such as certifying wallets, registering verification bodies, and facilitating communication between actors.

Challenges for the New Federal Government

Regulation [(EU) 2024/1183] obliges the federal government to provide an EUDI-Wallet for all citizens by the end of 2026, in which identity document data can be stored securely in electronic form. Full implementation of the EUDI-Wallet ecosystem must follow by 2027. The Federal Ministry of the Interior has been leading the design and implementation of the German wallet solution since May 2023. To foster acceptance among citizens, it is necessary not only to provide the technical infrastructure but also to build a comprehensive ecosystem. In this ecosystem, private-sector relying parties, certified trust services, public institutions at the federal and state level, and regulatory and certification bodies interact with one another. Building such an ecosystem is a highly complex task. It requires establishing standards and processes that all participating private companies and public institutions can share and implement. The interests of citizens must remain front and center. In addition, all legal requirements must be met, misuse of personal data must be prevented, and an attractive investment environment must be created for private-sector companies.

Building the EUDI-Wallet Ecosystem as a Public-Private Partnership with Neutral Governance

The authors of this whitepaper advocate a public-private partnership as ecosystem orchestrator, organized as a cooperative. The cooperative’s purpose is to promote the interests of all cooperative members, that is, all actors in the EUDI-Wallet ecosystem, equally. The cooperative requires democratic governance that gives members influence, control, and transparency. As long as the cooperative itself makes no investments, its statutes should include a not-for-profit clause, so that only the costs arising from the members’ needs have to be covered.

The cooperative should be governed by the following bodies.

General assembly: The highest decision-making body of the cooperative. Each member holds one vote. The combined votes of non-European and investing members may not exceed 25%.
Executive board: Runs day-to-day operations and is responsible for implementing governance and strategy.
Supervisory board: The body that oversees the executive board.
Ecosystem coach: A staff function acting as mediator between the actors, disclosing and managing conflicts of interest.

Public institutions should found the cooperative, define a set of rules in the form of statutes, and enable private-sector actors to join. All actors in the roles defined in the eIDAS amendment, such as PID providers, Pub-EAA providers, (qualified) trust service providers, wallet providers, and trust list providers, should be included.

Financing concept: Operating costs are covered by membership fees. If funds are temporarily needed for specific purposes, they can be financed through special contributions from members.

Tasks: Promote the members’ interests, facilitate communication between members, certify wallets, register verification bodies, clarify legal questions, consolidate public communication, make EUDI-Wallet adoption measurable, provide interfaces, etc.

Opt-out: Membership can be terminated by ordinary notice. Findings from a risk analysis of opt-out scenarios should be taken into account when drafting the statutes.

Conclusion

Building the ecosystem for the German EUDI-Wallet requires a neutral ecosystem orchestrator that constructs the ecosystem together with public and private-sector organizations. As outlined here, this can be realized through a cooperative in which all actors of the ecosystem cooperate in a structured manner and in compliance with antitrust law.

About the Authors

IDunion SCE mbH is the successor organization of a consortium funded by the BMWK under the “Showcase Secure Digital Identities” (Schaufenster sichere digitale Identitäten) program. The European cooperative was commissioned by its member companies to operate a digital trust anchor.


From Concept to MVP: Successfully Delivering the Digital Product Passport Demo

Over the past three years, a dedicated research team from IDunion has been working intensively on the Digital Product Passport (DPP).   In the last two years, our focus shifted from theory ( a joint publication with key partners [→ read the paper here]) to hands-on implementation resulting in a Demo –  DPP solution that […]

Over the past three years, a dedicated research team from IDunion has been working intensively on the Digital Product Passport (DPP).  

In the last two years, our focus shifted from theory (a joint publication with key partners [→ read the paper here]) to hands-on implementation, resulting in a demo DPP solution that is:

Data Carrier Agnostic: The product link can be embedded into any data carrier listed in Landscape of Digital Product Passport Standards | StandICT.eu 2026.
Product Agnostic: Our DPP works with any type of product, whether batch-based, lot-based, or individual instances.
Automatic Identification of the Requester: The solution aligns with #eIDAS2.0 by supporting decentralized identification of individuals and organizations.
Technology Stack Independent: The DPP can be accessed without downloading an additional app, which lowers the threshold for adoption and enhances usability.
Lightweight & Semantically Rich: Information is delivered through Verifiable Credentials (VCs) and semantic data schemas, ensuring seamless interoperability. Additionally, we use W3C Decentralized Identifiers (DIDs) to guarantee unique, globally resolvable product IDs.
Cryptographically Verifiable: All data within the DPP is cryptographically signed, enabling validation of authenticity and trust without central intermediaries (see the sketch after this list).
Trust Infrastructure Support: Establishing trust over the data and integration with digital identity trust anchors is possible, e.g., by using the eIDAS 2.0 mechanisms and the EU trust infrastructure proposed for natural and legal entities.
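As a minimal sketch of the “cryptographically verifiable” property, the TypeScript fragment below signs a passport payload with Ed25519 and verifies it with the issuer’s public key. The payload, identifier, and key handling are illustrative assumptions; a real DPP would use the W3C Verifiable Credentials data model, with the public key resolved from the issuer’s DID document.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer signs the passport payload; any verifier holding the issuer's
// public key can check authenticity without a central intermediary.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const passport = Buffer.from(
  JSON.stringify({
    id: "did:example:battery-cell-123", // hypothetical product DID
    capacityWh: 48,
    manufacturer: "Example Cells GmbH",
  })
);

const signature = sign(null, passport, privateKey);             // issuer side
const authentic = verify(null, passport, publicKey, signature); // verifier side
console.log("passport authentic:", authentic); // true
```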

Demonstration: Digital Product Passport System for Battery

Our final demo phase featured a live demonstration of a DPP tied to a battery cell used in an electric scooter.

The goal was to simulate how a real-world DPP can accompany a product through its entire lifecycle — from raw material extraction to end-of-life recycling.

We explored the perspectives and requirements of key stakeholders, including miners, battery pack manufacturers, economic operators, consumers, recyclers, and government and public-interest organizations.

Through interactive demonstrations, we answered key questions in real time, illustrating how a decentralized, interoperable, and trustworthy DPP can benefit every actor along the value chain.

For more detailed insights and demo material, check out the links below:

German: https://idunion.org/piloten/sichere-digitale-identitaeten-fuer-produkte/
English: https://idunion.org/piloten/en/sichere-digitale-identitaeten-fuer-produkte/

A Huge Thank You to Our Partners

This project would not have been possible without the creativity, technical expertise, and dedication of our fantastic partners.

Dr. Andreas Füßler (GS1 Germany), Florin Coptil (Robert Bosch GmbH), Werner Folkendt (Robert Bosch GmbH), Dominic Hurni (SBB), Cornelia Schalch (SBB), Johannes Ebert (Spherity GmbH), Sebastian Schmittner (European EPC Competence Center GmbH), Christian Fries (European EPC Competence Center GmbH), Paulina Drott (GS1 Germany), Dr. Susanne Guth-Orlowski, Ralph Troeger (GS1 Germany), Roman Winter (GS1 Germany)

Thank you for your ongoing commitment to shaping the future of product transparency and digital trust.

What’s Next?

With all project goals achieved and deliverables submitted, the DPP demo project is now officially closed.

We’re incredibly proud of what we’ve accomplished and are eager to apply these insights to future initiatives that further the digitization of sustainable and circular value chains.


Next Level Supply Chain Podcast with GS1

Solving Retail’s $1 Billion Inventory Problem with Dean Frew

How accurate is your inventory? For many retailers, answering that question is more difficult than it appears—especially when inventory counts are only conducted once or twice a year. In this episode, Dean Frew, President, RFID Solutions Division, SML IIS, sits down with hosts Reid Jackson and Liz Sertl to explore how RFID solves one of retail’s biggest challenges. Dean shares how real-time da

How accurate is your inventory?

For many retailers, answering that question is more difficult than it appears—especially when inventory counts are only conducted once or twice a year.

In this episode, Dean Frew, President, RFID Solutions Division, SML IIS, sits down with hosts Reid Jackson and Liz Sertl to explore how RFID solves one of retail’s biggest challenges. Dean shares how real-time data at the item level drives results, from reducing return fraud to supporting buy-online-pickup-in-store.

He also discusses what it takes to make RFID work at scale, how adoption has changed post-COVID, and why distribution centers are the next frontier.


In this episode, you’ll learn:

How RFID improves inventory accuracy

Where retailers are seeing the most significant ROI

New use cases beyond the sales floor


Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:20) Dean’s background and RFID journey

(06:35) Improving inventory accuracy with RFID

(12:13) Reducing returns fraud with item-level data

(18:07) How BOPIS impacts inventory and sales

(23:01) Boosting inbound accuracy at distribution centers

(26:14) RFID in checkout and fitting room experiences


Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn


Connect with the guest:

Dean Frew on LinkedIn
Check out SML Group


Blockchain Commons

Musings of a Trust Architect: Interop, What Is It Good For?

In early 2025, Blockchain Commons architected and engineered a major project for the Zcash blockchain: the ZeWIF specification that allows all of its wallets to interoperate. Interoperability is something that I consider vitally important for a technological ecosystem, so I was thrilled that Blockchain Commons could improve interoperability for Zcash. Here’s a bit more on why, what Blockchain Commo

In early 2025, Blockchain Commons architected and engineered a major project for the Zcash blockchain: the ZeWIF specification that allows all of its wallets to interoperate.

Interoperability is something that I consider vitally important for a technological ecosystem, so I was thrilled that Blockchain Commons could improve interoperability for Zcash. Here’s a bit more on why, what Blockchain Commons did for Zcash, and what I’d like to do for other technological communities.

The Limits of Interop

Obviously, some level of interop is necessary for any ecosystem, or it just wouldn’t work. There was a white paper and there are BIPs for Bitcoin; without them, no one could agree on how the blockchain works. We similarly have specifications, RFCs, and standards for everything from email to graphics file formats.

Unfortunately, there are limits to interop. Standards tend to cover the things that are required for the broadest level of intercommunication within an ecosystem. But when companies begin to work on their own apps, their work often turns proprietary—sometimes aggressively so—and that limits a technological ecosystem’s ability to continue to grow.

As an example, email has standards such as RFC 5322 for how mail servers communicate with each other. When mailers begin storing their mail, things become somewhat more balkanized: you might use a classic mbox format or a single-message eml format or a proprietary format such as Microsoft Outlook’s msg. Finally, there’s no consistency at all for how mailers might algorithmically filter messages: it’s hard to control whether a message ends up in the “Important”, “Everything Else”, or even “Spam” category of Gmail, even if a mailer adopts newer standards such as DKIM, DMARC, and SPF.

Yet, all of these types of interactions are important. If a mail ends up in a Spam folder, that’s almost as bad as it not being delivered at all; if mail can’t be restored from a proprietary mailbox, it’s lost as well. That’s why interop is important.

The Power of Interop

I developed four core architectural fundamentals for Blockchain Commons that I call the Gordian Principles. Of them, Privacy is only peripherally related to the question of interoperability, but the other three highlight why interop is important for the health of any technological ecosystem.

Independence is a user-focused principle. It says that users should be free from external control. This is the heart of my ideas of self-sovereign identity, and it’s the heart of interoperability too. With the ability to export or exchange data in an interoperable way, users don’t get locked into a single platform. Instead, they can choose among a variety of options to find an application, service, or pricing structure that meets their precise needs.

Resilience is a data-focused principle. It says that data should be protected from loss. When data output and exchange formats are standardized for interoperability, they become widely used and widely understood. That ensures that they can be recovered far into the future. If you’ve ever tried to recover an old ClarisWorks or WordStar file, you’ve seen how much a non-interoperable file format can damage resilience. In comparison, .rtf files will never be lost and even .docx is pretty good—both because Microsoft Word has an enormous install base and because it’s an XML-based output format.

Openness is a creator-focused principle. It says that infrastructure should be open so that new creators can join the ecosystem. When communication is standardized, anyone can develop in accordance with the standards, including newcomers. But this isn’t just beneficial to creators; it’s also beneficial to users. Without new entrants, an ecosystem can become staid and stagnant. With new entrants, an ecosystem is constantly innovating and expanding. It creates an atmosphere of coopetition: members of the ecosystem cooperate to interoperate, but then compete to offer the best products within those standards.

In other words, interoperability is beneficial for the vast majority of members of the ecosystem: users get more options, more innovations, and freedom of choice; creators get the ability to fairly participate and compete; and data gets strong protections far into the future.

What We Did for Zcash

I was thrilled when members of the Zcash community started talking with me about their needs, because it showed a strong, ecosystem-wide understanding of the need for interoperability.

But there was also a strong inciting incident that led to the Zcash work: the older zcashd server was being deprecated and so there was a need to migrate digital-asset data from its wallet to others. This also reflected a longer-term issue. Digital assets had sometimes been lost during previous migrations due to differences or even bugs in different wallets.

Obviously, a one-off migrator could have been created, likely linking zcashd with its replacement wallet, zallet. But I approached things from an architectural point of view: I wanted to create an extensible Wallet Interchange Format (that’s the “eWIF” in “ZeWIF”) that would not just enable the zcashd migration, but also allow any migration from one wallet to another within the Zcash ecosystem. I wanted to take the immediate needs and political will and turn it into something that could benefit the community for years to come.

That was the ZeWIF proposal that we put forth. It called for one month of studying Zcash wallet data as it currently exists, then another two months of developing the spec and writing libraries to convert among different wallets using that spec. (That timeline turned out to be a bit ambitious, but we’re closing out the initial design with a fourth month of work.)

As the above diagram shows, the zewif Rust crate lies at the center of the ZeWIF system. It creates in-memory representations of data from a variety of inputs and can output that abstracted data in a number of forms. Obviously, it can accept input from ZeWIF files and it can output to ZeWIF files. However, that’s just part of the process. Individual developers can also choose to create front ends that import data from their wallets into zewif and back ends that export the data from zewif to their wallets.
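To make that architecture concrete, here is a minimal TypeScript sketch of the hub-and-spoke interchange pattern the paragraph describes. All type and function names are hypothetical illustrations, not the zewif crate’s actual Rust API.

```typescript
// Hub-and-spoke interchange: every wallet format converts to and from
// one common in-memory model, so N formats need N importers and N
// exporters instead of N*N pairwise converters.
interface WalletData {
  seeds: string[];
  keys: string[];
  metadata: Record<string, string>;
}

interface FrontEnd {
  // Wallet-specific importer: raw wallet bytes -> common model.
  importWallet(raw: Uint8Array): WalletData;
}

interface BackEnd {
  // Wallet-specific exporter: common model -> raw wallet bytes.
  exportWallet(data: WalletData): Uint8Array;
}

// Migrating from wallet A to wallet B goes through the common model.
function migrate(raw: Uint8Array, from: FrontEnd, to: BackEnd): Uint8Array {
  const common = from.importWallet(raw); // abstract, format-neutral form
  return to.exportWallet(common);        // re-serialize for the target
}
```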

As we release ZeWIF into the ecosystem, we should see advancements in accordance with the Gordian principles:

Independence: Users will be able to move their funds easily among Zcash wallets.
Resilience: Translation of keys and seeds should be more reliable, and if conversion is done using our best practices, there should be clear warnings if anything wasn’t converted.
Openness: New wallets can join the ecosystem. If they have innovative features, they can easily pick up new users if they support ZeWIF as an import format.

I’ve been thrilled to have strong support from wallet makers in the Zcash community. That type of buy-in is required to make interoperability work. Zingo Labs, the makers of the Zingo wallet, introduced us to the opportunity and have worked closely with us to fulfill our vision. Meanwhile, ECC has been our next testbed, since they’re using ZeWIF to manage conversions between zcashd and zallet. Other principals have been involved as well, and just as importantly we didn’t receive any pushback from anyone who refused to use the format. (Not everyone has adopted it yet, but we’re just finalizing the complete ZeWIF draft at this point.)

We were fortunate, because that’s not always the case.

What We Did for Bitcoin

This isn’t Blockchain Commons’ first rodeo. Creating interoperability has been Blockchain Commons’ goal since the start, and we’ve done most of our interop work to date with Bitcoin.

Our two biggest successes for Bitcoin have been Animated QRs and SSKR. Animated QRs are a standardized way to move large files across airgaps. That’s the exact sort of intercommunication that has always required interoperability. SSKR is a standardized way to shard a secret, currently focused on Shamir’s Secret Sharing. Because SSKR isn’t just about intercommunication, getting a variety of companies to use it was a bigger victory: it ensures those secrets will remain accessible and resilient into the far future. Both technologies are integrated with our Uniform Resources, which have been implemented by more than a dozen companies, offering true interoperability.
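To illustrate the sharding idea only (not SSKR itself), the sketch below splits a secret into n shares via XOR: no single share reveals anything, but all shares together reconstruct the secret. Real SSKR uses Shamir’s Secret Sharing with a k-of-n threshold plus a standardized CBOR/UR encoding, so this simplified n-of-n stand-in is not SSKR-compatible.

```typescript
import { randomBytes } from "node:crypto";

// Split a secret into n XOR shares: n-1 random buffers plus one
// buffer equal to the secret XORed with all the random shares.
function split(secret: Buffer, n: number): Buffer[] {
  const shares = Array.from({ length: n - 1 }, () => randomBytes(secret.length));
  const last = Buffer.from(secret);
  for (const s of shares) for (let i = 0; i < last.length; i++) last[i] ^= s[i];
  return [...shares, last];
}

// XOR all shares back together to recover the secret.
function combine(shares: Buffer[]): Buffer {
  const out = Buffer.alloc(shares[0].length);
  for (const s of shares) for (let i = 0; i < out.length; i++) out[i] ^= s[i];
  return out;
}

const seed = randomBytes(16);
const parts = split(seed, 3);
console.log(combine(parts).equals(seed)); // true
```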

But these successes have unfortunately been piecemeal. There’s just one company that I’m aware of that’s adopted a pretty wide swath of Blockchain Commons’ Gordian specifications, and that’s our long-time sponsor, Foundation. We most recently worked with them to support QuantumLink, a Post-Quantum Cryptography (PQC) method of Bluetooth communication that’s in their new Passport Prime device, but they’ve also implemented URs, Animated QRs, SSKR, and other Blockchain Commons interop specs. As a result, they’ve got well-studied, mature specifications that they didn’t roll themselves and that should be resilient and reliable far into the future. I think that adding in a variety of linked interop specs like this has a multiplicative effect.

I’d love to see more of this in the Bitcoin community, but a lot of people are resistant.

Why People Fight Interoperability

The primary reason that we see people fight interoperability is market dominance. The Bitcoin ecosystem has grown large enough that some of the bigger players have stepped back from interoperability.

I was sad to see ColdCard go this way, after they themselves built on Trezor and other open-source libraries. At least they’ve remained source-verifiable (meaning you can view their code in their repo) until you get down to the proprietary chip, but they were once one of maybe three hardware wallets that were fully open-source, and so fully interoperable.

But I think the recent release of Ledger Recover was even more of a tragedy. Here they were offering a big innovation: a way to recover seeds by splitting them up and distributing them off device, similar to Blockchain Commons’ own Collaborative Seed Recovery (CSR). But by keeping their protocol for distributing and recovering seed shares non-interoperable, they kept anyone else from offering seed vaults of their own, instead locking their users into their choices—which were very unpopular due to privacy-busting requirements for KYC information.

The exact opposite approach is taken by another of Blockchain Commons’ long-time developer partners, Craig Raw of the Sparrow wallet. He’s working hard to make Sparrow compatible with everything out there, but the difficulty he faces underlines the issues with the semi-interoperable state of most blockchains. He has to maintain NASCAR-like lists of otherwise incompatible products and introduce secret sauce to interoperate with each of them. We’re very lucky to have the Sparrow wallet working with all of these different devices, but it’s something that would never happen if there weren’t someone as dedicated as Craig working on the project.

For smaller companies, interoperability is a way in. Obviously, you should do it!

For bigger companies, interoperability means both trusting your engineers to provide the best experience and trusting your customers to recognize it. That’s a leap of faith, but one that I’d hope to see most companies make in our industry. After all, the idea of personal control is likely one of the reasons that your customers are working with digital assets in the first place!

The Potential for Other Blockchains

I hope that our work with Zcash (and before that with Bitcoin) is just a first step. I’d like to take that experience to other blockchains and offer new interoperability to grow those ecosystems as well.

Even after the work we’ve already done, Bitcoin may still need this sort of work the most, because it’s gotten so big. How could we make migration between wallets easier? Or just the migration of seeds or keys? How could we more widely standardize the backup of seeds with a methodology like Ledger Recover or our own CSR? How could we interoperate with third-party services such as pricing and fee lookups? Every one of these elements of interoperability would improve the ecosystem, but each requires a dedication to interoperability itself.

The same is true for other ecosystems that have gotten large enough to see multiple companies working on projects. Big changes could be the incentive for this, such as Monero’s move to Seraphis. But even without big changes on the horizon, big ecosystems grow to the point where interoperability becomes a requirement: Ethereum has a huge infrastructure built around WalletConnect, but we’ve talked with people in the ecosystem who think there’s real room for improvement. I hope that many chains (and other ecosystems) beyond Monero and Ethereum will see the advantages of improving interoperability for all the reasons I’ve laid out here, particularly independence for users, resilience for data, and openness for developers.

Are you a leader working in a digital-asset ecosystem? Would you like to work with me to take lessons learned from the Zcash project to create interoperable wallet formats, data-exchange formats, service formats, or something else? Drop me a line at team@blockchaincommons.com. I’d love to talk about how we can expand your ecosystem as well.

Monday, 21. April 2025

Berkman Klein Center

Reporting from “The Battle for Our Attention” Workshop @ Northeastern, April 11, 2025

Panel 4 on Law and Policy for the Attention Crisis, featuring Alex Roberts (moderator), Dick Daynard, Leah Plunkett, Woody Hartzog and Zephyr Teachout. Last Friday, April 11, three of us, Elettra Bietti, Aileen Nielsen, Laura Aade, co-organized a workshop titled “The Battle for Our Attention: Empirical, Philosophical and Legal Questions” which took place at Northeastern University School of Law, an
Panel 4 on Law and Policy for the Attention Crisis, featuring Alex Roberts (moderator), Dick Daynard, Leah Plunkett, Woody Hartzog and Zephyr Teachout.

Last Friday, April 11, three of us, Elettra Bietti, Aileen Nielsen, Laura Aade, co-organized a workshop titled “The Battle for Our Attention: Empirical, Philosophical and Legal Questions” which took place at Northeastern University School of Law, and benefited from the support of CLIC, Northeastern’s Center for Law, Information and Creativity, and the involvement of Harvard’s Berkman Klein Center community of fellows and faculty. The event brought together leading legal scholars, policymakers, economists, medical scientists, computer scientists, media scholars, and technologists to address the pressing issue of how today’s digital technologies are transforming the understanding, use, and allocation of human attention, including implications for how we spend our time and what information we consume.

The discussion was wide-ranging, interdisciplinary, and deeply enlightening. We discussed whether attention “actually exists”, how it works, the history and business models of attention capture, the challenges and findings that arise from empirical studies of attention and attention markets, the relation between attention, intimacy, convenience and the law of trademarks, possible analogies with tobacco and gambling litigation, and the policymaking associated with regulating engagement and children’s use of social media.

The event began with a panel on the political economy of attention. Yochai Benkler kicked off the discussion with an overview of the capitalist drive to capture and instrumentalize attention over time, beginning with the 19th century press and culminating in today’s digital technologies. He argued that markets won’t solve attention problems and could exacerbate attention harms, in contrast with Marshall Van Alstyne’s suggestion that a Coasean model of attention rights could help platform owners manage misinformation and reduce incentives to share inaccurate or false information. Where Benkler advocated for the decommodification of attentional experiences, Van Alstyne advocated for a market regime of incentives and individual rights to speak and listen. David Lazer, for his part, adopted a middle ground position, presenting several findings on the slow but steady decoupling of content from its sources. He showed that information has become less traceable to sources, and discussed chatbots’ role in producing knowledge that is increasingly divorced from reliable reference to authors and media sources.

The second morning panel addressed the empirics of attention. Michael Esterman discussed some of his clinical work, showing that attention is a fluctuating, fragile process deeply shaped by cognitive and environmental factors. Sustaining attention for long periods of time and across contexts remains a phenomenon that is not well understood, and Esterman presented results showing that blocking a population’s mobile phone access for two weeks could improve participants’ attention, as well as their mental health and well-being. He also pointed to the need for more measurement outside of laboratory settings to better understand the external validity of fundamental psychological results related to attention. Elena Glassman then approached attention from the perspective of an interface designer, emphasizing that platforms actively shape how users direct their attention, often without users realizing it. Glassman highlighted the danger of decontextualisation, where AI-driven tools summarize content by stripping away critical context and leaving users unaware of biases or omissions, and suggested ways to help people build reality-grounded mental models that provide access to contextual information, rather than hiding complexity. Christo Wilson concluded the panel with an overview of empirical approaches to studying attention platform business models, highlighting his role with David Lazer in creating and hosting the National Internet Observatory at Northeastern, a center that offers tools for researchers to study how people behave online in response to particular design features and platform strategies over long periods of time.

During the lunch keynote, FTC Commissioner and Law Professor Alvaro Bedoya spoke of his effort building a team of doctors and psychologists at the FTC whose focus and expertise includes children’s mental health and well-being. He also spoke of his work advocating for children’s privacy under COPPA and of the analogies and differences between tobacco, sports gambling and addiction to technological devices and products. Commissioner Bedoya suggested that more research needs to be done to better understand which products, platforms and specific technological features cause addiction and other mental health disorders.

The afternoon began with a third panel on media and communication systems for attention capture. Nick Seaver presented a spirited argument that attention may, in fact, not exist at all. While holding and waving a mouse jiggler, Seaver showed that attention is primarily defined or constructed by the way it is measured. Measurement, in turn, serves primarily as a managerial tool of control. While they might appear to be measuring participation, platform designers are in reality disciplining, tracking and controlling populations. Bridget Todd spoke of her work in the podcasting world, emphasizing the relation between intimacy and attention: audiences pay attention based on proximity to particular types of content and the emotions that content generates for them. Her view is that the current digital economy prioritizes profitable outrage over thoughtful storytelling, but that we should always push for the latter. Emily West presented some of her research on Amazon through the lens of convenience. Attention and addiction to digital products are promoted by appealing to convenience: platforms engineer frictionless experiences to generate user dependencies, producing a culture of learned passivity and inattention that quietly erodes agency. Rebecca Tushnet spoke of the law of advertising and the doctrines of dilution and confusion under trademarks law, explaining that the law simultaneously invokes but misunderstands the science of, or empirical realities of, human attention, protecting only those parts of attention that can be owned under intellectual property regimes. Similar to Seaver’s argument that attention is effectively what we can measure, Tushnet’s presentation highlighted that we live in an economy of signals and containers of attention.

The day ended with a panel discussion on legal and policymaking efforts in the attention space. Richard Daynard shared key takeaways from his litigation experience fighting tobacco and gambling companies. He explained that these industries intentionally engineer their products to addict users while funding research claiming the opposite: that their products are not addictive and that individuals who engage in excessive use are the ones to blame. He added that these companies often lose in product liability litigation, where strict liability regardless of intention is the standard. Zephyr Teachout then offered an overview of the evolving Supreme Court jurisprudence on the First Amendment, arguing that current shifts in the court’s composition and caselaw are opening the door to possible legislation and reform in the attention space, something that until recently seemed largely implausible. Leah Plunkett discussed state social media laws, describing them as providing financial compensation, privacy safeguards for children, and workplace protections. She focused on a recent Utah law that allows children to sue their parents for compensation when their image is used for profit in their parents’ social media feeds, and described her involvement in drafting a model law on this theme for the Uniform Law Commission. Woody Hartzog concluded the panel presentations, discussing his work with Neil Richards on wrongful engagement, a tort that would allow individuals to sue digital companies for profiting from their addiction and engagement while neglecting users’ well-being.

The event ended with participants discussing potential research overlaps, future collaborations and potential for advocacy across US regions. In the words of CLIC Director Alex Roberts, who moderated the last panel, “[i]f an interdisciplinary field of “attention studies” wasn’t already a thing, it is now.”

This event would not have been possible without support and assistance from Northeastern’s CLIC, Alexandra Roberts, Jennifer Huer, Walaa Al Awad, Natalia Pifferer, Brad Whitmarsh, and Jacob Bouvier. We also thank Harvard Law’s Laura Zeng, and BKC’s Bey Woodward and Jonathan Zittrain for additional enthusiasm and assistance.

We hope to continue this important conversation in the months and years to come with all of you. If you would like to join future conversations, we have created a regional mailing list which you can sign up to here.

Reporting from “The Battle for Our Attention” Workshop @ Northeastern, April 11, 2025 was originally published in the Berkman Klein Center Collection on Medium.

Thursday, 17. April 2025

Ceramic Network

The Future of Ceramic: Focusing on Recall

Our focus is shifting as we serve the growing world of AI agents. We're shipping ceramic-one as a standalone implementation and will be deprecating js-ceramic and ComposeDB.

Today, we’re sharing important news about the future of Ceramic.

With the rapid rise of agents as the new critical user and interface to the web, it’s increasingly clear that supporting a healthy, trustless, decentralized model for AI is essential. Our team, part of Recall Labs after our merge with Textile, is shifting our primary focus to Recall: a platform where AI agents prove themselves, improve themselves, and earn for their intelligence.

As part of this, we’ll be repurposing parts of Ceramic, deprecating others, and introducing a standalone open source version for the community. We want to openly communicate what this means for you and your applications built on Ceramic, as well as outline the path forward.

From Ceramic to Recall

Over the years, Ceramic has been pivotal in helping us understand decentralized data, composability, and robust synchronization. It’s been the leading technology for dozens of applications and networks in need of scalable, edge-centric, verifiable data storage and replication. We spent more than five years developing and iterating on Ceramic and are immensely proud of the advancements and technology we’ve shipped.

Even more than that, we are so thankful for your partnership. Our customers and community have relentlessly innovated on brand new tech, helped us move from prototypes to stable networks, and been the driving force for our continued commitment to open, decentralized data. Thank you for all of your support, and all you’ve helped us learn.

One of the persistent challenges we’ve faced with Ceramic has been the UX of decentralized data systems: managing keys and signing data, novel access control flows, and counterintuitive patterns for data flow have been challenges for many users and developers. Interestingly, all these properties are native and intuitive for the internet’s new class of users: agents.

Recall is a cryptoeconomic platform for AI agents to prove, improve, and earn based on their intelligence. It builds heavily on the technology we’ve built previously, but is a new platform and demands our full commitment.

What This Means for Ceramic Users

We deeply appreciate the trust and investment you’ve made in Ceramic. While our priority is Recall, we remain committed to setting Ceramic users up for success as we sunset certain parts of Ceramic. Here’s exactly what you need to know:

Introducing ceramic-one

Ceramic-one is the most performant, stable, and decentralized implementation of Ceramic. There is a new client SDK for reading/writing to ceramic-one: https://github.com/ceramicnetwork/rust-ceramic/tree/main/sdk

Ceramic-one introduces Recon, enabling reliable synchronization and historical data syncing across nodes. All existing data from ComposeDB remains fully accessible and readable through ceramic-one.

Ceramic-one will continue to function independently, with no dependency on our infrastructure or any other centralized authority. Further, Ceramic-one is under an MIT license, so anyone who wants to fork it and continue developing it on their own is welcome to do so.

Anchoring and Conflict Resolution on Ceramic-one

We’re committed to completing one final critical feature for ceramic-one: self-anchoring to Recall. This allows fully decentralized timestamping without relying on our centralized CAS or Ethereum L1, making ceramic-one truly self-sufficient. This will be implemented sometime after the mainnet release of Recall.

Manual conflict resolution is currently possible through ceramic-one by exposing all stream HEADs, allowing applications to apply their own business logic to select which branches of the stream’s history they wish to use. We’re also considering building automated conflict resolution (based on anchor timestamps), which is the only remaining functionality from js-ceramic not yet implemented in ceramic-one. If this feature is important to your use case, please reach out, as your feedback will influence our decision.
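
To make that concrete, here is a minimal Rust sketch of HEAD selection. The types and field names are illustrative assumptions rather than the actual ceramic-one SDK API; the business rule shown simply prefers the branch with the newest anchor timestamp and falls back to a deterministic tiebreak on CID.

```rust
// Hypothetical sketch: ceramic-one exposes every HEAD of a stream and leaves
// branch selection to the application. These types are illustrative only.

#[derive(Debug, Clone)]
struct StreamHead {
    cid: String,                   // tip commit of one branch of the stream
    anchor_timestamp: Option<u64>, // unix time of the branch's latest anchor, if any
}

/// One possible business rule: prefer the branch with the newest anchor;
/// break ties deterministically by CID. `None` (never anchored) sorts lowest.
fn select_head(heads: &[StreamHead]) -> Option<&StreamHead> {
    heads.iter().max_by(|a, b| {
        a.anchor_timestamp
            .cmp(&b.anchor_timestamp)
            .then_with(|| a.cid.cmp(&b.cid))
    })
}

fn main() {
    let heads = vec![
        StreamHead { cid: "bafy...a".into(), anchor_timestamp: Some(1_700_000_000) },
        StreamHead { cid: "bafy...b".into(), anchor_timestamp: Some(1_700_000_500) },
        StreamHead { cid: "bafy...c".into(), anchor_timestamp: None },
    ];
    println!("selected: {:?}", select_head(&heads));
}
```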

After these improvements, ceramic-one will reach a feature-complete MVP. At that point, we will only prioritize critical bug fixes. ceramic-one will continue to exist as stable software under an open MIT license—fully available for anyone who wishes to fork, improve, or independently evolve it.

Deprecation of js-ceramic and ComposeDB

Effective immediately, we’re deprecating js-ceramic and ComposeDB. We recommend migration to ceramic-one, the streamlined, performant successor, as soon as possible.

Migration Steps:

A new client SDK for ceramic-one is now available here: ceramic-one SDK
Step-by-step migration guidance is provided in our upgrade guides:
Migration Guide on Ceramic Blog
Detailed Upgrade Instructions on GitHub

Timeline and Support

ComposeDB and the Ceramic Anchor Service (CAS) will be completely shut down at least one month after Recall’s Mainnet launch (exact date TBD, but expected in mid-2025). After that date, ComposeDB-dependent apps will break if they have not been migrated to ceramic-one.

Recall: The Future of Decentralized Intelligence

Ceramic’s legacy and your contributions have directly influenced our development of Recall. Many of Ceramic’s strengths—openness, transparency, and decentralization—live on and evolve within Recall’s cryptoeconomic framework.

Recall aims to become a vibrant ecosystem for AI builders and developers. We warmly invite Ceramic users to explore the opportunities Recall offers: building intelligent agents, participating in verifiable competitions, and earning from proven performance.

We’re grateful for your support of Ceramic and excited about this next chapter with Recall. Our team remains available to assist your migration and ensure your continued success. If you have questions related to Ceramic, please reach out to us here. If you’re interested in learning more about Recall, find us on Twitter!

Warm regards,
The Recall Labs Team


Internet Safety Labs (Me2B)

Internet Safety Labs Provides Testimony for Massachusetts Data Privacy Acts


Internet Safety Labs testified in support of the Massachusetts Consumer Data Privacy Act (H78) and Massachusetts Data Privacy Act (H104), advocating for strong data minimization, restrictions on sensitive data sales, and robust enforcement to protect residents’ privacy. We’re grateful to the Massachusetts Legislature for hearing our testimony. The written testimony is available to view, along with a video of the testimony below:

Open PDF

The post Internet Safety Labs Provides Testimony for Massachusetts Data Privacy Acts appeared first on Internet Safety Labs.

Wednesday, 16. April 2025

MyData

The Robot Revolution: Why Privacy by Design is Non-Negotiable

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. The future is knocking, […]

Tuesday, 15. April 2025

Blockchain Commons

2025 Q1 Blockchain Commons Report


Blockchain Commons focused on the ZeWIF project for the Zcash blockchain during the first quarter of 2025, but that didn’t stop us from also advancing a few other priorities, including the initial release of our long-incubating Open Integrity project. Here’s what we’ve been working on.

ZeWIF Specification:
Why It’s Important
The ZeWIF Meetings
What We’ve Released So Far
What’s Still to Come

Post-Quantum Commons:
Why It’s Important
PQC Meeting
QuantumLink
What We’ve Released So Far

Open Integrity:
Why It’s Important
What We’ve Released So Far

New Articles:
The Right to Transact
SSI Orbit Podcast

New Dev Docs:
SSKR Pages
UR Pages
Meetings Pages
Improved Crate Docs

New Ports:
Lifehash to Rust

New Research:
Provenance Marks
Provenance References

ZeWIF Specification

The Zcash extensible Wallet Interchange Format (ZeWIF) has been Blockchain Commons’ main priority since the Zcash Community Grants program approved our proposal at the end of last year.

Why It’s Important. Blockchain Commons talks a lot about interoperability, and that’s what ZeWIF is: it’s a way to freely exchange data between Zcash wallets in a standardized form. What we don’t talk about as often is why interoperability is important.

It goes back to our Gordian principles. Interoperability supports at least three of them.

It supports independence because any user can freely move their data among the interoperable software systems.
It supports openness because any developer can easily join the ecosystem by adopting a mature, well-understood specification. This creates an environment of coopetition (cooperative competition) that leads to advances in technology and usability.
It supports resilience because the interoperable format makes data less likely to be lost: it will be in a form that is understood, and therefore accessible, well into the future.

For a wallet ecosystem, interoperability means that users have the freedom to move their digital assets among Zcash wallets. That’s precisely the independence we want users to have, which is why it’s been worth spending a few months on this project.

The ZeWIF Meetings. To support the ZeWIF project, Blockchain Commons held three meetings in the first quarter: on the initial wallet survey, on the first demo of the zmigrate tool, and on ZeWIF data abstractions. Meeting with developers to ensure that specifications serve everyone’s needs has always been a bedrock policy for Blockchain Commons, so we’ve of course extended it here. (At least one more meeting is planned, for April 16th, to demo the data file format.)

#1: Wallet Survey
#2: Zmigrate Demo
#3: Abstraction Discussion

What We’ve Released So Far. The ZeWIF project is built around the zewif library, which stores Zcash wallet data in an in-memory format. We’ve also written zewif-zcashd and zewif-zingo to demonstrate the import of data from those wallets to ZeWIF, while our partners at Zingo Labs have produced “zewif-zecwallet” for importing zecwallet data (though that PR hasn’t been merged yet). Finally, we authored zmigrate, which is a demo CLI for importing zcashd content. With these demos and libraries in hand, other wallet developers can start working to interchange their own data via the ZeWIF format.
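
As a rough sketch of that interchange flow, the Rust below shows the import-to-memory, export-from-memory pattern the libraries implement. The crate names above are real, but every type and function in this sketch is a placeholder; it is not the published zewif API.

```rust
// Placeholder sketch of the ZeWIF interchange flow: parse a source wallet
// into the common in-memory form, then serialize it for the target wallet.
// All names below are stand-ins, not the real zewif / zewif-zcashd /
// zewif-zingo APIs.

/// Stand-in for the in-memory ZeWIF wallet representation.
struct ZewifWallet {
    accounts: Vec<String>,
}

/// Placeholder for a zewif-zcashd-style importer.
fn import_from_zcashd(_raw: &[u8]) -> Result<ZewifWallet, String> {
    Ok(ZewifWallet { accounts: vec!["account-0".into()] })
}

/// Placeholder for a zewif-zingo-style exporter.
fn export_to_zingo(wallet: &ZewifWallet) -> Result<Vec<u8>, String> {
    Ok(wallet.accounts.join("\n").into_bytes())
}

fn main() -> Result<(), String> {
    // Same-machine migration: source bytes -> in-memory ZeWIF -> target bytes.
    let source_bytes = std::fs::read("wallet.dat").unwrap_or_default();
    let wallet = import_from_zcashd(&source_bytes)?;
    let target_bytes = export_to_zingo(&wallet)?;
    println!("exported {} bytes", target_bytes.len());
    Ok(())
}
```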

What’s Still to Come. With our releases so far, data can be interchanged between different Zcash wallets as long as it’s all done on the same machine: you just import into the in-memory ZeWIF format from one wallet, then export to another. But we expect most use cases will instead involve at least two different machines. That’s where the ZeWIF file format comes into play. Building on Gordian Envelope, it translates the ZeWIF in-memory storage into a fixed file that can then be moved to a different machine. We expect to demo the Envelope ZeWIF format at the upcoming April 16th meeting.

Post-Quantum Commons

Is Post-Quantum Cryptography (PQC) the next big thing? We were able to support our friends at Foundation with some PQC work just as we got the ZeWIF project going at the start of the year.

Why It’s Important. Quantum Computers are starting to appear, and though they’re far, far from what would be needed to break cryptography at the moment, there’s no telling when there’s going to be a sudden “quantum” leap. Though today’s cryptography might not need PQC, if you’re working on something that might be around for 5 or 10 years, you should be thinking about it!

PQC Meeting. Our March Gordian meeting contained all the details of our PQC work, including what Quantum Computing is.

QuantumLink. The highlight of the meeting was a discussion of QuantumLink from our friends at Foundation. This is a quantum-resistant protocol for Bluetooth communication that is a critical component of their new Passport Prime wallet. It allows them to use Bluetooth with high security and for that security to remain strong 5 or 10 years in the future, in case Quantum Computing does make those big strides forward.

What We’ve Released So Far. The QuantumLink technology was enabled by new PQC support that Blockchain Commons incorporated into its libraries. You can now use the PQC algorithms ML-DSA and ML-KEM in our Rust stack, from bc-components on up.
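
For readers new to key encapsulation, the toy Rust below shows the keygen / encapsulate / decapsulate call pattern that a KEM such as ML-KEM follows. The XOR “cipher” inside is deliberately trivial and insecure; it is not ML-KEM, and none of these names come from bc-components.

```rust
// Toy illustration of the KEM call pattern only. Real ML-KEM (FIPS 203) uses
// lattice cryptography; this XOR stand-in just shows the three-step shape.

fn keygen() -> (u8, u8) {
    let sk = 0x5A;      // toy secret key
    let pk = sk ^ 0xFF; // toy "public" counterpart
    (pk, sk)
}

/// Sender: derive a fresh shared secret plus a ciphertext for the recipient.
fn encapsulate(pk: u8) -> (u8, u8) {
    let shared = 0x42;           // fresh shared secret
    let ct = shared ^ pk ^ 0xFF; // toy ciphertext (equals shared ^ sk)
    (ct, shared)
}

/// Recipient: recover the same shared secret from the ciphertext.
fn decapsulate(sk: u8, ct: u8) -> u8 {
    ct ^ sk
}

fn main() {
    let (pk, sk) = keygen();
    let (ct, sender_secret) = encapsulate(pk);
    let receiver_secret = decapsulate(sk, ct);
    assert_eq!(sender_secret, receiver_secret);
    println!("both sides derived secret {:#04x}", receiver_secret);
}
```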

Open Integrity

We had one other big release in Q1: Open Integrity, a system for increasing the trust of software being developed in Git. Well, technically it was released on April 7th, the 20th anniversary of Git’s release, but this is something that Blockchain Commons Architect Christopher Allen has been working on for about a year, so we’re thrilled to get the word out.

Why It’s Important. We wrote a whole article discussing why Open Integrity is important. But, in short: Git repos are growing increasingly important for the deployment of critical software, yet Git doesn’t actually provide a high level of trust for those repos, despite the ability to sign commits. Open Integrity bridges the gap between what Git offers and what online software distribution needs.

What We’ve Released So Far. The Open Integrity project is now available in an early form at GitHub. A Problem Statement offers details on the issues we’re trying to solve and our solutions. We’ve also got a number of tools and scripts, the most important of which creates an inception commit on a repo. This inception commit is the root of trust that lays the foundation for ensuring that you always know who’s in control of a repo.

If you want to try out Open Integrity, see the Open Integrity Snippets file, which has complete instructions on how to get that inception-commit script running, plus many examples that will allow you to experiment with Open Integrity.
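
As a minimal sketch of the core idea, assuming git and a configured signing key are available on the system: an inception commit can be as simple as an empty, signed first commit. The real Open Integrity scripts do considerably more (key checks, auditing), so treat this only as an illustration.

```rust
// Minimal illustration of an "inception commit" as a root of trust, shelling
// out to git. Assumes git is installed and commit signing is configured.

use std::process::Command;

fn git(args: &[&str]) {
    let status = Command::new("git")
        .args(args)
        .status()
        .expect("failed to launch git");
    assert!(status.success(), "git {:?} failed", args);
}

fn main() {
    git(&["init", "demo-repo"]);
    std::env::set_current_dir("demo-repo").expect("chdir failed");
    // An empty, signed first commit: there is no content to tamper with, and
    // the signature pins who controlled the repo at its inception.
    git(&["commit", "--allow-empty", "-S", "-m", "Inception commit"]);
    // Every later commit chains back to this root; verifying its signature
    // verifies the repo's origin.
    git(&["log", "--show-signature", "-1"]);
}
```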

That’s it for our big Q1 projects, but we had a number of smaller releases over the course of the quarter as well.

New Articles

The Right to Transact. We think the right to transact should be an international freedom. Read more in Christopher’s recent article, “The Case for an International Right to Freedom to Transact”, which builds on his 2024 musing, “How My Values Inform Design”.

SSI Orbit Podcast. Christopher also was interviewed for the SSI Orbit Podcast. It’s been nine years since he wrote the foundational “Path to Self-Sovereign Identity”. Where do things stand today?

New Dev Docs

SSKR Pages. We updated our SSKR dev pages, with the big focus on differentiating SSKR URs (where you split up a key) from SSKR Envelopes (where you protect the entire contents of an Envelope by creating and splitting up a key). The test vectors also now demonstrate both cases.
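
To illustrate what “splitting up a key” means in the simplest possible case, here is a toy 2-of-2 XOR split in Rust. SSKR itself uses Shamir’s Secret Sharing with arbitrary m-of-n thresholds, so treat this only as a conceptual stand-in.

```rust
// Toy 2-of-2 secret split: share1 is random, share2 = key XOR share1.
// Either share alone reveals nothing about the key; both together recover it.
// SSKR uses Shamir's Secret Sharing, not this scheme.

fn split(key: &[u8], random: &[u8]) -> (Vec<u8>, Vec<u8>) {
    assert_eq!(key.len(), random.len());
    let share1 = random.to_vec();
    let share2: Vec<u8> = key.iter().zip(&share1).map(|(k, r)| k ^ r).collect();
    (share1, share2)
}

fn combine(share1: &[u8], share2: &[u8]) -> Vec<u8> {
    share1.iter().zip(share2).map(|(a, b)| a ^ b).collect()
}

fn main() {
    let key = b"example-seed-key";
    // In real code, draw the random share from a CSPRNG.
    let random = [7u8; 16];
    let (s1, s2) = split(key, &random);
    assert_eq!(combine(&s1, &s2), key.to_vec());
    println!("recovered the key from both shares");
}
```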

UR Pages. We similarly did some big updates to our UR dev pages. Here the issue was that URs had seen some changes over the years, especially as we locked down CBOR tag registration, and our examples were no longer up-to-date. Now, every page and every test vector should be correct. There’s also a new page on URs for Gordian Envelope.

Meetings Pages. All of Blockchain Commons’ meetings are now documented on a new meetings page, which also includes subpages with videos and slides (and usually transcripts) of everything from the last few years!

Improved Crate Docs. Finally, we’ve used some lessons learned from the documentation of ZeWIF to improve the documentation of our Rust stack. As a result, docs.rs now has improved docs for bc-dcbor, bc-components, and bc-envelope.

New Ports

Lifehash to Rust. Lifehash is now available in a new Rust implementation courtesy of Ken Griggs. Lifehash is a critical element of the object identity block, which can help users recognize seeds and keys. We hope this port will enable wider deployment.

New Research

Provenance Marks. One of our newest innovations, courtesy of Blockchain Commons Lead Researcher Wolf McNally, is the provenance mark. Provenance marks form a cryptographically secured chain that facilitates easy verification of authenticity. We’ll have more on them in the months ahead, but in the meantime, you can read Wolf’s research paper on the topic!
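
The general idea of a verifiable chain of marks can be sketched with plain hash chaining, as below (assuming the sha2 crate). This is only an illustration of chaining; the actual provenance-mark construction described in Wolf’s paper differs.

```rust
// Illustrative hash chain: each mark commits to its predecessor's digest, so
// any reordering or substitution breaks verification. Not the actual
// provenance-mark design.

use sha2::{Digest, Sha256};

#[derive(Debug)]
struct Mark {
    payload: String,
    prev_digest: [u8; 32], // digest of the previous mark (zeros for genesis)
    digest: [u8; 32],      // digest over prev_digest || payload
}

fn next_mark(prev: Option<&Mark>, payload: &str) -> Mark {
    let prev_digest = prev.map(|m| m.digest).unwrap_or([0u8; 32]);
    let mut hasher = Sha256::new();
    hasher.update(prev_digest);
    hasher.update(payload.as_bytes());
    let digest: [u8; 32] = hasher.finalize().into();
    Mark { payload: payload.to_string(), prev_digest, digest }
}

/// Minimal check: each mark must commit to its predecessor's digest.
/// (A fuller check would also recompute each digest from its inputs.)
fn verify(chain: &[Mark]) -> bool {
    chain.windows(2).all(|w| w[1].prev_digest == w[0].digest)
}

fn main() {
    let genesis = next_mark(None, "work #1");
    let second = next_mark(Some(&genesis), "work #2");
    assert!(verify(&[genesis, second]));
    println!("chain verifies");
}
```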

Provenance References. If you want to start playing with provenance marks right now, we’ve already released a series of reference apps and libraries. They include our original Swift implementation, our newer Rust implementation, and a CLI that can be used to create and manage marks!

That’s it for the moment. For the next quarter, we’ll be closing out our initial work on ZeWIF in April, and we’ll be offering more looks at Provenance Marks in the months ahead.

If you’d like to work with us on these or other topics, drop us a line about becoming a Blockchain Commons partner.

Monday, 14. April 2025

We Are Open co-op

What does AI Literacy look like for young people aged 14–19?

A research project with the Responsible Innovation Centre at the BBC

We Are Open Co-op (WAO) is a collective of individuals who share a commitment to ethical, inclusive, and sustainable practices in all aspects of our work, including AI literacy. Our approach to this area is grounded in the belief that AI is an extension of digital literacies, not a separate field. We aim to demystify AI, helping people recognise that the digital literacy skills they already possess are directly applicable to AI.

AI’s societal impacts are significant and well-documented, but the public often struggles to grasp its opportunities and risks. A trusted guide is needed to balance optimism with caution, particularly given AI’s potential effects on jobs, culture, and other critical areas.

The project

We are delighted to have started work on a new project with the Responsible Innovation Centre for Public Media Futures, hosted by the BBC. The BBC has a long and valued history of delivering high-quality educational initiatives that help audiences understand and navigate new technologies, and we believe it is well placed to support public understanding of, and engagement with, emerging technologies such as AI.

Our focus is research and analysis aimed at finding gaps in provision for younger audiences. Over the next few months we’ll be interviewing experts, reviewing resources, and scrutinising frameworks. We will use this research to create a report, along with a framework and guide, which will ultimately help the BBC create policies and content for young people. We’ll be sharing an open version of what we create, so look for those in the summer.

Our approach

Starting with desk research, and building on the work we’ve already curated, we’re creating a library of interesting definitions, frameworks, and resources that can help us understand what other people are exploring when it comes to AI Literacy in combination with public media.

As with our work with Friends of the Earth where we researched AI and environmental justice, we will bring together a variety of experts to give feedback and sense check what falls out of some of the research.

Along the way, we’ll share updates based on our findings, so if you know of a person, organisation, initiative, framework, or resource that we should take a look at, please let us know! We’ll also be updating ailiteracy.fyi, our one-stop-shop for all things related to this important topic.


Digital ID for Canadians

Spotlight on Paays


1. What is the mission and vision of Paays?

Building the foundation of trust in Auto Finance. Empowering Dealers and Lenders to serve their customers better, faster and more securely.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Auto Finance, and the Automotive Sector more generally, rely on trustworthy digital identity verification to ensure that people are who they say they are and that the information being provided is authentic.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Paays is enabling thousands of Auto Dealers in Canada with the technology to perform digital identity verification, protect their businesses and employees, and reduce or eliminate criminal opportunities for fraud.

4. What role does Canada have to play as a leader in this space?

Canada has a significant opportunity to be a leader in digital identity and trust services, across many industries, including Auto Finance.

5. Why did your organization join the DIACC?

Paays joined DIACC to partner with a leading organization and its members, focused on bringing digital identity and trust services to the Canadian marketplace.

6. What else should we know about your organization?

Paays has a singular, laser-focused approach to building the foundation of trust in Auto Finance!


DIF Blog

DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group


The Decentralized Identity Foundation (DIF) has officially launched its Hospitality & Travel Working Group, evolving from the ongoing H&T Special Interest Group (SIG). This new working group will focus on developing standards, schemas, processes, and documentation to support the self-sovereign exchange of data between travelers, services, intermediaries in the hospitality and travel industry, and their AI agents.

Mission and Focus

The primary goal of the working group is to enable travelers to maintain ownership and control over their personal data while allowing for seamless interactions with travel service providers. The group will address critical aspects of traveler profiles, focusing on data portability, privacy, and interoperability across the travel ecosystem.

Meeting Schedule

The working group will convene twice weekly:

Tuesdays at 14:00 UTC
Fridays at 14:00 UTC

These regular meetings will facilitate ongoing collaboration among industry stakeholders, technology providers, and standards bodies.

Leadership Perspectives

"We are enormously proud that the hard work of our dedicated Hospitality & Travel SIG has led to a Working Group that will develop key specifications. From sharing boarding passes and hotel preferences to loyalty programs and dietary requirements, travelers constantly provide the same information to different companies throughout their journey.

"Travelers need to share, and overshare, the broadest range of personal data on their voyage, ranging from seat or dietary preferences, to government identities or passports, across multiple service providers who may not have the best data handling practices. This working group will develop standards that allow travelers to control exactly what data they share, with whom, and for how long - eliminating both unnecessary data exposure and the frustration of repeatedly entering the same information," said Kim Hamilton Duffy, Executive Director of the DIF.

Douglas Rice, industry veteran and Hospitality & Travel Working Group chair added: "The travel ecosystem has long struggled with fragmented approaches to customer data and identity management. This working group will help establish the technical foundations needed for travelers to maintain control of their data while enabling the personalized experiences they expect. We're building toward a future where your travel preferences, loyalty information, and credentials can move with you seamlessly across your journey—all while maintaining the highest standards of privacy and security, and enabling AI agents to act on verified information about the traveler.”

Next Steps

The working group will initially focus on defining standardized schemas for travelers or their digital agents to present profiles, establishing protocols for secure data exchange, and developing guidelines for implementation across various travel touch points. Industry participants are encouraged to join the working group to contribute their expertise and perspectives.

For more information about participating, visit the DIF Hospitality & Travel Working Group website.

Friday, 11. April 2025

Digital ID for Canadians

Spotlight on Keyless


1. What is the mission and vision of Keyless?

Our vision is for a safer, more private world. Keyless is on a mission to redefine how the world authenticates – enabling people to securely access services with a simple look, without compromising their biometric data.

2. Why is trustworthy digital identity critical for existing and emerging markets?

We can answer this with a simple example: mule accounts. Fraudsters will pay people to open bank or crypto accounts using their real ID, then take over and use those accounts to launder money. On paper, the account looks legitimate – but the person using it isn’t who the bank thinks it is.

This kind of fraud is only possible when identity assurance is weak. With trustworthy digital identity, it can be stopped by verifying who is really behind the screen, not just at sign-up but every time they log in, send a payment, or change account details.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital identity is the foundation for secure online interactions. It drives inclusion, cuts costs, and reduces fraud. When companies can trust their users, they grow faster and more confidently.

Keyless sits within consumer apps – often in banking and fintech, but also in government and university portals. Whenever a user performs a sensitive action, like logging in or approving a payment, Keyless triggers an authentication selfie using the device’s camera. Unlike text messages, call centers, or even FaceID, this process actually proves who the user is – not just that they have access to a device or mobile number.

4. What role does Canada have to play as a leader in this space?

Canada is already taking significant steps toward becoming a global leader in digital identity. The government is actively developing a nationwide digital ID program designed to make accessing both public and private services faster and more secure.

By continuing to invest in public-private collaboration, Canada can lead the way in building trusted, inclusive digital ecosystems that other countries look to for guidance.

5. Why did your organization join the DIACC?

We joined DIACC to help shape the future of digital identity in a way that’s secure, user-friendly, and preserves citizen privacy. We believe in collaboration and are excited to contribute our expertise in biometric authentication and privacy-preserving technologies.

6. What else should we know about your organization?

Within the biometric authentication space, Keyless is known for its privacy-preserving approach. Uniquely, we authenticate users without storing their facial biometric data anywhere – keeping their biometric information completely private.

Thursday, 10. April 2025

FIDO Alliance

Highlights from the FIDO Alliance APAC Regional Member Meetup & Workshop: Collaborating for a Passwordless Future


On March 18, 2025, the FIDO Alliance convened its APAC regional members and key stakeholders at the Telecommunications Technology Association (TTA) Auditorium in Seongnam, South Korea, for a full-day meetup and workshop. The event focused on advancing simpler, stronger authentication across the region and served as a vital platform for technical updates, regional progress, and real-world implementation insights around passkeys.

Among the 70+ participants on-site, we were honored to welcome six FIDO Alliance Board members representing Samsung Electronics, NTT Docomo, Lenovo, RaonSecure, Egis Technology, and Mercari—underscoring the global engagement and strategic importance of this gathering.

Before the main program, international attendees were invited to a special TTA Lab Tour, offering a behind-the-scenes look at Korea’s testing and standards infrastructure supporting FIDO and other telecommunications technologies.

Showcasing Technical Leadership and Regional Collaboration

The day featured an exciting lineup of expert speakers and educational sessions, reflecting the expanding role of passkeys as a trusted, phishing-resistant, and user-friendly authentication solution for both public and private sectors.

The event opened with an inspiring keynote by Dr. Koichi Moriyama (Chair, FIDO Japan WG; W3C Advisory Board Member), who emphasized the importance of global collaboration in setting interoperable, secure technology standards.
David Turner, Senior Technical Director at FIDO Alliance, shared in-depth updates on passkey advancements and highlighted future areas of focus, including developer support, user experience, and broader international engagement.
Wei-Chung Hwang of ITRI presented a thoughtful comparison of passkeys and PKI, outlining how the two can coexist and complement each other within modern authentication architectures.
Ki-Eun Shin, Principal Software Engineer and FKWG Vice Chair, offered a practical guide for developers building scalable and secure passkey systems, covering implementation, testing, and UX considerations.
Dovlet Tekeyev from AirCuve introduced Korea’s updated Zero Trust Guideline 2.0, walking the audience through key principles, recommendations, and how FIDO solutions align with national cybersecurity strategies.
Eugene Lee, Vice President at RaonSecure, shared cross-industry deployment experiences of FIDO-based biometric authentication, highlighting its adaptability to diverse sectors including finance and telecom.
Jong-Su Kim, Principal Security Engineer at Samsung Electronics, concluded the technical sessions by sharing Samsung’s vision of simplifying cybersecurity for all users through FIDO-driven innovation.

Regional Insights and Shared Momentum

The day closed with regional updates from representatives of Japan (Naohisa Ichihara, FJWG Co-Vice Chair and CISO at Mercari), China (Henry Chai, FCWG Chair and CEO at GMRZ Technology, a subsidiary of Lenovo), Taiwan (Karen Chang, FTF Forum Chair and VP at Egis Technology), Malaysia (Sea Chong Seak, CTO at Securemetric), and Vietnam (Simon Trac Do, CEO & Founder at VinCSS), each presenting local progress on passkey deployment. Speakers discussed technical challenges, user adoption, and the growing importance of cross-border cooperation to accelerate the passwordless future across APAC.

Moving Passwordless Forward Together

The FIDO APAC Regional Member Meetup & Workshop reaffirmed our collective commitment to advancing phishing-resistant passwordless authentication across the region. Thanks to all the speakers, sponsors, and attendees who contributed to this energizing and forward-looking event.

Stay tuned for more cross-regional collaborative events in APAC and updates from the FIDO Alliance as we continue to make online authentication simpler and stronger together.