Last Update 6:42 PM October 29, 2025 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Wednesday, 29. October 2025

The Engine Room

Mapping responses to TFIPV across the Majority World

In July 2025, with support from Numun Fund, The Engine Room conducted a brief research study to better understand the actors, needs, and key trends in responding to technology-facilitated gender-based violence (TFGBV) and intimate partner violence (IPV).

The post Mapping responses to TFIPV across the Majority World appeared first on The Engine Room.


ResofWorld

China’s $143 billion push to dominate the global EV industry

After years of investing in Europe, Chinese EV and battery firms are turning to Asia, Africa, and Latin America.
Chinese electric-vehicle companies have invested big bucks over the past decade to establish their global supremacy. Companies such as CATL and BYD, among others, poured $143 billion into foreign EV...

Next Level Supply Chain Podcast with GS1

Small Farms, Big Impact: Transforming Food Safety in Schools

A single contamination can have serious consequences for vulnerable populations, such as students. Traceability is essential to ensure food safety from the farm to the school cafeteria.

In this episode, Jim White, President and Co-Founder of ENSESO4Food, and Candice Bevis, Farm Operations Manager at Spartanburg County School District 6, explain how digital traceability simplifies FSMA 204 compliance and strengthens confidence in the food supply chain. 

They discuss how affordable technology and GS1 standards help small farms operate with the same precision as large suppliers, connecting farms, processors, and cafeterias for a safer food system.

In this episode, you'll learn:

How to ensure accountability in food sourcing and delivery

Why simplicity and affordability matter for technology adoption

Ways schools are using visibility to improve food safety

Things to listen for:
(00:00) Introducing Next Level Supply Chain
(04:22) How the Trakkey partnership began
(07:59) Connecting farms, processors, and schools
(12:33) How digital tools simplify compliance
(18:03) Teaching students where their food comes from
(20:56) Making food safety simpler for farmers
(23:31) Reducing waste and improving efficiency
(26:32) Jim and Candice's favorite tech

If you're interested in becoming or working with a GS1 US solution partner, please visit AccuGraphix's website.

Connect with GS1 US:
Our website: www.gs1us.org
GS1 US on LinkedIn

Connect with the guests:
Jim White on LinkedIn
Check out ENSESO4Food and Spartanburg County School District Six

Tuesday, 28. October 2025

Digital ID for Canadians

Leading the Way: Client IDV Success in Canada’s Legal Sector

Canada’s legal sector has achieved over 700,000 client identity verification (client IDV) transactions in a single year (October 1, 2023 – September 30, 2024), proving that secure, convenient digital client IDV can operate at scale in highly regulated environments.

DIACC member organizations are leading this transformation in a $50-70M legal-sector verification market that points to an economic opportunity of $500M to $1B spanning financial services, healthcare, telecommunications, and government.

The achievement: Client IDV has moved from experimental to essential in the legal sector, positioning Canada as a global early adopter with proven solutions ready for trusted markets.

The opportunity ahead: Ontario’s success provides a model. Extending this to Quebec, Atlantic Canada, and the territories will drive economic prosperity.

Three Success Stories Point the Way Forward

700,000+ transactions prove the model works: Canadian lawyers, notaries, and clients have embraced digital verification as reliable, secure, and convenient.
DIACC members are innovating verification approaches: from streamlined single-method solutions to comprehensive triple-verification systems, providers are demonstrating flexibility for different risk profiles.
Market demand continues growing: 16.7% global CAGR signals sustained opportunity for Canadian providers who achieve economic scale.

Strategic Opportunity: The gap between leading jurisdictions and emerging markets creates clear pathways for growth. This requires collaborative action between industry innovators, forward-thinking regulators, and government partners.

Download the briefing here.

DIACC and SIROS Foundation Partner to Advance Global Leadership in Digital Trust and Interoperable Credentials

Toronto, Canada, October 28, 2025 – The DIACC and the SIROS Foundation are proud to announce a strategic collaboration to accelerate trust, interoperability, and innovation in digital credentials and wallets.

This partnership brings together two recognized leaders in digital trust and identity services and responsible innovation to develop solutions that empower individuals, institutions, and governments to securely verify and share information across borders and sectors. The collaboration will focus on opportunities to support and extend academic and professional credentialing, labour mobility, and workforce upskilling, building the foundation for a more connected, inclusive, and trusted digital economy.

“Digital trust and identity are not only technology capabilities; they are economic and social enablers,” said Joni Brennan, President of DIACC. “Working with SIROS allows us to connect Canada’s public and private sector leadership with global, industry-needs-driven best practices to deliver real-world, interoperable solutions that benefit all.”

Through this collaboration, DIACC and SIROS will:

Address challenges and opportunities related to international interoperability, digital trust and identity assurance policy, and trust in specific use cases.
Align efforts to enhance the secure and trusted exchange of digital credentials across sectors and jurisdictions.
Develop and showcase real-world use cases proving the effectiveness of interoperable digital trust and identity solutions.
Collaborate to advance policy alignment and technical interoperability through pilot projects and testing around use cases, including: travel, education, payments, organizational/business credentials, labour mobility, professional upskilling, and academic integration.

“The SIROS Foundation is dedicated to building digital ecosystems that are secure, inclusive, and human-centred,” said Stina Ehrensvärd, Board of Directors SIROS Foundation. “By joining forces with DIACC, we can advance trust infrastructure that strengthens economic resilience and opportunities across Europe and Canada.”

Together, DIACC and SIROS are positioning Canada and its partners at the forefront of global digital trust innovation, advancing a model where secure, verifiable, and portable credentials enable greater mobility, economic growth, and public confidence in the digital age.

About DIACC
The DIACC is a non-profit coalition of public and private sector leaders working to unlock economic and societal opportunities for Canadians through secure, privacy-enhancing digital trust and identity verification solutions. 

About SIROS Foundation
The SIROS Foundation advances secure, inclusive, and responsible digital ecosystems through research, collaboration, and strategic initiatives that prioritize trust, transparency, and human-centric innovation.


DIACC Congratulates Quebec on Historic Adoption of National Digital Identity Legislation

October 28, 2025 – We proudly congratulate the Government of Quebec and the Assemblée nationale on the adoption of Bill 82, the Act respecting national digital identity and amending other provisions. This landmark legislation represents a significant milestone in Canada’s digital transformation journey and demonstrates Quebec’s leadership in advancing secure, citizen-centric digital solutions.

“The adoption of Bill 82 is a momentous achievement that positions Quebec at the forefront of digital identity innovation in Canada,” said Joni Brennan, President, DIACC. “This legislation embodies the core principles that DIACC has long championed: privacy, security, user control, and voluntary adoption. Quebec’s commitment to putting citizens in control of their personal information while enhancing access to government services sets a powerful example for jurisdictions across the country.”

DIACC particularly commends Quebec’s thoughtful approach to implementing national digital identity, which includes:

Privacy by design: Ensuring citizens only share the minimum personal information required for each service
Voluntary participation: Respecting individual choice in the use of digital identity credentials
Enhanced security: Reducing risks of identity theft and fraud through digital attestations
Citizen empowerment: Giving individuals control over their personal information and requiring approval for each use
Public consultation: Commitment to consulting the public on the use of biometric characteristics

The legislation’s emphasis on data governance, cybersecurity, and digital sovereignty aligns perfectly with DIACC’s Pan-Canadian Trust Framework and demonstrates a comprehensive approach to building trustworthy digital ecosystems.

DIACC looks forward to continuing its collaboration with the Ministère de la Cybersécurité et du Numérique and other stakeholders as Quebec implements this transformative initiative. This achievement reinforces the importance of coordinated efforts across Canadian jurisdictions to create interoperable, trustworthy digital identity solutions that serve all Canadians.

Joni Brennan
President, DIACC


ResofWorld

Most of the world’s space launches are controlled by just three countries

China, the U.S., and Russia dominate space launches. Can alternatives emerge?
The world is increasingly reaching for the stars, but only a few countries have the ability to make it there. As of 2023, over 85 nations have space exploration programs,...

Blockchain Commons

Musings of a Trust Architect: The Exodus Protocol

It’s 2025. Digital infrastructure has become the heart of not just our economy, but our culture.

But it can be taken from us in a heartbeat.

Those of us in the decentralized space have warned against this future for a long time, but it first hit me in a truly visceral way a decade ago when I was teaching technology leadership at Bainbridge Graduate Institute. I supported my students with a powerful coordination system that tied together del.icio.us bookmarks, Google Reader, and Google Docs. My students could discover information through RSS feeds, collaboratively bookmark and annotate it, and then work with their peers using that shared data. It was an effective tool for both learning and cooperative action that was soon adopted by the whole school.

But then Yahoo sold del.icio.us and Google shut down Reader. Without warning, without a chance to migrate, our learning community’s entire digital infrastructure collapsed. A generation of learners was quietly deplatformed from the tools that had empowered them to think, share, synthesize and learn together.

By now, everyone probably has a story of digital infrastructural loss. How they lost their Google+ circles. How their internet radio turned off forever one day. How their digitally stored MP3s disappeared into the ether. It’s a pattern that’s encouraged by the perverse incentives of capitalism. A useful service becomes essential infrastructure. Companies move in to collect rent on the technology. Then, they exert their power by reducing features and increasing distractions. Eventually, it becomes unprofitable and they kill it. (Cory Doctorow calls this pattern enshittification.)

Which leads to the question that haunts me: how can we create digital infrastructure that can’t be taken from us?

Enter the Exodus

To resolve this problem, we need what I call Exodus Protocols. These are systems that free us from the control of external sources (like Google or Yahoo! or Sony) by creating infrastructure that doesn’t require infrastructure.

What in the world do I mean by that?

There’s actually a well-known Exodus Protocol: Bitcoin. It provides financial infrastructure, allowing users to transfer value among themselves, but it does so without enshrining permanent infrastructure or empowering centralized authorities.

Miners can come and go. Full nodes can exist as services, but you can also spin them up locally. Some type of network is important for miners to collect transactions and form them into blocks, but the average user doesn’t need that: they can create their own transactions air-gapped and transfer them using QR codes. It’s generally hard to censor Bitcoin, and it would be unthinkable for the entirety of it to disappear in any short amount of time.

Bitcoin demonstrated something profound: that fundamental capabilities can exist as mathematical rights rather than centralized privileges. When your ability to transact depends on a bank’s approval, it’s not a right, it’s permission. Bitcoin restored transaction as a right through autonomous infrastructure. That’s an Exodus Protocol.

Unfortunately, Bitcoin only creates an Exodus Protocol for a small (but important) corner of what we do on the internet: value transfer. It does have some cousins, such as IPFS for data storage, but there aren’t great Exodus Protocols for the vast majority of what we do within the digital sphere. We need more Exodus Protocols, to free us from dependency on centralized services, so that our carefully constructed infrastructures don’t suddenly disappear, as happened for my students at BGI. We need to empower coordination (whether it be for my students or a board of directors), collaboration (at a forum or on a shared artists’ whiteboard), and identity (to correct the missteps made by the self-sovereign identity movement).

Five Patterns for Creating Autonomous Infrastructure

An Exodus Protocol is only successful if it’s designed to actually empower through autonomous service. We don’t want to just create a new digital prison. To design for success requires five architectural principles that help to create the architecture of autonomy itself.

An Exodus Protocol must …

1. Operate Without External Dependencies

If something requires permission to operate, it’s not autonomous. If it stops working when a company fails or a government objects, it’s infrastructure built on sand.

We instead need truly independent architecture. This can be accomplished either through objects that are self-contained, with everything needed for operation either within the object or derivable through math (e.g., autonomous cryptographic objects such as Gordian Clubs); or through distributed operations without centralization, where any operator can be replaced.

₿ — Bitcoin’s approach: Bitcoin took the distributed approach, with validation, verification, and recording of value transfers done by hundreds of thousands of replaceable independent nodes. There’s no central server or authority.

2. Encode Rules in Mathematics, Not Policy

Policy means that someone else decides how a service works. They can make arbitrary or biased decisions. They can succumb to coercion, and they can censor.

In contrast, math doesn’t discriminate, doesn’t take sides, and doesn’t change its mind under pressure. Replacing policy rules with mathematical rules means introducing cryptographic proofs such as private keys and signatures. They make verification deterministic: the same inputs always produce the same outputs.
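As a toy illustration of this determinism (my sketch, not something from the essay), here is a minimal Python example using the `cryptography` package: the same key, message, and signature verify identically for anyone, anywhere, with no policy layer in the loop.

```python
# Minimal sketch of policy-free, deterministic verification.
# Requires the third-party "cryptography" package (pip install cryptography).
# The message and keys are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

message = b"transfer 1 unit to alice"

# Whoever holds the private key controls the message; no registrar decides.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(message)

public_key = private_key.public_key()
try:
    # Verification is pure math: the same inputs always give the same
    # result, offline, on any machine, under any jurisdiction.
    public_key.verify(signature, message)
    print("valid: control of the key proves control of the message")
except InvalidSignature:
    print("invalid: the math rejects it, no matter who asks")
```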

₿ — Bitcoin’s approach: With Bitcoin, the control of value is ultimately determined by who holds a private key. Sophisticated systems such as threshold signatures and secret sharding offer more nuanced control.

3. Make Constraints Load-Bearing

Constraints make systems less flexible. But that’s not necessarily a bad thing. We just need to ensure that those constraints serve dual purposes by also supporting a system’s autonomy.

We must be aware of what constraints mean: how they’re helpful and how they’re harmful. Then we must design constraints that create the autonomous infrastructure that we want. As an example: if a credential can’t expire, then it works forever. Similarly, if it can’t be revoked, then it offers perfect access to past content.

₿ — Bitcoin’s approach: Bitcoin offers a number of constraints that strengthen its autonomy, largely by building upon mathematics rather than arbitrary policy.
Transactions can’t be reversed, which means: 🟥 that you can’t walk back a mistake; but also that 🟢 your funds can’t be seized by fiat.
Rule changes require consensus, which means: 🟥 that important updates can require months or years of coordination; but also that 🟢 your funds can’t be threatened by an arbitrary increase of the money supply.

4. Preserve Exit Through Portability

Centralized systems lock you in, which is the opposite of sovereignty. True autonomy requires not just the ability to leave, but the ability to take everything with you. Without the ability to walk away, consent collapses into coercion.

Escaping lock in requires interoperability and open standards. Data and credentials must be portable across implementations without proprietary formats that trap users.

₿ — Bitcoin’s approach: Bitcoin keys work in any wallet. Standards for the use of seeds to generate HD keys and the use of wallet descriptors further this interoperability.
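To make that interoperability concrete, here is a small sketch assuming the Python `mnemonic` package (a reference BIP-39 implementation); the generated phrase is illustrative, but the point is real: the same words deterministically yield the same seed in any conforming wallet.

```python
# Sketch of seed portability under BIP-39, using the third-party
# "mnemonic" package (pip install mnemonic). Values are illustrative.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")
words = mnemo.generate(strength=128)  # a 12-word recovery phrase
print("recovery phrase:", words)

# Any BIP-39-conforming wallet derives the identical 64-byte seed from
# these words, which is what lets a user leave one wallet for another.
seed_a = mnemo.to_seed(words, passphrase="")
seed_b = mnemo.to_seed(words, passphrase="")
assert seed_a == seed_b and len(seed_a) == 64
print("seed (hex):", seed_a.hex())
```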

5. Work Offline and Across Time

Infrastructure that requires connectivity can be denied connectivity. Infrastructure that requires specific platforms can be denied those platforms.

True autonomy works with whatever channels remain available when coercion attempts to deny others. It requires asynchronous operations, creating services that work during outages and across decades regardless of infrastructural changes.

₿ — Bitcoin’s approach: Bitcoin transactions can be signed offline and broadcast later. The protocol doesn’t care about internet connectivity for core operations.
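Here is a hedged sketch of that air-gap workflow, assuming the Python `qrcode` and `cryptography` packages; the payload below is a stand-in for a transaction, not Bitcoin's actual wire format.

```python
# Illustrative air-gap pattern: sign offline, carry the result by QR,
# broadcast whenever connectivity returns. "payload" stands in for a
# real transaction. Requires: pip install cryptography "qrcode[pil]"
import qrcode
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- on the air-gapped device: no network needed for any of this ---
signer = Ed25519PrivateKey.generate()
payload = b"illustrative-transaction-bytes"
signed = payload + b"|" + signer.sign(payload)

# Encode the signed payload as a QR image to carry across the air gap.
qrcode.make(signed.hex()).save("signed_payload.png")

# --- on an online device, possibly days later ---
# Scan the QR, decode the hex, and hand it to whatever channel remains.
print("ready to broadcast", len(signed), "bytes whenever a channel exists")
```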

Foundation, Not Fiat

Not every digital service needs to be an Exodus Protocol. In fact, there are definitely services where you want centralization. You want more vulnerable people to be able to recover their funds in case of fraud. You want parents to be able to act as guardians for their children.

But there are services that are irreplaceable and so would benefit from Exodus. These are digital services that store data, create identity, and manage assets that would be difficult to replace. And there are times when Exodus Protocols become more important than ever: when we’re under threat, under siege, or just struggling to survive.

We still want to offer the ease of access and usability of centralized services in those situations where they’re appropriate, but we want to build those centralized services upon a strong, unwavering foundation of Exodus. If the centralized services fail, there must still be foundations that cannot fall.

Exodus Technology

Earlier this month, I introduced Gordian Clubs. They’re another example of the Exodus Protocols that I discuss in this article.

Here’s how Gordian Clubs, which are Autonomous Cryptographic Objects (ACOs) that can be used to coordinate (by sharing data) and collaborate (by updating data), fulfill the five patterns.

(This is a repeat of a list from the Gordian Clubs article.)

Gordian Clubs …

Operate Without External Dependencies. Everything you need is within the Gordian Club: data and permissions truly operate autonomously.
Encode Rules in Mathematics, Not Policy. Permits are accessed through mathematical (cryptographic) constructs such as private keys or secret shares.
Make Constraints Load-Bearing. Members can’t be removed from a Gordian Club Edition, but that also means permissions can’t be unilaterally revoked. Gordian Clubs don’t have live interactivity, but that means they can’t be censored by a network.
Preserve Exit Through Portability. An ACO that can be freely passed around without network infrastructure is the definition of portability.
Work Offline and Across Time. Gordian Clubs are meant to be used offline; archival is a major use case, allowing access across a large span of time.

There are numerous other technologies that can enable Exodus Protocols. Many of them are puzzle pieces that could be adopted into larger scale solutions. These include:

QR Codes. Data that can be printed or displayed on air-gapped devices and that can be automatically read into other systems.
Bluetooth. Another method for transmitting data when networks are down.
Threshold Signatures. A method of coordination (signing) that typically does not require live interactivity (see the secret-sharing sketch after this list).
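As a minimal sketch of the secret-sharding idea behind such threshold schemes (a toy illustration; a real deployment would use a vetted library), here is Shamir secret sharing in Python: any k of n shares reconstruct the secret, while fewer reveal nothing.

```python
# Toy Shamir secret-sharing sketch over a prime field. Illustrative
# only; production systems should rely on an audited implementation.
import random

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def make_shares(secret: int, k: int, n: int):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[2:5]) == 123456789
print("recovered:", recover(shares[:3]))
```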

Though we haven’t previously used the term, Blockchain Commons technologies are often built as Exodus Protocols:

Animated QRs. Animation extends QRs to allow the automated transmission and reading of larger quantities of data.
Gordian Envelope. Envelopes are built to allow selective disclosure of information without a network. They’re the foundation of Gordian Clubs.
XID. Also built atop Gordian Envelope, XIDs (eXtensible IDentifiers) are decentralized identifiers that are truly autonomous, avoiding the failures of the SSI ecosystem.

From Five Patterns to Six Inversions

The threat of our digital infrastructure being taken away from us is part of a larger issue that I call “The Six Inversions”: the systematic transformation of rights into revocable privileges in the digital world.

In the physical world, we have property rights that assure us of access to infrastructure, but in the digital world, centralized entities can take away that infrastructure at any time for any reason. We have no rights to property, to justice, to transparency, or to exit. Our contractual power is neutered, and our identity is sold. As a result, digital infrastructure is unstable, which is why we need to create infrastructure without infrastructure, in the form of Exodus Protocols.

I’ll write more about the Six Inversions in the future, but for the moment I wanted to point it out as an underlying philosophy for why digital infrastructure is unreliable and must be reimagined.

It’s because we don’t have the rights that we expect from the physical world.

Conclusion

As I said, Exodus Protocols aren’t for everyone or everything, but there are situations and services where they’re critical.

When we’ve identified these cases, we can then deploy Exodus Protocol patterns: operating without dependencies, encoding rules in mathematics, making constraints load-bearing, preserving exit through portability, and working offline across time. This creates a blueprint for infrastructure that holds when everything else fails.

What is your critical infrastructure? What have you spent years building that you would be hurt to lose? What infrastructure’s loss would damage your ability to identify yourself, to communicate, to cooperate, or to collaborate? I’d love to hear your answers and to work with you to design the next generation of autonomous infrastructure.

Bitcoin is just the beginning.

Monday, 27. October 2025

ResofWorld

The metaverse is rewriting the rules of who can be a K-pop star

In an industry studded with megastars like BTS and Blackpink, virtual idols are building powerful new fandoms even while their true identities remain obscured.
Jack, one of four virtual idols in the K-pop group Re:Revolution, sat on a stool in the metaverse — digital worlds where people interact through avatars — sporting a sleek...

Friday, 24. October 2025

ResofWorld

Do India’s EV policies discriminate against China? Experts weigh in

The jury is out on whether India’s policies to aid the growth of its domestic EV industries are anti-China.
China is questioning India’s electric vehicle subsidies on the global stage. The world’s largest EV producer, China has requested the World Trade Organization to settle a dispute with India over incentive...

Thursday, 23. October 2025

OpenID

Public Review Period for Proposed Authorization API 1.0 Final Specification

The OpenID AuthZEN Working Group recommends the following OpenID Final Specification:

Authorization API 1.0: https://openid.net/specs/authorization-api-1_0-04.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a 14-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Final Specifications.

The relevant dates are:

Final Specification public review period: Thursday, October 23, 2025, to Monday, December 22, 2025 (60 days)
Final Specification vote announcement: Tuesday, December 9, 2025
Final Specification official voting period: Tuesday, December 23, 2025, to Tuesday, January 6, 2026 (14 days)

The AuthZEN working group page:  https://openid.net/wg/authzen/.

Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AuthZEN” working group on your contribution agreement), (2) joining the working group mailing list at openid-specs-authzen@lists.openid.net, and (3) sending your feedback to the list.

Marie Jordan – OpenID Foundation Board Secretary


About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.



The post Public Review Period for Proposed Authorization API 1.0 Final Specification first appeared on OpenID Foundation.


Digital ID for Canadians

Statement on Auditor General’s Cyber Security Report

October 23, 2025

The Auditor General’s report released on cyber security of federal government networks and systems reveals vulnerabilities that demand immediate, coordinated action across all sectors and jurisdictions. The findings underscore a fundamental truth: digital security and digital trust are inseparable and require collaboration among government, industry, and civil society.

With 6.6 trillion malicious cyber events blocked in a single year and critical gaps in asset management and incident response, the cost of fragmented approaches is clear. When 80% of eligible federal organizations opt out of available security services, it creates exactly the vulnerabilities that sophisticated threat actors exploit.

These findings demand urgency. The January 2024 breach described in the report, where poor coordination enabled prolonged unauthorized access to Canadians’ personal information, demonstrates that cyber security is not only a technical challenge but also a trust challenge. When government systems are compromised, public confidence in digital services erodes, affecting everything from healthcare access to economic participation.

DIACC President, Joni Brennan, issued the following statement: 

“DIACC has long advocated that secure digital services require interoperable, verifiable trust frameworks that bridge organizational and jurisdictional needs. The Auditor General’s findings reinforce why the DIACC’s Pan-Canadian Trust Framework exists: to provide commonly agreed-on business-problem-solving criteria and certification mechanisms that enable organizations to deliver secure, privacy-protecting services that Canadians can trust.”

The path forward requires collaboration. Federal, provincial, and territorial governments should collaborate with private-sector innovators and civil society to implement the comprehensive cyber security strategy that the Auditor General has acknowledged as sound and necessary. The way forward should also include:

Establishing and adopting trusted services for digital identity verification and access control
Ensuring visibility of verifiable digital assets through proper inventory and management systems
Creating seamless coordination mechanisms for incident response across organizational boundaries
Accelerating the adoption of privacy-preserving and enhancing technologies that protect Canadians while enabling secure service delivery

Both governments and the private sector bring essential strengths to this challenge. Governments provide regulatory frameworks, public mandates, and the reach to serve all Canadians. The private sector contributes innovation, technical expertise, and agile solutions. When we combine these complementary capabilities and work in partnership, we can move with urgency to address the Auditor General’s findings.

DIACC stands ready to support all levels of government and industry partners in implementing the digital trust and verification infrastructure that Canadians need. The time for inaction has passed. The cyber threats documented in today’s report are real, sophisticated, and persistent. Our response must be equally coordinated, comprehensive, and swift.

Canada has the expertise to lead in digital trust and verification. Now we must demonstrate the will to act.

Joni Brennan
President, DIACC


DIF Blog

Steering Committee Election 2025: Candidate Statements

Quick links: Sam Curren (Indicio, US) | Jan-Christoph Ebersbach (Identinet, Germany) | Rouven Heck (Independent, US) | Matthew McKinney (ArcBlock, US) | Doug Rice (Hospitality Technology Network, US) | Markus Sabadello (DanubeTech, Austria) | Eric Scouten (Adobe, US)

Sam Curren (Indicio, US)
Interest and qualifications

Sam Curren has been involved with the DIF since the organization's very origins. With relevant work both in and out of the digital identity space, Sam has been involved in a number of efforts, including DIDComm and its transition from the HL Aries WG into the DIF and subsequent v2 release of the spec. Sam has an MS in Computer Science.

Answers to Questions

What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

Choosing an org path and main goals (either within or without LF) is a main focus. Additional challenges involve balancing promotion of newer technologies while making strong statements about technology and its effects. This requires the balance of a mostly-unopinionated org open to development of new technologies and approaches, while making strong statements and issuing strong guidance on particular issues. This has been a past benefit, but is difficult to navigate. 

If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

I believe we as an org will need to choose what items will flourish under a fully-LF model, and focus on those items. We should seek to become the arm of the LF (or perhaps LFDT) focused on specific topics well aligned with our larger org(s). This will ease the pain of not being able to support a full ED for DIF the way we have in the past.

If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

Our new freedoms will require willing donors, and our advocacy will need to align with our best sources of income. Staying close to our sponsoring orgs will need to become a high priority. This move can allow us to make very strong statements, but will also need to be carefully navigated to maintain our ability to conduct pre-standards-body efforts.

(back to quick links)

Jan-Christoph Ebersbach (Identinet, Germany)
1. What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

I believe DIF's biggest challenge over the next four quarters centers on accelerating real-world adoption of Decentralized Identity. While digital identity has emerged as a game changer—particularly with governments around the world recognizing its transformative potential—and decentralized approaches offer significant advantages over traditional digital identity schemes, we face a critical gap between our technical ideas and practical implementations. Real-world adoption is the cornerstone of success for both DIF and its members, and bridging this gap must be our primary focus.

DIF currently supports adoption through three key channels: developing specifications, providing forums for members to exchange ideas and collaborate through working groups and labs, and maintaining an active presence in conversations with relevant third-party entities. I believe the organization should continue to excel in all three areas. Specifically, I would advocate for greater standardization of our specifications accompanied by reference implementations that make them truly usable for practitioners. By delivering high quality tools alongside our technical standards, we can remove friction from the adoption process and demonstrate the practical value of decentralized identity solutions to organizations considering implementation. This also entails archiving unmaintained and incomplete works and ensuring proper guidance for contributions to strengthen our public image and attract new members.

(back to quick links)

Rouven Heck (Independent)
Interest and qualifications

My background is in computer science and banking. I spent eight years at ConsenSys, where I founded uPort and later served as the identity lead. I have been a DIF board member since its inception (as one of its founders) and served as executive director for several years.
Currently, I work as an independent advisor and board member for a KYC identity-related project while actively conducting research and experiments in AI.
Statement:
As an independent advisor, I currently play a less active role in the identity ecosystem. My main contribution to the Steering Committee and a future Executive Director will be my extensive background in DIF and its collaboration with the Linux Foundation and other organizations.

Questions: 
What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

DIF holds immense potential, but with the industry evolving and key players shifting their focus or exiting, it’s crucial for DIF to solidify its role within the broader ecosystem to maintain its influence. Identity remains highly relevant across many facets of the digital world—empowering individuals, combating fake content, and more—especially as the web becomes increasingly agent-driven.
To achieve this, it’s essential to pinpoint the industry's major pain points where decentralized identity concepts and technologies can offer superior solutions. Additionally, we must identify partners and members who share this mission, foster cross-industry collaboration, and position DIF as the central hub for these efforts.

If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

The key question is: how will the non-commercial ideals and mission of the DIF endure within a market-driven organization? Large corporations heavily influence the Linux Foundation, and their interests often diverge from the DIF's mission. Maintaining a strong and independent voice will be the greatest challenge, especially if management prioritizes the demands of their largest financial contributors. Ideally, the DIF can sustain its independent funding and secure donations or contributions from non-profit organizations that align with its vision. 

If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

In addition to membership fees, DIF should seek independent funding from mission-aligned non-profits. As an independent organization, DIF gains greater flexibility; however, its governance and leadership must ensure that DIF continues to operate effectively and strategically within the industry, as it risks losing some of the credibility that came from its association with the Linux Foundation.

(back to quick links)

Matthew McKinney (ArcBlock, US)


1. What is DIF’s biggest challenge in the next 4 quarters, and how can the organization best help its members rise to that challenge?

Our biggest challenge is converting industry awareness into adoption. While our specs are strong, I believe that potential adopters still face too much friction, and the business value isn't always clear.

To solve this, I will drive a single, shared plan with the Steering Committee and working group chairs focused on three things:

Treat our standards like products. For each priority spec, we will ship an "adoption kit" containing a live sandbox, developer libraries, and clear tutorials. Our goal is to make it possible for a developer to issue their first credential in a day and complete a verifiable action within a week.
Listen to our members and act. We'll run at least quarterly surveys and monthly office hours to identify the top blockers to adoption. We will then reserve roadmap space to fix those issues and publicly report on our progress.
Prove the business value. We will amplify member success stories through case studies that focus on quantifiable ROI: time saved, fraud reduced, and compliance simplified. We will adopt vibe marketing playbooks to ensure our stories are seen in the right locations at the right times. Demonstrating ROI in this investment is core to this.

2. If DIF were to integrate more closely with the Linux Foundation, what should DIF focus on?

Our focus should be to leverage the LF's scale while protecting our speed and flexibility. In collaboration with the Steering Committee and LF counterparts, we would:

Expand our go-to-market reach. We would plug into LF’s marketing, events, and developer relations programs. This would put our members and their solutions in front of a global audience of enterprise buyers and developers, dramatically increasing top-of-funnel awareness. I would also engage our members to identify other members who can help facilitate these activities.
Integrate with the enterprise stack. We would partner with other major LF projects in security, cloud, and AI. The goal is to position DIF as the default, built-in identity layer for C-suite priorities like Zero Trust architecture and supply-chain integrity.
Deliver clear business solutions. We would co-publish reference architectures and live demos that map our technology directly to the challenges faced by CISOs and CTOs, making it easier for them to adopt our work.

Success wouldn't be measured by page views, but by enterprise trials, certified implementations, and real-world deployments.

3. If DIF operated as a fully independent organization, what should be its focus for the next 4 quarters?

As an independent organization, we would need to be relentlessly member-centric and commercially sustainable. This requires a clear, disciplined plan co-owned by the Steering Committee and working group leads.

Focus exclusively on member value. We would use surveys and direct feedback to validate the top 2-3 "jobs-to-be-done" for our members and dedicate our resources to solving them. We would build only what our members need to succeed in production. More value will drive more "skin in the game" participation and membership.
Let our products drive our growth. Our friction-free adoption kits would become our primary marketing tool. We would supplement this with published ROI stories and calculators that members can take directly to their budget owners.
Create sustainable revenue streams. We would introduce new value-add programs that also ensure our long-term health, such as paid conformance and certification, a public directory for verified implementers and auditors, and premium support tiers.

We must be lean, transparent, and completely aligned with delivering measurable outcomes for the organizations that fund our mission.

(back to quick links)

Doug Rice (Hospitality Technology Network, US)
Interest and qualifications

I have been an active participant and supporter of decentralized identity and DIF since the formation of the Hospitality and Travel SIG in 2020 or 2021. For more than two years I have led twice-weekly meetings of the self-attested hospitality travel profile effort, which evolved in spring 2025 to become the Hospitality & Travel Working Group. I was the initial chair of that group and now serve as one of the two co-chairs.

I spent most of my career in senior roles in the hospitality and travel industry. Identity is a critical issue for hospitality and travel and will become more so in the AI era, and I have been a vocal supporter of DIF’s role in addressing the issue. In addition, I bring significant nonprofit management experience, having founded, bootstrapped, and for 13 years led (initially as ED, then as CEO) a highly successful trade association in the hospitality tech space (4500 members globally, $2.5 million budget, 10 staff when I retired in 2015). In that role I got to know most of the senior executives in the industry around the world (hotel and tech vendor), and still have a strong network that I can and do tap into to publicize DIF’s efforts.

I currently sit on several boards and advisory boards for vendors within the hotel tech industry, many of which can benefit (at least in the longer term) from DIF’s and other SSI efforts. I have written extensively on a wide range of topics, with a style designed to explain technical solutions to a business or semi-technical audience, and have spoken hundreds of times at industry events on a wide variety of topics (including self-sovereign identity).

While I have technical pedigrees from the distant past and still understand most relevant technical concepts, I have been a business executive for the past 30 years and will not pretend to have current technical skills. But having spent my professional career at the intersection of tech-speak and business-speak, I am an exceptionally competent translator between the two languages, and have had continued success in explaining complex technical concepts to business leaders so they can evaluate them thoughtfully and meaningfully.

Specific questions

1. What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

I don’t have exposure to everything DIF does so I’m sure my list is not complete or properly prioritized, but based on what I have seen, I would say:

Finding more (and more effective) ways to communicate the value proposition of our work, and the business opportunities it creates, in ways that are clearer to nontechnical or semitechnical business audiences.
Finding ways to increase engagement, membership revenue, and other sources of revenue.
Effectively addressing the perpetual open-source standards issue of convincing users to pay for something that they can get for free.
Confirming or evolving the current legal and organizational structure to ensure alignment with longer-term objectives. This may require ensuring that the objectives themselves align appropriately with the needs of members; even if they were fully aligned in the past, this needs to be continually evaluated.

2. If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

I can’t comment on this directly as I have had very limited exposure to the LF membership and staffing structure. In concept, I believe that nonprofit organizations need to carefully consider each of their operating functions and how best to achieve it, consistent with their mission. If key functions (marketing/communications, human resources, operations, legal, finance/accounting, membership, etc.) can be done more effectively in an umbrella operation, they should be. If on the other hand nuances of the organization’s objectives, membership, community, culture, industry structure, technology needs, or other issues mean that some functions are better run independently, then that choice should prevail.

There is no single answer; when I ran a nonprofit we were always evaluating the options even though we started as, and remained, independent throughout my tenure as CEO. But that was based on our particular mission, industry structure, and level of maturity. Each organization is different and changes over time.

3. If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

The first focus has to be on the sustainability of the financial model: initial funding (or sources thereof) and ongoing operations. Nothing succeeds if you run out of money. Having bootstrapped an organization from $0 revenue and grown it to $2.5 million, I understand this challenge intimately.

The secondary focus needs to be to define DIF’s position within the world of standards organizations. What are the sectors where DIF can win, where should we partner with others that are better positioned, what should we abandon? With limited volunteer resources, it’s critical to ensure that they are directed toward outcomes with the highest probability of success and adoption – and commercial success for contributors.

Within this, the ongoing relationship with LF matters. Linux and W3C are considered the gold standard for open-source ecosystems, and the migration of DIF efforts into W3C standards can amplify the visibility, credibility, and adoption of our efforts. There may be reasons for a divorce, but it needs to be amicable if DIF wants to continue to leverage LF’s credibility to spur adoption of its work.

(back to quick links)

Markus Sabadello (DanubeTech, Austria)

I think the biggest challenge for the future has to do with the question that I usually also asked during the recent series of interviews with Executive Director candidates. This question is how DIF can find the right balance between maintaining and growing its membership, and keeping its traditional open, un-opinionated, grassroots culture. The first objective may require more direct answers and more concrete technological choices to match the real-life needs of governments and corporations, while the second objective may sometimes stand in the way of that. This balance could of course be changed at some point, if we as DIF decide this together. In other words, there shouldn't be any dogmatic, religious rules about DIF's orientation. DIF members are very diverse when it comes to their motivations for participating, and this needs to be taken into account.

In terms of technical work, since everybody is talking about AI now, we should clearly make sure that we are a well-known actor when it comes to decentralized identity topics within the AI field. But I think we already have strong members and contributors in this field.

If DIF were to integrate more closely with the rest of the Linux Foundation, I think that while there will be changes in operational, financial, etc. aspects, DIF wouldn't necessarily have to change much content-wise. It might be smart however for DIF to align some of its Working Groups and Work Items more closely with other LF projects such as ToIP, Open Wallet Foundation, to reduce overlap and market confusion. We could potentially propose to move some Work Items from other LF projects into DIF, if we feel they would fit better (e.g. DID method specifications).

If DIF became bootstrapped as a completely independent organization, we would initially be busy for a while with adjusting, migrating, etc. the various infrastructure and processes. But I think such a step would also bring a certain new "freshness" to DIF, raise curiosity, and maybe new attention from actors and communities that we didn't previously expect. Content-wise, I feel like it would give us even more freedom, and a more independent perception. We should then focus on a really good website, cleaned up repositories, documentation, videos, etc., in order to make it easy for newcomers to feel comfortable.

(back to quick links)

Eric Scouten (Adobe, US)
Interest and qualifications

I'm excited to stand for election for DIF's Steering Committee. Many of you know me through my role as Identity Standards Architect at Adobe. In this role, I help bridge the gap between content creators and their audiences. It is far too easy in this era to tell false stories about who created content and to use misinformation and disinformation to confuse and mislead.

The work of the content provenance ecosystem aims to provide more transparency about who is creating and distributing digital media, and to make it easier for authentically created content to stand apart from content that intends to mislead. A key part of this ecosystem is the use of individual and organizational identity credentials – tools that allow creators, publishers, and distributors to prove who they are and to establish trustworthy connections with their audiences. By combining provenance metadata with verifiable identity, we can build a stronger foundation for trust across the digital media landscape.

I’ve been honored to help lead this effort as co-chair of the Creator Assertions Working Group (CAWG) and to contribute to real-world adoption by implementing CAWG standards in open-source work sponsored by Adobe.

Earlier this year, CAWG became part of DIF, and the collaboration has already benefitted greatly from the expertise and feedback of the broader DIF membership. This partnership has strengthened CAWG’s work and, I believe, represents the kind of cross-community engagement that makes DIF so valuable.

I am eager to support DIF and to help ensure that DIF continues to be a vibrant and trusted home for innovation in decentralized identity and related ecosystems.

Response to questions

I haven't yet formed a position on whether DIF should align more deeply with the Linux Foundation, go it alone, or maintain status quo. I could make arguments for any of these paths and of course there are many factors that will go into making a well-considered choice. Top of mind for me is thinking through financial viability for DIF and its members – in other words, how do we encourage enough members to sign up for paid memberships to pay our staff and our bills, traded against the need to make participation appealing and feasible for a wide variety of companies, non-profits, and individual members.

(back to quick links)


FIDO Alliance

FIDO Webinar: Designing Passkeys for Everyone: Making Strong Authentication Simple at Scale

Attendees joined this webcast to hear members of the FIDO Alliance’s UX Working Group explore the critical UX considerations in designing and deploying passkeys at scale, from initial user onboarding to seamless cross-device synchronization.

Speakers from Google, Microsoft, and HID discussed how to address the challenges of simplifying complex security concepts for everyday users and offered valuable insights into the future of authentication.

Speakers shared insights about the key UX decisions, user research findings, and design strategies that are shaping the adoption of passkeys, and how the FIDO Alliance is working to make online security both powerful and effortless.

The Design Guidelines for Passkey Creation and Sign-ins are available at https://www.passkeycentral.org/design-guidelines/

Speakers included:

James Hwang, Microsoft
Mitchell Galavan, Google
Adrian Castillo, HID

Digital ID for Canadians

FCT’s Client ID Verification Achieves DIACC PCTF Certification: Enhancing Digital Trust for Legal Professionals

Toronto, October 23, 2025 – We are thrilled to announce that FCT Client ID Verification has been certified against the Pan-Canadian Trust Framework™ (PCTF) Verified Person Component at LOA2. Established in 2012, DIACC is Canada’s largest multistakeholder organization exclusively addressing digital trust and identity verification, fostering confidence and consistency in the market through its internationally recognized framework and standardized third-party conformity assessment program.

Legal professionals today face a daunting task: safeguarding sensitive client information while navigating ever-changing regulatory landscapes as technology evolves rapidly. To address this, FCT’s Client ID Verification solution has successfully achieved certification against the DIACC Pan-Canadian Trust Framework (PCTF), ensuring its solution meets the highest bar for security and trust.​ 

The PCTF strengthens digital trust relationships across the Canadian ecosystem through legal, policy, and technical requirements that organizations agree to adopt, including best practices, policies, technical specifications, guidance, regulations, and standards that prioritize interoperability, privacy, security, and the trustworthy use of digital identity and personal data.

“Earning certification against the Pan-Canadian Trust Framework (PCTF) demonstrates FCT’s dedication to protecting Canadians’ personal information while supporting the legal community’s shift toward digital transformation,” said Joni Brennan, President of DIACC. “This achievement shows how trusted digital identity solutions can simplify compliance, strengthen security, and make digital services safer for everyone.”

Purpose-built for Canadian legal professionals, FCT’s Client ID Verification is a quick, secure, digital solution designed to authenticate client IDs in minutes, whether in person or online. By aligning with the PCTF, we can ensure personal data is managed, stored and secured with the highest standards of protection, so that legal professionals can always move forward with confidence.​ 

This certification guarantees that legal professionals’ client data is handled with the highest level of trust and security. FCT’s Client ID Verification meets KYC obligations, providing consistent quality that reduces risk, builds trust and helps legal professionals adopt digital workflows to meet regulatory requirements. ​ 

​Seamless workflow integration is at the heart of our solution. By integrating into the tools legal professionals already use and minimizing manual data entry, ID verifications can be completed efficiently while maintaining accuracy and compliance. Clients retain control of their information on their own devices, stored in their secure digital wallet, streamlining the verification process and reducing duplicate data entry errors. ​ 

“FCT’s completion of the DIACC Pan-Canadian Trust Framework (PCTF) certification demonstrates our commitment to delivering best-in-class identity verification and fraud detection products that empower legal professionals to work with confidence. Legal professionals trust FCT to help them navigate an increasingly risky environment. FCT’s Client ID Verification meets the highest national standards and delivers on our core brand promise of supporting legal professionals with secure digital workflows that drive efficiency and build trust with our clients at every step.” – Ryan Lambert, Vice President, Data and Strategy.

With FCT’s Client ID Verification, legal professionals can feel confident knowing they are using a solution built to the highest national standards, helping them manage sensitive information responsibly, reduce audit failure risk and keep pace with regulatory changes. By providing clear benchmarks for ID verification solutions, we are working to support the digital transformation of the legal sector, while ensuring functional benefits like compliance, reduced risk and workflow efficiency. ​ 

About DIACC
Established in 2012, DIACC is a non-profit strategic alliance of public and private sector members committed to advancing full and beneficial participation in the global digital economy by promoting PCTF adoption and conformity assessment. DIACC prioritizes personal data control, privacy, security, accountability, and inclusive people-centred design. To learn more about DIACC, please visit https://diacc.ca/

About FCT
Since introducing title insurance to Canada in 1991, FCT has continued to lead the way with innovative solutions that simplify and streamline the real estate process. Today, they have significantly expanded their offerings to become leaders in property intelligence, residential and commercial solutions, residential lending solutions, and asset recovery. Their diverse lines of business support partners and customers across the entire real estate lifecycle.

About FCT Client ID Verification
FCT Client ID Verification provides a quick, secure digital solution for legal professionals to meet their FINTRAC and KYC obligations effortlessly. Lawyers and notaries can authenticate a client’s identity using technology that performs many types of validation directly on the client’s smartphone: biometric matching between a live selfie and portraits extracted from identity documents, analysis of government-issued identity documents (including cryptographic verification of ePassports), and checks against a multitude of compliance data points, including credit bureau and mobile account verification. In minutes, the verified credential is issued and stored in the client’s mobile phone’s secure digital wallet, and the report can be accessed directly by the legal professional from FCT’s ID verification portal.

Media Contact:
Joni Brennan
President
Digital ID and Authentication Council of Canada (DIACC)
communications@diacc.ca


OpenID

OpenID Connect Core 1.0 now published as ITU standard


We’re pleased to announce that Recommendation X.1285, incorporating the OpenID Connect Core 1.0 – errata set 2 specification, has been officially published by the International Telecommunication Union.

Following the formal adoption in April 2025, which we announced in May, the specification is now publicly available.

This publication marks a significant milestone as the first OpenID Foundation specification to achieve ITU standard status, placing OpenID Connect Core alongside other internationally recognized identity specifications, such as OSIA (ITU-T X.1281) and FIDO2 (ITU-T X.1277.2 and ITU-T X.1278.2).

What this means

The publication of ITU-T X.1285 provides implementers with even greater confidence in the stability and international recognition of OpenID Connect Core. With the standard now used by billions of people across millions of applications, this ITU recognition enables:

Enhanced global interoperability
Stronger foundation for regulatory compliance
Increased adoption in jurisdictions requiring internationally recognized standards
Clear pathway for national policies and digital identity frameworks

Thank you

This achievement represents years of collaborative work between the OpenID AB/Connect Working Group and national delegations worldwide. We extend our gratitude to everyone who contributed to making this vision a reality.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net

 

The post OpenID Connect Core 1.0 now published as ITU standard first appeared on OpenID Foundation.


ResofWorld

Malawi’s new farmhand: AI that speaks the local language

Ulangizi AI combines ChatGPT’s power with traditional know-how and government data to help crops — and families — flourish.
Alifosina Mtseteka’s family has been farming for generations in central Malawi, but she recently ran into a pest infestation that had her stumped. The okra she grows along with beans...

Wednesday, 22. October 2025

EdgeSecure

EdgeCon Autumn 2025

AI in Action: Real-World Applications and Outcomes of the New Higher Education Paradigm

On October 9, 2025, EdgeCon Autumn, hosted in partnership with Rider University, brought together higher education technology leaders and professionals from across the region for a day dedicated to accelerating institutional modernization. From cybersecurity and cloud strategy to campus networks and AI-driven student support, the event offered deep dives into the most pressing challenges and opportunities facing colleges and universities today. Through an engaging keynote panel and a full slate of breakout sessions, attendees explored emerging technologies, exchanged actionable insights, and built meaningful connections with peers and industry-leading vendors committed to driving transformation across the higher ed landscape.

Responsible Innovation in the Age of AI

As artificial intelligence and emerging technologies rapidly transform our world, innovation must evolve beyond efficiency and novelty to reflect deeper human, ethical, and environmental priorities. Among the morning’s breakout sessions was Designing for the Whole: A Multidimensional Framework for Responsible Innovation in the Age of AI presented by Michael Edmondson, Associate Provost, New Jersey Institute of Technology (NJIT). He introduced a multidimensional framework for responsible innovation organized around four core domains: Performance & Design, Creative & Cognitive Dimensions, Human-Centered Values, and Ethical & Governance Principles. Each dimension includes four key attributes—from Functionality and Originality to Empathy and Integrity—that collectively offer a holistic model for evaluating and guiding innovation in the AI era.

Protecting your Data and Reducing Institutional Risk

The limitations of legacy on-premise ERP systems are increasingly evident as cybersecurity threats grow and data regulations evolve. In Protecting your Data and Reducing Institutional Risk: SaaS ERP vs. On-Premise System, Stephanie Druckenmiller, Executive Director, Enterprise Technologies, Northampton Community College, and Bryan McGowan, Workday Principal Enterprise Architect, Workday, explored how shifting to a modern SaaS ERP like Workday can strengthen data protection, reduce institutional risk, and ensure long-term compliance. Druckenmiller and McGowan compared SaaS and on-premise systems in terms of governance, cybersecurity, and regulatory alignment, to show how a unified cloud platform enables real-time visibility, audit readiness, and consistent policy enforcement.

The session also highlighted how AI and automation built into SaaS ERPs proactively detect and mitigate risks—capabilities often lacking in older systems. Attendees learned how Workday supports institutional resilience through faster recovery, ongoing updates, and simplified compliance with emerging legal standards, and gained practical strategies for protecting data and managing risk in the years ahead.

“Very informative. The vendors were all great”

– Cherri Green
Procurement Coordinator
Princeton Theological Seminary

Using AI to Improve Data Accessibility

Bharathwaj Vijayakumar, Assistant Vice President, and Samyukta Alapati, Associate Director from Rowan University’s Office of Institutional Research and Analytics, shared insights into one of their key initiatives: an AI-powered tool designed to give faculty, staff, and administrators faster, easier access to real-time institutional data, with no coding or complex reporting required. Rowan’s users can ask questions like, “How many students are enrolled in a specific program this term?” or “Which majors are growing the fastest?” and receive immediate, accurate answers. The goal is to eliminate technical barriers and put actionable data directly into the hands of those who need it for advising, planning, and decision-making.

While the tool runs on Python, ThoughtSpot, and web technologies behind the scenes, the user experience is designed with simplicity and usability in mind. During their session, attendees experienced live demonstrations and left with practical strategies for improving data accessibility and increasing operational efficiency within their own institutions.

Evolving Toward an AI-Enabled Data Ecosystem

For institutions aiming to keep pace with the demands of digital transformation, modernizing fragmented data systems is a critical first step. In From Data Chaos to Clarity: Evolving Toward an AI-Enabled Data Ecosystem, Randy Vollen, Director of Data & Business Intelligence, Miami University, and Jon Fairchild, Director, Cloud & Infrastructure, CBTS, shared insights from a recent data modernization initiative focused on building a cloud-first infrastructure, creating scalable reporting environments, and preparing for AI-driven use cases.

The presentation discussed the non-exclusive implementation approach that used commercially available platforms to support data integration across enterprise systems, including HR and financial systems. This strategy led to improved internal data coordination, more consistent access to analytics, and a solid foundation for the responsible adoption of AI and automation technologies. Their experience offered attendees a clear blueprint for driving data transformation across complex institutional landscapes and lessons learned from integrating enterprise platforms to streamline analytics.

"Appreciate these opportunities to gather and share knowledge.”

– Jeff Berliner
Chief Information Officer
Institute for Advanced Study

Cybersecurity Maturity Model Certification Framework

Bobby Rogers, Jr., Virtual Chief Information Security Officer, Edge, shared a practical, leader-focused overview of the Cybersecurity Maturity Model Certification (CMMC) framework, explaining why it matters even beyond Department of Defense-funded projects, and what higher education leaders need to do to prepare. Featuring real-world case studies, this presentation highlighted the actual risks of non-compliance and the chance to take the lead with Edge’s scalable cybersecurity solutions.

Attendees reviewed real-world examples of costly non-compliance, gained clarity on the requirements of CMMC 2.0 and its alignment with frameworks like GLBA and NIST 800-171, and explored how Edge supports institutions in navigating challenges unique to higher ed environments. The session concluded with an actionable roadmap to help campuses assess their current posture and begin preparing for future compliance requirements.

Designing Digital Learning Environments that Are Accessible, Equitable, and Sustainable

In response to the federal mandate that all public institutions comply with revised Title II of the Americans with Disabilities Act by April 2026, The College of New Jersey (TCNJ) has launched a coordinated initiative to improve the accessibility of digital course materials and online environments. TCNJ’s Judi Cook, Executive Director, Center for Excellence in Teaching and Learning; Ellen Farr, Director of Online Learning; and Mel Katz, Accommodations Support Specialist for Curriculum and Assessment, led the breakout session Beyond Compliance: Designing Digital Learning Environments that Are Accessible, Equitable, and Sustainable.

Rather than approaching compliance as a legal checkbox, TCNJ has framed the work to fundamentally improve student and faculty experiences through inclusive design, transparency, and collaboration. This presentation shared a case study in progress, tracing their institutional journey from grassroots collaboration and capacity-building to structured, strategic initiatives. The session also highlighted sustainable strategies for advancing accessibility and faculty development through systemic change and the importance of approaching accessibility not as a project with an endpoint, but as a continual part of the digital transformation of teaching and learning.

A Proactive Approach to Student Success

An expert panel from the College of Health Care Professions, led by David Bent, Vice President, Digital Services, Online; Joshua Mouton, CHCP BI/Developer; and moderator Ross Marino, Account Executive, Proactive AI Agent Specialist, NiCE, shared how the organization drove conversions and improved student outcomes with Proactive AI Agent. Attendees got the inside scoop on their approach, including details of the initial build, guardrails, and how they’re continuously improving journeys with data-driven enhancements. They also highlighted how they used this innovative technology to not only create excellent student experiences but also find opportunities for synergy within their organization.

Empowering Decision-Making and Driving Efficiency with Tableau Online

In the dynamic environment of higher education, data-driven decision-making is not a luxury; it's a necessity. Data in Action: Empowering Decision-Making and Driving Efficiency with Tableau Online, led by Community College of Philadelphia’s Moe Rahman, AVP/CIO, and Laura Temple, Associate Director, explored how their community college leveraged Tableau to transform raw institutional data into interactive, insightful dashboards across key business areas including enrollment management, finance, student services, and academic affairs. By centralizing data visualization and analysis, they were able to empower stakeholders with real-time insights that drive efficiency, support strategic planning, and uncover opportunities for process improvement.

"Erin, Adam, and the entire team were outstanding. Great sessions too.”

– Ilya Yakovlev
Chief Information Officer
York College of PA

Protecting Privacy in the Age of AI Infused Pedagogy

With the increasing adoption of AI in educational environments, there are critical privacy and security considerations that arise. Teresa Keeler, Project Manager, NJIT, led the session, Protecting Privacy in the Age of AI Infused Pedagogy, and explored various ways AI is being utilized in education, from personalized learning platforms and intelligent tutoring systems to automated assessment tools, content generation, and administrative analytics. In higher education, this extends to research support, student success prediction, and advanced pedagogical tools.

Key concerns for many organizations include data storage, access protocols, the risk of de-anonymization, and the need to align with relevant data privacy regulations. Keeler discussed this “data dilemma” and the types of sensitive student and interaction data collected by AI tools. She also delved into the cybersecurity threats posed by AI, such as data breaches and sophisticated phishing attacks, and the challenge of AI-generated misinformation and its impact on academic integrity. Attendees learned about a proactive, multi-step approach for responsible AI integration, including developing clear institutional policies, conducting vendor vetting and providing comprehensive training for faculty and staff.

Modernizing Cybersecurity in Higher Ed

Modernizing Cybersecurity in Higher Ed: How Stevens IT Transformed User Risk Management explored how Stevens Institute of Technology overhauled its cybersecurity training by replacing outdated, static modules with a real-time, adaptive approach to user risk. Jeremy Livingston, CISO at Stevens, and David DellaPelle, CEO of Dune Security, discussed the implementation of Dune Security’s User Adaptive Risk Management platform, which enabled role-based testing and tailored training for faculty, staff, and students in response to increasingly personalized social engineering threats.

The session detailed how Stevens eliminated generic compliance training in under a month, introduced individual and departmental risk scoring, and integrated the platform with Workday and Okta to monitor user behavior and access. Attendees walked away with a blueprint for shifting from traditional awareness programs to action-oriented strategies, illustrating how educational institutions can build scalable, human-centered cybersecurity defenses.

"It was a nice event to attend and great to see some faces I hadn't seen in a while.”

– Ron Loneker Jr.
Director, IT Special Projects
Saint Elizabeth University

Real-World Applications and Outcomes of the New Higher Education Paradigm

EdgeCon’s keynote panel, AI in Action: Real-World Applications and Outcomes of the New Higher Education Paradigm, explored how artificial intelligence is actively transforming higher education, from teaching and research to campus services and operations. Featuring senior campus leaders Jeffrey Rubin, Senior Vice President for Digital Transformation and Chief Digital Officer, Syracuse University, and Devendra Mehta, Digital Strategy and Data Analytics Officer, Fairleigh Dickinson University, the session showcased real-world case studies and data-driven strategies that demonstrate AI’s measurable impact across institutions.

The panelists shared practical insights on implementing AI at scale, highlighting lessons in policy development, digital strategy, and return on investment. Attendees gained actionable guidance on navigating the evolving AI landscape in academia, with a focus on delivering sustainable, high-impact solutions in today’s digital-first education environment.

Rethinking AI Readiness, Risk, and Responsibility

Higher education faces a pressing question: Are we truly ready to harness AI effectively, responsibly, and sustainably? In this breakout session, Nandini Janardhan, Programmer Analyst/Applications Manager, Fairleigh Dickinson University, and Sahana Varadaraju, Senior Application Developer, Rowan University, challenged institutions to go beyond AI awareness and critically assess their true readiness for responsible and sustainable adoption. They guided attendees through a comprehensive AI readiness framework covering technical infrastructure, institutional culture, and governance practices.

Participants learned to identify key barriers, ranging from financial constraints to ethical concerns, and evaluate sustainability through the lenses of equity, environmental impact, and algorithmic fairness. The session emphasized that effective AI implementation in higher education requires more than technology; it demands strategic alignment, thoughtful governance, and tailored solutions. Attendees left equipped with practical tools, including a self-assessment checklist and roadmap template, to begin or refine their institution’s AI journey.

"The breakouts were absolutely spectacular”

– Keri Salyards
Instructional Technologist
Mount Aloysius College

Turning Process, Architecture, and Data into Institutional Advantage

Strategic Foundations for AI: Turning Process, Architecture, and Data into Institutional Advantage debunked the myth that AI can be seamlessly integrated into higher education without foundational preparation. Instead, presenters emphasized that sustainable AI success starts with process clarity and disciplined system design. By mapping institutional operations across the student lifecycle and aligning enterprise architecture with mission, colleges and universities can create the strategic groundwork needed for AI to drive real impact.

Attendees learned about the importance of leadership in demanding alignment before adoption, treating data as a strategic asset through governance of the "Five Vs,” and preparing for real-time decision-making via HTAP platforms. Without these foundations, AI is a distraction; with them, AI becomes a catalyst for competitiveness, innovation, and student success.

Faculty-Informed Strategies to Improve Online Course Development

Based on qualitative research with faculty who collaborated on online course design, From Research to Practice: Five Faculty-Informed Strategies to Improve Online Course Development outlined five research-backed, actionable strategies to improve online teaching effectiveness and reduce faculty resistance. MaryKay McGuire, Ed.D., Learning Experience Designer, Siena College, and Danielle Cox, M.Ed., shared strategies to integrate adult learning theory into instructional design without overwhelming faculty, recommendations for improving collaboration between course designers and instructors, and ideas for scaling faculty support as AI and automation reshape online teaching. This session bridged the gap between institutional priorities and lived faculty experience, offering a strategic and sustainable model for instructional improvement.

Solving the AI Faculty Development Puzzle

In Putting the Pieces Together: Solving the AI Faculty Development Puzzle, presenters Carly Hart, Director, Instructional Design & Technology, and Naomi Marmorstein, Associate Provost for Faculty Affairs, from Rutgers University-Camden explored the institution’s challenges and successes when implementing year-long, campus-wide AI faculty development programming. They shared how they navigated a wide spectrum of faculty attitudes, from enthusiastic early adopters to those who view generative AI as a fundamental threat to academic integrity. Their experience underscored that one-size-fits-all approaches fall short; instead, effective faculty development must address diverse pedagogical needs, discipline-specific concerns, and deeper philosophical questions around authorship, creativity, and knowledge creation in the age of AI.

Demystifying AI Adoption in Higher Education

For institutions looking to move beyond AI buzzwords and into real-world impact, the collaborative session, AI Unlocked: Resources, Policy, and Faculty Training, aimed to demystify AI adoption in higher education from three critical vantage points. Dr. Forough Ghahramani, Associate Vice President for Research, Innovation, and Sponsored Programs, Edge, kicked things off with an insider’s tour of the National AI Research Resource (NAIRR) Pilot, an invaluable toolkit now available to educators and researchers nationwide. John Schiess, Technical Director, Office of Information Technology (OIT), Brookdale Community College, then explored institutional AI policy and regulation and shared actionable strategies for crafting guidelines that support innovation while managing risk.

Rounding out the session, Michael Qaissaunee, Professor and Co-Chair, Engineering and Technology, Brookdale Community College, revealed lessons learned from piloting faculty training programs designed to boost AI literacy and spark creative teaching applications. Attendees gained practical insights and walked away with curated instructional materials and resources to jumpstart their own AI journeys.

Learning Experience Design and Design Thinking Together

Brian Gall, Director, Learning Experience Design, Villanova University, examined the strategic expansion and integration of specialized design roles within Villanova University's Office of Online Programs in the session An Emerging Trend: Learning Experience Design and Design Thinking Together. Drawing from their organization’s structure, the session explored the distinct yet complementary roles of each design team member: Learning Experience Designers who focus on holistic student journey mapping and engagement strategies; Multimedia Experience Designers who create immersive, interactive content that enhances cognitive load and retention; and Instructional Designers who ensure the learning management system and learning technologies work together to achieve the goals of the faculty member.

Recognizing that not all institutions have the resources for extensive staffing, the session concluded with role hybridization models, technology solutions that amplify individual capacity, and practical strategies for implementing similar frameworks with smaller teams. Attendees also gained concrete tools for assessing their own organizational needs, building compelling cases for design team expansion, and implementing design thinking approaches regardless of team size.

"As always a great conference and networking event! Fantastic job done by the entire Edge Team! Thank You!”

– Ron Spaide
Chief Information Officer
Bergen Community College

Modernizing Virtual Desktop Delivery

Choosing the right virtual desktop solution is a critical yet complex decision for any institution. In this session, Chris Treib, Vice President of Information Technology, Geneva College, shared an insightful look into the college’s journey transitioning from VMware to Apporto. He shared Geneva College’s experience evaluating different virtual desktop approaches, the specific challenges they faced, and the factors that ultimately influenced their decision to explore alternatives.

The session offered valuable, real-world takeaways for IT leaders exploring or undergoing similar transitions. Attendees gained practical lessons on managing migration, evaluating platforms, and understanding the trade-offs involved. With a focus on outcomes and institutional fit, this session equipped decision-makers with the knowledge and confidence to assess their own virtual desktop strategies more effectively.

Path to GLBA Compliance

In the spring of 2024, Saint Elizabeth University was required to put compliance measures for the Gramm-Leach-Bliley Act (GLBA) in place to satisfy the FTC Safeguards Rule. During A Small University's Path to GLBA Compliance, Ron Loneker, Jr., Director, IT Special Projects at Saint Elizabeth University, presented how the university responded to its auditors’ requests and how its program was cleared by the auditors and the Federal Student Aid Office. Following the presentation and Q&A, the session opened into an engaging discussion where other institutions shared their own challenges and experiences in working toward GLBA compliance.

Artificial Intelligence in Higher Education

As AI rapidly reshapes the academic landscape, it offers both transformative potential and pressing challenges. Artificial Intelligence in Higher Education: Threat or Opportunity? explored the four “evils” of artificial intelligence in higher education: The Hero, The Career Terminator, The Academic Cheat, and The Intel Spiller. Through real-world examples, the presentation examined how AI is impacting teaching roles, academic integrity, and data privacy, while also highlighting opportunities to enhance learning and streamline operations. This session also equipped attendees with practical strategies for ethical adoption, increased transparency, and meaningful collaboration, empowering institutions to leverage AI as a force for good rather than a disruptive threat.

Thank you Exhibitor Sponsors

The post EdgeCon Autumn 2025 appeared first on Edge, the Nation's Nonprofit Technology Consortium.


ResofWorld

A lithium bust leaves Latin American towns in the dust

With plummeting prices and slowing demand for EVs, the once-thriving mining towns of Argentina, Chile, and Bolivia are now struggling.
Tolar Grande, a windswept settlement perched at 11,500 feet above sea level in northern Argentina, once received little more than a trickle of visitors. Then, in the late 2010s, hostels...

Blockchain Commons

2025 Q3 Blockchain Commons Report


It has been both an innovative and busy quarter at Blockchain Commons. Here’s some of the main things that we worked on this summer:

Join the Club: Introducing the Gordian Club · The Club Made Reality · Next Step: Hubert
New Dev Pages: Provenance Marks · Gordian Clubs · Hubert Again! · Other Recent Work
Presentations: Swiss e-ID · TABConf 7
FROST Updates: FROST & Bitcoin · FROST Demos · FROST Verify
ZeWIF Updates: Error Handling · Back to the Stack
Work in Progress: The Architecture of Autonomy · Learning FROST from the Command Line · XID Tutorials

Join the Club!

We’ve been introducing a lot of new ideas in the last year, and our newest is the Gordian Club, an autonomous cryptographic object (ACO)—though it’s based on ideas that go back decades!

Introducing the Gordian Club. So what’s a Gordian Club? The abbreviation ACO says it all: a Gordian Club allows for the autonomous and cryptographically protected distribution of information that can be updated over time. You don’t have to depend on a network, and you can’t be censored as a result! A Gordian Club provides resilience when systems are under stress and can support use cases such as disaster relief, protecting data over extended periods of time, and keeping data private in the face of corporate or government espionage. Read our introductory Musings on the topic for a lot more.

The Club Made Reality. The Gordian Club isn’t just a theory! We’ve released a Rust library and a CLI app so that anyone can test out the functionality and see its potential. Our October Gordian Meeting also included a full demo and presentation.

Next Step: Hubert. If you don’t depend on networks, how do you exchange Gordian Clubs? The beauty is that you can do so in any way you see fit. You certainly can transmit over the network, with a service like Signal offering a secure, private way to do so. But you could also mail a thumb drive or even print a QR code in the newspaper. We suspect the best methods will be the most automated, so we’re designing a “dead-drop” server that you can use to exchange Clubs. We call it Hubert after the “berts” of information that were exchanged in Project Xanadu (which was the inspiration for our own work on Gordian Clubs). Hubert is one of several works that we have in process as we page over to the new quarter, so more on that at year end!

New Dev Pages

The release of a number of new innovative technologies has resulted in the addition of new pages for developers. These are the places to go for the overview of our newest work and the links to all the details.

Provenance Marks. Provenance Marks provide a cryptographically-secured system for establishing and verifying the authenticity of works. Not only can you see that something was authentically signed, but you can also trace changes through a chain of authenticated provenances. We introduced provenance marks last quarter at a Gordian Developer Meeting, but we’ve now got a provenance mark developer page that has a lot more of the details. (A toy sketch of the general chaining idea appears at the end of this list of pages.)

Gordian Clubs. Our newest work, on Gordian Clubs, has a whole hierarchy of developer pages, including pages on why we think autonomy is important, how Gordian Clubs use our stack, how ocaps support delegation, and the history of the Clubs idea.

Hubert Again! We’ve also created a single page for Hubert, our info dead-drop hub. Though one of its earliest use cases is to distribute Gordian Clubs, it might also be used in other Multi-party Computation (MPC) scenarios such as FROST.

Other Recent Work. Our developer pages on cliques and XIDs are slightly older, but if you want to review the new ideas we’ve been presenting in the last year, those are the other two pages to look at! We also just expanded our Envelope pages with a look at permits, which is a fundamental feature of Gordian Envelope, but one that’s never had its own page before.
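
As promised above, here is a toy Java sketch of the general chaining idea behind provenance tracing, in which each mark commits to its predecessor so that tampering with any earlier entry breaks every later link. This is emphatically not the Provenance Mark format itself, just the generic hash-chain concept; see the developer page for the real specification.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ToyProvenanceChain {
    static String sha256Hex(String input) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(digest.digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Each mark commits to the previous mark, so the chain's head transitively
        // authenticates the whole history of the work.
        String mark1 = sha256Hex("genesis:work-v1");
        String mark2 = sha256Hex(mark1 + ":work-v2");
        String mark3 = sha256Hex(mark2 + ":work-v3");

        // A verifier recomputes the chain from the published history and compares
        // it against the latest mark; any edit to v1 or v2 changes mark3.
        System.out.println("latest mark: " + mark3);
    }
}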

Presentations

We were thrilled that Christopher Allen was asked to present to two different groups, just as the quarter turned.

Swiss e-ID. Christopher has been talking with officials regarding the recently approved Swiss e-ID digital identity for a few months. On October 2, following the referendum’s approval by the Swiss people, he was invited to give a presentation on “Five Anchors to Preserve Digital Autonomy & Democratic Sovereignty”. We’ve also published an article that synopsizes the main points.


TABConf 7. Christopher was also invited to help kick off a digital-identity track at TABConf, a technical Bitcoin conference. He made two presentations there, on “Beyond Bitcoin: Engineering Exodus Protocols for Coordination & Identity” and on “The Sad State of Decentralized Identity (And What to Do About it)”. (No videos of these ones, sadly!)

FROST Updates

We’ve been delighted to continue our work with FROST this year thanks to a grant from HRF. We made some big progress in Q3.

FROST & Bitcoin. ZF FROST is perhaps the best FROST library out there (thanks to its completeness, usability, and security review). As the name indicates, it was created with Zcash in mind. We thought that one of the big things we could do with our HRF grant was to bring that capability to Bitcoin. To support that, we issued a PR for ZF FROST to support secp256k1, then created our own branch to support the tweak needed to send Bitcoins with Taproot. Together, these two updates provide everything you need to sign Bitcoin transactions with a FROST group.

FROST Demos. Putting together the puzzle pieces for FROST signing can still be a little complex, so we’ve created some demos for how it works. The demo for Trusted Dealer was held at our August Gordian Developers meeting (also see our step-by-step walkthrough). We then produced an additional video for signing after Distributed Key Generation (again, also see the step-by-step walkthrough).

Videos: TD Signing CLI Demo and DKG Signing CLI Demo.

FROST Verify. It turns out that there aren’t great tools for verifying FROST signatures, so we created one. This is made specifically to work with the FROST cli tools that are part of the ZF FROST Package.

ZeWIF Updates

Speaking of Zcash, we’ve also done a bit more work on ZeWIF, our Zcash Wallet Interchange Format, which was a major focus at the start of the year.

Error Handling. Our biggest mark of success for a project is when it begins to come into wider use, because we’re not trying to create theory at Blockchain Commons but to create specifications that make real-life users more independent. So when a request came over from Zcash’s Electric Coin Company to redo how we report error messages in ZeWIF, we were happy to do so. As a result, all of the ZeWIF-related crates were updated in Q3.

Back to the Stack. This led to general updates across our entire stack, to move away from anyhow for error reporting (except in apps such as CLI tools). This was part of the continual updating of our stack that we do to keep it clean and ready to use (the last was in early July, this one in September). There were also documentation updates and light code cleanups that occurred here and there as part of this work.

Work in Progress

Although that feels like a healthy amount of work for the quarter, we also put considerable work into other projects that have not yet seen completion.

The Architecture of Autonomy. We mentioned last quarter that Christopher was invited to speak at the Bitcoin Policy Summit. That got him thinking about a lot of big picture stuff concerning what works and what doesn’t for identity online. We’ve worked through a few different iterations of a major policy work on the topic, at the moment called “The Architecture of Autonomy,” but have only shared it with a few who are major movers in the area of digital identity (if that’s you, drop Christopher a line!).

Learning FROST from the Command Line. Learning Bitcoin from the Command Line has long been one of our most successful projects, so when we pitched HRF last year on continuing our FROST support, we suggested that we create a similar (but much shorter) tutorial for FROST: Learning FROST from the Command Line. We’ve drafted a bit more than half of the course so far (chapters 1, 2, and the first part of 3), so we’re definitely not done, but if you want to get a head start, you can look at it now. We’ll be finishing this up before year’s end.

XID Tutorials. Finally, we should point again to the XID core concepts docs that we prepared in the previous quarter, the first part of our XID Quickstart. The concepts docs are solid and a great hands-on look at much of our stack. The linked tutorials are still in process. (Another topic for Fall or maybe Winter, as Learning FROST is before it in our tech-writing queue).

That’s it for this quarter. We hope you’re excited by some of the new work we’re doing (such as Gordian Clubs and Hubert) and some of our newest presentations. If you’d like to help support this work, please consider becoming a GitHub sponsor. If you’d like to make a larger contribution or if you want to partner with us directly to integrate some of our tech, please drop us a line.

Tuesday, 21. October 2025

FIDO Alliance

MobileIDWorld: Google Chrome Launches Automatic Passkey Generation for Android Users


Google Chrome has introduced a new automatic passkey implementation for Android that streamlines the user authentication process by automatically generating passkeys after password-based sign-ins. The development marks a significant advancement in the broader industry transition from traditional passwords to more secure authentication methods, following similar initiatives from Apple and Microsoft.


Biometric Update: BixeLab joins FIDO Face Verification program, certifies Aware 


Aware has received FIDO Alliance Certification for Face Verification, gaining recognition for its identity verification tech including liveness detection and facial matching capabilities.

The certification affirms that Aware’s identity verification platform meets the FIDO Alliance’s standards for biometric performance, security, and fairness. Testing was conducted by BixeLab, which recently revealed a new contract, CTO, and facility, and is one of only three labs globally accredited to evaluate biometric systems under the U.S. National Institute of Standards and Technology (NIST) NVLAP program.

“FIDO’s Face Verification Certification represents a powerful step toward a passwordless future built on trust, accuracy, and strong security,” said Ajay Amlani, CEO of Aware, Inc. “Earning this certification demonstrates not only our technological excellence but our deep commitment to transparency and innovation in biometrics.”


Biometric Update: HID upgrades passkey, FIDO authentication capabilities with IDmelon acquisition


Texas-based HID has reached an agreement to acquire Vancouver, Canada-based logical access control provider IDmelon to upgrade its portfolio of FIDO authentication offerings. The addition of IDmelon’s technology enables HID to easily turn customers’ physical access cards and mobile devices into FIDO2 security keys, according to the joint announcement.

IDmelon software users can turn existing identifiers like biometrics, physical credentials and smartphones into enterprise-grade FIDO security keys. IDmelon also provides hardware to support passkeys and other FIDO standards for secure and convenient access control.


Techstination Radio/Podcast: What you should know about passkeys for online security


Interview with FIDO’s Andrew Shikiar on what you should know about passkeys for online security.


WDEF News: Switching to Passkeys for Safety


CHATTANOOGA, Tenn. (WDEF) – October is Cybersecurity Month, a reminder for everyone to take small but meaningful steps to stay safe online. 


WTVM News: FIDO’s Megan Shamas talks online safety, using passkeys


Megan Shamas shares why passkeys may be more effective than passwords during Cybersecurity Month.


Hyperledger Foundation

ERC-4337 Smart Account Tutorial With Web3j


As part of my participation in the Web3j Libraries Full Development Lifecycle project under the LF Decentralized Trust Mentorship Program, I’ve developed an ERC-4337 Smart Account tutorial. It demonstrates how to create a minimal ERC-4337-compatible Smart Account, compile and deploy it with Web3j, and interact with it using specific Account Abstraction User Operations, including function calls and ETH transfers between Smart Accounts.
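
For a feel of what the tutorial walks through, here is a minimal, hypothetical Java sketch of the flow. It assumes a SmartAccount wrapper generated by Web3j's code generator from the tutorial's Solidity contract, a local node, and a bundler that accepts the ERC-4337 eth_sendUserOperation method; the addresses and hex fields are placeholders, not the tutorial's actual code.

import org.web3j.crypto.Credentials;
import org.web3j.crypto.Keys;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.Request;
import org.web3j.protocol.core.methods.response.EthSendTransaction;
import org.web3j.protocol.http.HttpService;

import java.util.List;
import java.util.Map;

public class SmartAccountSketch {
    public static void main(String[] args) throws Exception {
        HttpService node = new HttpService("http://localhost:8545"); // assumed local node/bundler
        Web3j web3 = Web3j.build(node);
        Credentials owner = Credentials.create(Keys.createEcKeyPair()); // ephemeral demo key

        // 1. Deploy the smart account via the wrapper that `web3j generate` produces
        //    from the tutorial's Solidity source (hypothetical class name):
        // SmartAccount account = SmartAccount.deploy(web3, owner, new DefaultGasProvider()).send();

        // 2. Build a (simplified) ERC-4337 UserOperation targeting the account.
        Map<String, Object> userOp = Map.of(
                "sender", "0xSmartAccountAddress",  // placeholder
                "nonce", "0x0",
                "callData", "0xEncodedExecuteCall", // placeholder
                "signature", "0xOwnerSignature");   // placeholder

        // 3. Submit it to the bundler with the ERC-4337 RPC method; the result is
        //    the userOpHash rather than an ordinary transaction hash.
        Request<?, EthSendTransaction> request = new Request<>(
                "eth_sendUserOperation",
                List.of(userOp, "0xEntryPointAddress"), // placeholder EntryPoint
                node,
                EthSendTransaction.class);
        System.out.println(request.send().getResult());
    }
}

The full tutorial covers the real contract code, gas handling, and signing of the UserOperation hash.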


ResofWorld

The world needs more spaceports. Oman wants to help

Can a small country build a space industry from scratch?
Oman’s dream for a spaceport sits in a swath of desert overlooking the Arabian Sea. If all goes according to plan, three separate complexes here will launch everything from small...

Monday, 20. October 2025

ResofWorld

Japanese convenience stores are hiring robots run by workers in the Philippines

Filipino tele-operators remotely control Japan’s convenience store robots and train AI, benefiting from an uptick in automation-related jobs.
Inside a multistory office building in Manila’s financial district, around 60 young men and women monitored and controlled artificial intelligence robots restocking convenience store shelves in distant Japan.  Occasionally, when...

Friday, 17. October 2025

OpenID

Second Implementer’s Draft of OpenID Connect Native SSO for Mobile Apps Approved


The OpenID Foundation membership has approved the following as an OpenID Implementer’s Draft:

OpenID Connect Native SSO for Mobile Apps 1.0: https://openid.net/specs/openid-connect-native-sso-1_0-ID2.html 

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This Implementer’s Draft is the product of the OpenID AB/Connect Group.

The voting results were:

Approve – 57 votes
Object – 0 votes
Abstain – 28 votes

Total votes: 85 (out of 395 members = 21.5% > 20% quorum requirement)

  Marie Jordan – OpenID Foundation Secretary
About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net



The post Second Implementer’s Draft of OpenID Connect Native SSO for Mobile Apps Approved first appeared on OpenID Foundation.


ResofWorld

EVs are depreciating much faster than gas-powered cars

Plummeting resale values are threatening to derail the world's transition to electric transportation.
The resale value of electric vehicles is collapsing worldwide, hurting private owners and fleet operators. The crisis became especially apparent when BluSmart, India’s pioneering all-electric ride-hailing service, collapsed in April...

OpenID

OIDF applauds new FIDO and Shared Signals whitepaper


The OpenID Foundation welcomes the publication of a new whitepaper from the FIDO Alliance that examines how FIDO authentication and the Shared Signals Framework (SSF) work together to address enterprise security challenges.

We recognize the significant effort by the FIDO Enterprise Deployment Working Group to illustrate how aligned our specifications are, and we welcome the opportunity to partner on future versions of this whitepaper and any implementer guides. This work will evolve as our respective specifications evolve, and tangible expert guidance from this FIDO report will help accelerate industry adoption and realize the benefits of these complementary security standards.

Beyond authentication to continuous security evaluation

Many enterprise identity and access management systems operate independently, which limits visibility across platforms. FIDO protocols address one part of this challenge by providing strong, phishing-resistant authentication that uses public key cryptography to eliminate password vulnerabilities. But authentication is just the first step. Organizations also need continuous visibility into session status and emerging security risks after users log in.

This is where the SSF comes in. It enables secure, real-time exchange of identity and security events across different systems and vendors. These events include risk signals, credential compromise notifications, and session revocations. When organizations integrate FIDO authentication with SSF protocols – Continuous Access Evaluation Protocol (CAEP) and Risk Incident Sharing and Coordination (RISC) – they can make timely, informed decisions throughout the user session lifecycle.
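
To make the signal exchange concrete, here is a minimal sketch in Java of a transmitter building a signed Security Event Token (SET) that carries a CAEP session-revoked event, using the Nimbus JOSE+JWT library. The issuer, audience, subject shape, and key handling are illustrative assumptions rather than an official SSF implementation; real deployments follow the SSF specifications for subject identifiers and delivery.

import com.nimbusds.jose.JOSEObjectType;
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.ECDSASigner;
import com.nimbusds.jose.jwk.Curve;
import com.nimbusds.jose.jwk.ECKey;
import com.nimbusds.jose.jwk.gen.ECKeyGenerator;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

import java.time.Instant;
import java.util.Date;
import java.util.Map;
import java.util.UUID;

public class CaepSetSketch {
    public static void main(String[] args) throws Exception {
        // Transmitter signing key; in practice a managed key published via the
        // transmitter's JWKS endpoint, not generated per run.
        ECKey key = new ECKeyGenerator(Curve.P_256).keyID("transmitter-key-1").generate();

        // The CAEP session-revoked event, keyed by its standard event-type URI.
        Map<String, Object> subject = Map.of("format", "opaque", "id", "session-12345");
        Map<String, Object> event = Map.of(
                "subject", subject,
                "event_timestamp", Instant.now().getEpochSecond());
        Map<String, Object> events = Map.of(
                "https://schemas.openid.net/secevent/caep/event-type/session-revoked", event);

        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .issuer("https://transmitter.example.com")  // assumed transmitter
                .audience("https://receiver.example.com")   // assumed receiver
                .jwtID(UUID.randomUUID().toString())
                .issueTime(new Date())
                .claim("events", events)
                .build();

        // SETs use the "secevent+jwt" typ header (RFC 8417).
        SignedJWT set = new SignedJWT(
                new JWSHeader.Builder(JWSAlgorithm.ES256)
                        .type(new JOSEObjectType("secevent+jwt"))
                        .keyID(key.getKeyID())
                        .build(),
                claims);
        set.sign(new ECDSASigner(key));

        // This compact JWT is what a transmitter pushes to a receiver's endpoint.
        System.out.println(set.serialize());
    }
}

A receiver would validate the signature against the transmitter's published keys and then act on the event, for example by terminating the named session.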

Practical applications

The paper describes practical applications across the user lifecycle, from onboarding and role changes to offboarding and account recovery. Organizations can use shared signals to automatically disable compromised accounts, trigger step-up authentication when higher assurance is needed, and improve federated login with better session monitoring. SSF creates a common language for security events, allowing different systems to share information and coordinate responses even across organizational boundaries.

Industry adoption and testing resources

“Security doesn’t end at login, and this whitepaper illustrates how FIDO and Shared Signals work as complementary pieces of a comprehensive security strategy. FIDO gets you in the door securely, and SSF ensures you maintain the right level of access throughout your entire session. Together, they enable the zero standing privilege architectures that enterprises need to defend against today’s sophisticated identity-based attacks,” said Atul Tulshibagwale, co-chair of the OpenID Foundation’s Shared Signals Working Group.

Apoorva Deshpande, an editor of the FIDO paper and contributor to the OpenID Foundation’s Shared Signals Working Group, said “FIDO has provided the gold standard for strong, phishing-resistant authentication at the front door. The OpenID Shared Signal Framework secures the entire session with continuous and real-time signals. This powerful combination enables disparate systems to work in concert, automates responses, and instantly revokes access when risk is detected, building a truly collaborative, signal-driven security architecture for the future.”

The OpenID Foundation encourages organizations to implement FIDO and Shared Signals specifications. To support this effort, draft open-source test suites for Shared Signals are now freely available to help organizations validate their implementations. Users should note that when creating a test plan, they will need to enable the “show early version tests” option in order to access the SSF test plans.

Thomas Darimont from the OpenID Foundation’s Certification Team added: “The Shared Signals Framework tests, as part of the OpenID Conformance Testsuite, are built to help implementers validate SSF transmitter and receiver interoperability and strengthen their integrations with confidence. We encourage everyone building on SSF to use these tests early and often and to share feedback, so we can continue improving the ecosystem together.”

Feedback on the specifications is welcome from implementers; please contact the OpenID Foundation’s certification team at certification@oidf.org.

Later this year, the OpenID Foundation will make self-certification to the Shared Signals specifications available to implementers at modest fees, with discounts for members. Developers can build to the tests, prove out their implementations, and demonstrate conformance to others.

Shared signals demo at FIDO Authenticate

At FIDO Authenticate this week, attendees observed shared signals in action. More on the findings from this demonstration, the fourth shared signals interop in two years, will be shared separately.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net

The post OIDF applauds new FIDO and Shared Signals whitepaper first appeared on OpenID Foundation.

Thursday, 16. October 2025

FIDO Alliance

Authenticate 2025: Day 3 Recap


By: FIDO staff

The first two days of Authenticate 2025 delivered strong technical content, user insights and lots of thoughtful discussions.

The final day of Authenticate 2025 went a step further, taking attendees on a deep dive into important current and emerging topics for authentication, including biometrics, agentic AI, and verifiable credentials.

Passkeys and Verifiable Digital Credentials are Not Competitors

A key theme across multiple sessions at Authenticate 2025 was the growing need for, and development of, standards for Verifiable Digital Credentials.

In a session led by Christine Owen, Field CTO at 1Kosmos, and Teresa Wu, Vice President, Smart Credentials & Access at IDEMIA Public Security, the speakers clarified the roles of passkeys and verifiable digital credentials (VDCs) within the evolving landscape of secure digital identity.

They emphasized that passkeys and VDCs are not competing technologies. Instead, they are best used together to strengthen both authentication and identity verification processes. Passkeys offer privacy preservation and are resistant to phishing, while VDCs provide digital representations of identity attributes that can be selectively shared when needed.

Breaking Glass: Restoring Access After a Disaster

In a thought-provoking session, Dean H. Saxe, Principal Security Engineer, Identity & Access Management at Remitly, explored the challenges and importance of digital estate management, particularly in the context of disasters and emergencies. 

Saxe described how personal experiences and recent natural catastrophes highlight the necessity of preparing for sudden loss of access to digital assets.

A hands-on experiment conducted by Saxe tested how well a “break glass” process works when all personal devices are lost. The process included relying on physical identity documents and a safe deposit box to regain access to important accounts like 1Password, Apple iCloud, and Google services. Saxe faced unexpected obstacles, such as a missing credential and issues getting recovery codes, which illustrated the real-world difficulties of these situations.

The findings of Saxe’s experiment stressed the need for regular testing and updating of disaster preparedness plans.

“So the failure to test your backup strategy means that you do not have a valid backup strategy,” Saxe said.

From the Trenches: Passkeys at PayPal

PayPal is an early adopter of passkeys, with its initial motivation focused on reducing password reliance.

“It’s time to break free from the password prison,” Mahendar Madhavan, Director of Product, Identity at PayPal said.

PayPal launched passkeys in 2022, saw a surge in adoption in mid-2024, and now boasts more than 100 million enrolled users with a 96% login success rate. The surge has delivered results: phishing-related losses have dropped by nearly half compared to traditional password and OTP methods.

Mohit Ganotra, Identity PM Lead at PayPal, explained that initial efforts zeroed in on user education and reducing friction during login. By optimizing the login experience and targeting enrollment prompts during checkouts and password recovery, PayPal now sees 300,000 incremental enrollments each month from checkout alone, plus 75,000 from automatic passkey upgrades.

“Passkeys is still a new technology, it needs to go through the adoption curve that every new technology has,” Madhavan said. “So you as a relying party need to nudge users, guide users, encourage users to adopt a passkey at various points in their journey and how you do it is, you hyper personalize the content for consumers and users, and you talk in their language.”

Safeguarding Enterprise Online Credentials Post Authentication

While passkeys secure the authentication step itself, the post-authentication session remains vulnerable to bearer token theft and session hijacking.

There are, however, numerous technical approaches that can help mitigate the risk, which were described in detail by An Ho, Software Solution Architect at IBM, and Shane Weeden, Senior Technical Staff Member at IBM.

The session introduced two complementary technologies designed to address this vulnerability. DPoP (Demonstrating Proof of Possession) extends OAuth 2.0 to create sender-constrained access and refresh tokens for API flows, while DBSC (Device-Bound Session Credentials) binds browser session cookies to specific devices. Both technologies use asymmetric cryptography to ensure that stolen credentials become unusable by attackers, as they require proof of possession of private keys that only the legitimate client or browser holds.
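To make the DPoP half of this concrete, here is a minimal sketch of constructing a DPoP proof in TypeScript with the jose npm library; the endpoint URL and the accessToken variable are illustrative, not from the session. DBSC works analogously at the browser layer, binding session cookies rather than OAuth tokens to a device-held key.

```ts
import { generateKeyPair, exportJWK, SignJWT } from "jose";

// Generate an ephemeral key pair; the public half travels inside each proof.
const { publicKey, privateKey } = await generateKeyPair("ES256");
const publicJwk = await exportJWK(publicKey);

// A DPoP proof (RFC 9449) is a short-lived JWT bound to one HTTP request.
const proof = await new SignJWT({
  htm: "POST", // HTTP method of the request being made
  htu: "https://api.example.com/transfer", // illustrative target URI, no query/fragment
})
  .setProtectedHeader({ typ: "dpop+jwt", alg: "ES256", jwk: publicJwk })
  .setJti(crypto.randomUUID()) // unique ID so the server can reject replays
  .setIssuedAt()
  .sign(privateKey);

// The proof rides alongside the sender-constrained access token. The server
// verifies the proof's signature and that the token is bound (via its cnf.jkt
// thumbprint) to this same public key, so a stolen token alone is useless:
//
//   await fetch("https://api.example.com/transfer", {
//     method: "POST",
//     headers: { DPoP: proof, Authorization: `DPoP ${accessToken}` },
//   });
```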

“We believe that you need to look at a holistic view of your sessions,” Weeden said. “You need to look at not just how clients and users log in, but also how to maintain a form of continuous authentication with the client or browser that is utilizing that session.”

From the Trenches: Improving Experience and Security at Databricks with Passkeys  

Meir Wahnon, Co-Founder of Descope, explored how Databricks approached the challenges of unifying authentication and improving security across multiple cloud-based apps.

Databricks partnered with Wahnon’s company to figure out the best approach. The fragmented login experience had made it hard for users and the IAM team to manage access and maintain full visibility. Databricks tackled this by adopting a centralized identity provider and federation to ensure a more seamless single sign-on process. A major focus was the decision to add passkeys as an optional multi-factor authentication method. This choice was driven by Databricks’ commitment to balancing strong security for customers with a smooth, low-friction user experience.

The deployment of passkeys came with careful attention to user adoption and support. Databricks made passkeys optional to minimize disruption, and included easy rollback options if customer uptake became a challenge.

“The balance between user experience and security is always a question when you build a user journey,” Wahnon said.

From the Trenches: Alibaba’s Passkey Story

Alibaba is expanding its use of passkey authentication across business units including AliExpress and DingTalk. 

Preeti Ohri Khemani, Senior Director at Infineon Technologies, which works with Alibaba, explained that the main goal was to improve security and user experience by reducing dependence on traditional passwords and costly SMS one-time passwords. The rollout has led to faster, more convenient logins and a smoother registration process for users.

On AliExpress, the deployment of passkeys simplified the login flow and eliminated extra steps for users. This change resulted in a reported 94% increase in login success rates along with an 85% reduction in login times. Users no longer need to manage passwords or wait for verification codes, which also lowered operational costs and security risks.

DingTalk, Alibaba’s internal messaging platform with 28 million daily active users, has similarly benefited from passkey integration. Engineers at Alibaba focused on making passkey adoption easy by sharing clear coding samples, open-source libraries, and helpful tools.

Keynotes: The Path to Digital Trust

Ashish Jain, CTO of OneSpan, used his keynote to explore the ongoing challenge of establishing trust in digital interactions. Jain traced the journey from physical trust in face-to-face transactions to today’s anonymous digital world.

Jain outlined the tension between user experience and security. He cited how complex password policies and frequent multi-factor authentication prompts can frustrate users, yet they are essential for protection. The discussion highlighted how the industry is coming closer to a practical solution through the adoption of passkeys.

 “In the physical world, trust is emotional,” Jain said. “In the digital world, trust is an architecture.”

Keynote:  Biometrics Underpinning the Future of Digital Identity

Continuing on many of the same themes from Amlani’s keynote, Stephanie Shuckers, Director of the Center for Identification Technology Research (CITeR) at the University of North Carolina at Charlotte, and Gordon Thomas, Sr. Director, Product Management at Qualcomm, provided more insights on the critical nature of biometrics.

Thomas noted that while face recognition remains popular, fingerprints offer enhanced privacy because they are less likely to be exposed online or through surveillance.

“It’s not really about proving who you are, but it’s about building and securing your digital identity layer by layer with trust every time you use it,” Thomas said.

Shuckers noted that there is a need for strong assurance levels in biometric technology on consumer devices. That’s where standards help ensure both user safety and usability. The FIDO Alliance’s programs test biometric systems for vulnerabilities such as deep fakes and injection attacks. These certifications are crucial for building trust in digital identity systems. 

Keynote: Microsoft Details What’s Needed to Authenticate Agentic AI

Pamela Dingle, Director of Identity Standards at Microsoft, led a session on the challenges and opportunities in authenticating AI agents within enterprises.

She stressed the importance of understanding what an agent is and pointed out that simply asking “who authenticates the agent” is not enough. Dingle highlighted the complexity that arises from having many agents running in different domains, each with unique tasks and identifiers. Administrators often struggle to see the full chain of actions, which complicates decision making and resource management.

Dingle introduced the idea of using “blueprints” and “task masters” to authenticate not just the agent but also the context and source of its tasks. She emphasized that knowing only the identifier is not enough. The future will require richer, composite data about each agent’s purpose and origin.

“The agentic AI push gives us an opportunity to build the tools enterprises need to run better,” Dingle said.

Keynote Panel: Digital Wallets and Verifiable Credentials: Defining What’s Next 

Verifiable credentials were a hot topic at Authenticate 2025, and the subject was tackled head-on in the final keynote panel.

The panel included Teresa Wu, Vice President, Smart Credentials and Access at IDEMIA Public Security, Loffie Jordaan, Business Solutions Architect at AAMVA, Christopher Goh, International Advisor, Digital Identity & Verifiable Credentials at Valid8 and Lee Campbell, Identity and Authentication Lead, Android at Google.

The discussion began with an overview of the ecosystem, emphasizing the interaction between the wallet, issuer, and relying party. This “triangle of trust” serves as the cornerstone for secure digital credential use. Panelists stressed the need for privacy, interoperability, and certification as this shift accelerates, highlighting lessons learned and ongoing challenges like fragmentation across platforms.

FIDO Alliance’s growing focus on digital credentials was described as a catalyst for industry progress. “FIDO is getting involved in the digital credential space,” Campbell said. “FIDO does an exceptional job at execution.”

That’s a Wrap!

Wrapping up the Authenticate 2025 program, FIDO Alliance Executive Director Andrew Shikiar emphasized that the event continues to grow year by year.

The 2025 event featured 150 sessions and 170 speakers.

“Passkeys are driving measurable business outcomes,” Shikiar said. “One thing I thought was really cool this year about some of the presentations, it wasn’t just another ‘rah rah’ passkeys are great story, but also companies are coming back for their second time or third time, talking about progress and lessons learned and how they’re evolving, pivoting and growing.”

Speaking of growth, the Authenticate event is growing for 2026, with a new Authenticate APAC event set for June 2-3 in Singapore. Authenticate 2026 will be back in California at the same time next year.

Between now and then, the FIDO Alliance will be sharing lots of informative content and hosting educational events. Stay connected and sign up for updates.


Digital ID for Canadians

A Milestone in Enhancing Digital Trust for Lawyers: Digitally Verifying Client ID

Letter from our President

In an era when identity fraud is evolving rapidly, our collective efforts to defend the integrity of client-lawyer relationships have never been more vital. Today, I’m proud to reflect on a significant step forward: the launch of the PCTF Legal Professionals Profile Final Recommendation V1.1, the first industry-specific profile under the Pan-Canadian Trust Framework (PCTF). diacc.ca

This development is more than a technical standard; it’s a symbol of what’s possible when regulators, legal professionals, and technology providers protect clients, preserve trust, and reduce risk on the front lines of legal practice.

Why This Profile Matters

The PCTF Legal Professionals Profile establishes Conformance Criteria for how services conduct client identity verification (IDV) on behalf of lawyers and their agents, in a manner that is auditable and consistent.

Here’s what it does:

Reduces variability and risk by requiring that third-party agents meet minimum assurance criteria when verifying client identity.
Clarifies expectations for service providers, helping them design identity solutions that align with law society requirements, avoid duplication of effort, and reduce uncertainty.
Bridges practice and regulation by creating a pathway supporting compliance, enabling lawyers to rely on trusted, certified identity services rather than reinventing idiosyncratic internal solutions. diacc.ca

In short, this profile turns what was once discretionary or opaque into something auditable, transparent, and scalable.

The Challenge We Face: Rising Fraud, Rapid Change

The timing of this launch is critical. Fraud and identity theft remain persistent threats in Canada’s digital era:

In 2023, the police-reported rate of general fraud increased by 12% compared to 2022, despite a decline in incidents of identity fraud and identity theft. Statistics Canada
In 2024 alone, Canadians lost $638 million to fraud. Canada.ca
A 2025 Equifax study found that 48% of Canadians personally know someone who was a victim of identity theft. equifax.ca

These numbers reflect only a fraction of what’s really happening; many victims don’t report fraud, and many attacks go undetected for long periods.

The stakes are especially high in the legal sector: lawyers have fiduciary responsibilities, handle funds, and often deal with clients remotely. The accuracy and trustworthiness of identity verification are crucial to maintaining legal integrity.

A Shifting Regulatory Landscape

Law societies and regulatory bodies are recalibrating in response to shifting norms and evolving risks:

Jurisdictions have rescinded the pandemic-era relaxations that allowed remote client verification via video calls alone. diacc.ca
The updated Client Identification and Verification (CIV) Rules now require that virtual verification utilize authentication technology capable of confirming the authenticity of government-issued IDs, rather than merely displaying them over video. Law Society of Alberta
The use of third-party agents is now more explicitly permitted, provided the agent complies with the regulatory criteria, allowing lawyers to exercise controlled flexibility in how they operationalize identity checks. Law Society of Alberta | Law Society of British Columbia

These changes place greater demands on vendors, law firms, and regulators, but also create openings for innovation, standardization, and certainty.

Progress Through Collaboration

What makes the release of the DIACC PCTF Legal Professionals Profile especially meaningful is that it represents progress through collaboration:

Information was sought from the Federation of Law Societies of Canada to assist in creating the Legal Professionals Profile, enabling the tailoring of verification requirements to real-world legal workflows.
Identity technology providers, such as Treefort Technologies, one of the first services to earn PCTF certification under the DIACC program, now have visible, auditable pathways to align with the legal sector’s expectations. Treefort’s early certification signals to lawyers and law societies that robust verification solutions are market-ready. Treeforttech
Legal professionals and firms now have a more straightforward path to choosing verification services that conform, not just on paper, but in practice. The DIACC Member Services Directory and the Trusted List of certified services are ready reference points. diacc.ca

By stepping into a coordination role as neither vendor nor regulator, DIACC has helped create common ground. Such neutral convening is rare but essential in domains where trust, regulation, and technology must intersect.

What This Means for Canadian Lawyers on the Front Lines

For lawyers, especially those serving clients remotely or handling high-risk transactions, the implications are real:

Less friction in onboarding new clients, because lawyers can confidently outsource identity checks to trusted services.
Reduced liability and regulatory risk, because the verification process is auditable and traceable.
More consistency in expectations across jurisdictions, reducing the burden of navigating different local rules.
Improved client confidence, as clients increasingly expect digital convenience without compromising security.

Consider the example of remote identity verification in real estate law. Lawyers like “Jamie” can now verify client identity using vendor services that combine document authentication, facial matching, liveness checks, and risk assessments, without requiring clients to come into the office. diacc.ca

It’s a tangible shift: trust, remote convenience, and compliance can coexist.

The Road Ahead: Opportunities & Challenges

This milestone is a launch point, not a finish line. Here’s what remains:

Broad adoption: The value of the Profile grows only when law societies, large firms, small practices, and vendors all adopt it.
Ongoing certification rigour: As bad-actor fraud techniques advance (e.g., AI-assisted deepfakes, synthetic identity attacks), the conformance criteria must evolve.
Education and support: Many lawyers will require help in selecting, integrating, and monitoring identity verification services.
Interoperability across sectors: Legal identity verification must increasingly interoperate with banking, government, and other trust ecosystems.
Monitoring outcomes and feedback loops: We need to measure how this framework reduces fraud, speeds onboarding, and builds confidence, and iterate accordingly.

In Gratitude and in Resolve

To everyone who has supported this effort, from regulators to identity solution providers and legal professionals, thank you. This profile is stronger because of your feedback, engagement, and dedication.

Our work is far from done. As fraudsters refine their tactics, we must continue to refine, adapt, and collaborate. That is the spirit of digital trust: not static defence, but evolving resilience.

DIACC remains committed to serving as a neutral, trusted enabler in this journey. We will continue to expand resources, convene stakeholders, monitor real-world outcomes, and raise the bar as threats evolve.

May this milestone mark what we’ve collectively accomplished and catalyze what comes next, a legal sector where client verification is seamless, fraud is harder to commit, and trust is foundational to every digital legal interaction.

Joni Brennan
President, DIACC

Further Reading:

https://diacc.ca/2025/09/22/the-diacc-releases-its-pctf-legal-professionals-profile-final-recommendation-v1-1/
https://diacc.ca/law-societies/
https://www.statcan.gc.ca/o1/en/plus/7905-how-much-fraud-affecting-canadians-and-canadian-businesses
https://www.canada.ca/en/competition-bureau/news/2025/02/fraud-prevention-month-to-focus-on-impersonation-fraud-one-of-the-fastest-growing-forms-of-fraud.html
https://www.equifax.ca/about-equifax/newsroom/-/intlpress/fraud-concerns-are-escalating-with-89-per-cent-of-canadians-saying-companies-must-do-more/
https://www.lawsociety.ab.ca/amendments-to-client-identification-and-verification-rules/
https://treeforttech.com/2025/02/04/law-societies-provide-helpful-guidance-on-the-client-id-rules/
https://diacc.ca/digital-trust-stories/enhancing-legal-services-delivery-with-remote-identity-verification/

DIACC is Where Digital Trust Means Business

Contact us to be a part of the change you want to see, stay informed about developments in digital trust and identity verification, and learn how you can contribute to discussion drafts or become a member.


FIDO Alliance

Authenticate 2025: Day 2 Recap


By: FIDO Staff

Following on the information-packed day one, day two of Authenticate 2025 continued the trend.

Over the course of the day, users from across different geographic areas and industry verticals detailed their experiences with passkeys. Discussion on how passkeys fit into the payment ecosystem and the intersection with agentic AI were also hot topics of discussion across multiple sessions. 

Keynotes: A Brief History of Strong Authentication

Christopher Harrell, Chief Technology Officer at Yubico, kicked off the morning keynotes by tracing the journey of authentication practices from basic shared secrets to the modern era.

Harrell outlined how early systems based on shared secrets and memorized passwords often failed due to human error and simplicity. Multi-factor authentication was introduced to address these gaps by layering security, but still relied heavily on passwords or similar secrets. He noted that the evolution of the market to passkeys eliminates the vulnerabilities of shared secrets and reduces the chance of phishing, making access both safer and easier for users.

“Shared secrets were never meant for the internet, we need authentication that protects you without making you remember more,” Harrell said.

Keynotes: Passkey Adoption in the UK

The United Kingdom (UK) has taken a big leap into passkeys, embracing their usage at the national level.

Darren Hutton, Identity Advisor for NHS England and Pelin Demir, UX Designer for NHS Login, detailed the adoption path and success of passkeys in the UK. The presenters shared how NHS Login serves as a nation-level identity provider for healthcare access, reaching almost the entire adult population. They discussed the evolution from passwords and OTPs to introducing passkeys. The move aimed to improve both security and accessibility for all users.

Insights from their user research revealed that although over three million users adopted passkeys within months, there were challenges. These included inconsistent user interfaces, confusion around technical terms, and accessibility barriers for screen reader users. The team found that clear guidance and familiar wording were critical to increasing adoption.

“Passkeys is a beautiful balance of technology that brings security and usability together to create a really good service,” Hutton said.

Leaders from the National Cyber Security Centre (NCSC) in the UK detailed the strong imperative to move to passkeys, noting that the majority of cyber harm to UK citizens happened through abuse of legitimate credentials.

Keynote: Visa Details Payment Passkey Efforts

Ben Aquilino, VP, Global Head of Visa Payment Passkeys and Digital Identity at Visa, explored the evolution of digital payment security from the earliest days of online commerce to the present.

Aquilino used the history of Pizza Hut’s first online order in 1994 as a gateway to highlight how payment experiences have changed due to rising concerns over fraud, describing how simple early processes became more complex to counter increasingly sophisticated threats.

A significant portion of the session focused on the technological advancements used to combat payment fraud.

Aquilino also detailed Visa’s recent efforts to innovate further by launching Visa Payment Passkeys. This new approach leverages passkeys and biometrics for payment authentication, aiming to offer better protection along with a seamless user experience.

“Authentication doesn’t have to be a compromise between security and convenience; it can have both,” Aquilino said.

Keynote Panel: Quantifying Passkey Benefits from Early Adopters 

In a keynote panel session led by FIDO Alliance Executive Director Andrew Shikiar, industry leaders from PayPal, NTT DOCOMO and Liminal explored the ongoing shift in the authentication landscape.

Koichi Moriyama, Chief Security Architect at NTT DOCOMO and Rakan Khalid, Head of Product, Identity at PayPal, recounted the journey from initial pilots to broader adoption, detailing technical evolution and lessons learned. Khalid emphasized the impact of evolving authentication standards on customer experience, while Moriyama described Docomo’s commitment to ecosystem-wide security improvements.

A recurring message throughout was the proven effectiveness and industry momentum behind passkey authentication. Survey data from Liminal revealed that most decision-makers now rank passkeys as their top priority for authentication investments. 

“The big surprise in the survey was that passkeys really have moved from pilot to priority,” Filip Verley, Chief Innovation Officer at Liminal said. “We’re seeing huge adoption and nearly every adopter is very satisfied.”

Both PayPal and Docomo shared that organizational and customer metrics improved after moving away from passwords, including increased sign-in success and reduced account takeovers.

“When customers use passkey, we see about a 10-point increase in sign-in success rate over traditional multi-factor authentication,” Khalid said.

From the Trenches: Shipping Passkeys for Hundreds of Millions of users at TikTok

TikTok’s session offered a comprehensive look at its journey to implement passkeys as a login method for hundreds of millions of users. 

The team faced the challenge of introducing passkeys in a way that would not disrupt the user experience. TikTok chose to promote passkeys through a campaign on user profile pages, leading to high engagement rates and a marked increase in adoption. Most users who set up passkeys did so thanks to the visibility and education presented within the app.

Passkey login was not only made the default for users who had enabled it, but TikTok also streamlined the signup process. 

“Overall, it has been a great journey with Passkeys and TikTok,” Yingran Xu, Software Engineer at TikTok said. “Passkey remains one of the authentication methods with the highest success rate and fastest login experience.”

From the Trenches: Lessons Learned from Roblox’s Passkey Deployment

Roblox’s effort to deploy passkeys across its platform is a response to the complex security needs of a massive and diverse user base. 

With more than half of Roblox users under 13, the challenge was to design an authentication system that is easy for children while still robust enough for professionals handling accounts with significant financial stakes. The team aimed to make access secure and simple without passwords, reducing both user frustration and customer support issues tied to account recovery.

Through a phased rollout that began with passkeys in user settings and later added passkey options during account sign-up, Roblox has shown measurable progress. Eighteen percent of active users have adopted passkeys, which led to greater engagement and higher login success rates. Experiments with the user interface revealed that highlighting passkeys at pivotal moments, such as account recovery, can drive adoption as long as users are guided clearly and are not forced through abrupt changes.

Ongoing improvements focus on making passkeys easier to use and more accessible, especially as many Roblox players move between multiple device types. An adaptive login flow led to more passkey logins and fewer users defaulting to traditional passwords. There are also new protections for top game creators, who are frequent phishing targets, ensuring only secure login methods are available for valuable accounts.

“Our vision is that all Roblox users should have secure and accessible accounts without passwords, powered by passkeys,” Yuki Bian, Product Manager at Roblox said.

From the Trenches: Using Windows Hello to Enable Passkeys for SSO

Single Sign-On (SSO) is a common approach enabling users in enterprise environments to use a single credential to get access to multiple applications.

In a deep dive session, Amandeep Nagra, Sr. Director, Identity and Access Management at Crowdstrike detailed how Windows Hello for Business was implemented as a passkey solution for seamless Single Sign-On across enterprise devices. By turning device logins into trusted passkeys, users no longer needed to remember passwords or manage separate app authentications.

The solution involves generating a device-level PRT token using Windows Hello for Business PINs, which enables SSO across various apps. The project saved 78,000 hours of work annually.

“We turned the device login into your passkey—one sign-in, access to everything,” Nagra said.

From the Trenches: Modernizing Authentication with True Passwordless at Docusign

DocuSign is a leading provider of electronic agreement solutions that help individuals and businesses sign documents and manage contracts online. Security and identity verification are critical to its platform, as users rely on DocuSign to complete transactions that often involve sensitive or high-value documents, such as home purchases, business contracts, and legal agreements.

To meet rising threats and user demand for easier, safer access, DocuSign is working to make passwordless authentication the default experience.

The company’s authentication team has introduced passkeys, enabled biometrics, and streamlined account recovery methods. Their goal is to give users secure, reliable, and effortless ways to verify identity, whether that’s logging in to review paperwork or using a mobile device to approve a high-stakes deal.

Yuheng Huang, Engineering Manager at Docusign noted that the login success rate for passkeys on DocuSign is 99%. In contrast, the password login success rate is only 76%.

Going beyond just authentication, Dina Zheng, Product Manager at Docusign explained that DocuSign is using passkeys with the company’s identity wallet.

“By combining capabilities with identity wallet, we’ve created a fully frictionless experience, secure enough for identity verification, yet simple enough that users barely notice the authentication step at all,” Zheng said. “This is a perfect example of how passkeys can go beyond just authentication. They’re becoming an enabler of trusted high assurance workflows across Docusign.”

Panel: Industry Perspectives on Securing Agent-Based Authentication

With the emergence of agentic AI, there are new concerns and challenges about how to secure and authenticate agents.

A panel moderated by Eran Haggiag, CEO at Glide Identity, and featuring Lee Campbell, Identity and Authentication Lead, Android at Google, Rakan Khalid, Head of Product, Identity at PayPal, and Reid Erickson, Product Management, Network API at T-Mobile, discussed the challenges of trust and security in agent-based authentication.

Key points included the need for phishing-resistant authentication methods like passkeys and verifiable credentials to ensure user intent and prevent fraud. The discussion highlighted the importance of standardization, context-aware authentication, and human-in-the-loop verification to mitigate risks. 

“There’s lots of work going on, lots of companies are involved, lots of standards bodies involved with every single standards body out there today having some agentic group,” Campbell said. “Everybody’s talking about it, and one of the challenges is getting everyone and all the right players in the same room to have these conversations. And I think FIDO is actually quite a good place to do this.”

The Big Finale is Coming on Day 3!

While the first two days of Authenticate 2025 were stacked top to bottom with insightful sessions, Day 3 will deliver even more content.

With even more user stories coming, plus discussions on verifiable digital credentials and digital trust, Day 3 will not disappoint.

Not registered? Don’t miss out! Attend remotely and access all previous sessions on demand, and attend day 3 live via the remote attendee platform! See the full agenda and register.


ResofWorld

China, Taiwan, and the vulnerable web of undersea cables

In his new book, The Web Beneath the Waves, writer Samanth Subramanian examines big tech’s role in subsea cables and growing geopolitical tensions around them. This is an excerpt on the threats that Taiwan faces from China.
In February 2023, half a year or so before I visited, a couple of Chinese ships cut two domestic undersea cables running out from Taiwan’s main island. One of these,...

Wednesday, 15. October 2025

Digital ID for Canadians

DIACC Unveils Digital Trust Adoption Dashboard: Transparency for a Connected Canada


October 16, 2025 – We are thrilled to announce the launch of DIACC’s Digital Trust Adoption Dashboard, a transformative, interactive tool developed by our Adoption Expert Committee (AEC), marking a pivotal moment in Canada’s digital trust journey.

This public resource maps the evolution of digital trust programs across Canada’s provinces and territories. By combining data from government sources with an interactive, map-based interface, the dashboard reveals the current state of digital verification and authentication services, trust program adoption, interoperability, and maturity across the country.

Why This Matters for Canada

Digital trust is no longer optional; it is foundational infrastructure for a competitive, secure, and inclusive Canadian economy. Citizens deserve seamless, secure access to services. Businesses require trusted digital transactions to thrive. Governments need efficient, interoperable systems to serve their constituents effectively.

This dashboard empowers decision-makers, innovators, and policymakers to move beyond assumptions and into action. It reveals where progress is accelerating, where gaps exist, and, most critically, where strategic alignment and collaboration can unlock exponential value for all Canadians. In an era where digital capability determines economic resilience, this transparency is not just valuable; it is essential.

Built on Collaboration, Designed for Impact

Developed through rigorous collaboration among the AEC, government partners across jurisdictions, and DIACC’s dedicated team, this dashboard demonstrates the power of public-private collaboration. Each data point reflects publicly observable information, and, to the greatest extent possible, we’ve validated the data for accuracy.

The AEC will maintain and evolve this living resource, with regular reviews and updates triggered by significant program or policy developments. Regular monitoring ensures the dashboard remains not just current but actionable, a dynamic tool that grows in value as Canada’s public-sector digital trust services ecosystem matures.

A Call to Engagement

This initiative embodies DIACC’s commitment to making Canada’s digital trust landscape auditable, transparent, and interconnected. We are advancing both economic innovation and public good by providing an accessible, evidence-based foundation for informed decision-making.

As this tool evolves, we invite the Canadian public, industry stakeholders, and government partners to engage with it, share insights, and provide feedback. Your perspectives will strengthen this resource and accelerate our collective progress. Please direct your feedback and inquiries to contact@diacc.ca.

We congratulate the AEC and the entire DIACC team for delivering this critical milestone in Canada’s digital trust transformation. This tool demonstrates leadership in action, which is what our nation needs as we build the trusted digital future Canadians deserve.

DIACC’s Digital Trust Adoption Dashboard

Developed by our Adoption Expert Committee (AEC)

Purpose & Intent

Provide an interactive map showing current public-sector digital trust programs offered across Canadian provinces and territories.
Enable stakeholders to see adoption, interoperability, and maturity at a glance.
Support evidence-based decision-making and identify opportunities for collaboration, alignment, and private sector engagement.

How We Collected Data

Coordinated efforts with the support of AEC members and DIACC staff.
Observable/visible data from public government sources (program names, departments, service destinations, adoption metrics).
Structured questions (developed by AEC) to ensure consistency across jurisdictions.
Targeted government consultations to validate findings and capture policy context.

Committee Role & Maintenance

Ongoing stewardship: The AEC is responsible for maintaining the dashboard’s accuracy and relevance.
Review cadence: Annual reviews, plus updates triggered by major policy, regulatory, or program changes.
Monitoring: Track government announcements, adoption metrics, and stakeholder input to keep data fresh and actionable.

Key Takeaways

Purpose: Deliver a transparent, observable data-based view of Canada’s digital trust landscape.
Methodology: Grounded in both observable public data and consultations with government representatives on the committee.
Sustainability: Maintained by the AEC with annual reviews and update triggers.

Utility: A living tool to support strategy, collaboration, and interoperability at the national level.

Further Reading:

The Dashboard: https://diacc.ca/dashboard/
Pan-Canadian Trust Framework (PCTF): https://diacc.ca/trust-framework/
Adoption Expert Committee (AEC): https://diacc.ca/the-diacc/committees/
Trust Framework Expert Committee (TFEC): https://diacc.ca/committees
DIACC Principles: https://diacc.ca/the-diacc/principles/

DIACC is Where Digital Trust Means Business

Contact us to be a part of the change you want to see, stay informed about developments in digital trust and identity verification, and learn how you can contribute to discussion drafts or become a member.


Reimagining Canada Post: From Delivering Mail to Delivering Trust

A Letter from the President

Canada Post is losing approximately ten million dollars per day. The traditional business model of delivering physical mail to every Canadian address is fundamentally unsustainable. The federal government has directed the corporation to reduce delivery frequency and fundamentally transform operations.

Crisis as Catalyst

This is a crisis. However, it’s also an opportunity that Canada risks missing if we consider Canada Post solely in terms of mail delivery.

The Hidden Infrastructure

Consider what Canada Post actually represents: a trusted institution with physical presence in virtually every Canadian community, deep expertise in verification and logistics, existing relationships spanning individuals and businesses, public accountability, and a mandate to serve all Canadians regardless of commercial viability.

These aren’t assets to be wound down. They are the foundations for building Canada’s digital trust infrastructure.

Where Private Innovation Falls Short

The private sector has pioneered remarkable innovations in digital trust and identity verification, user experience, and privacy-enhancing technologies. Still, private sector solutions may face inherent limitations: market forces drive them toward commercially valuable populations and geographies; profit imperatives can create tensions with privacy protection; and competitive dynamics resist the open standards that enable broad interoperability.

A Vision for Digital Trust

Canada Post could bridge this gap, not by displacing private innovation, but by complementing it. Imagine Canada Post operating as a privacy-preserving verification service that helps Canadians prove their identity online, confirm addresses for e-commerce and financial services, and authenticate business credentials. Imagine post offices serving as trusted in-person verification points where Canadians without smartphones or digital literacy can establish digital credentials with assistance. Imagine Canada Post providing the addressing infrastructure that enables secure digital commerce while protecting privacy.

Complement, Not Compete

This isn’t about government competing with private sector innovators. It’s about leveraging a trusted public institution to ensure universal access, serve populations that aren’t commercially attractive, maintain the physical-digital bridge that inclusion requires, and operate according to public interest principles rather than purely commercial logic.

Asking The Right Question

Canada Post’s transformation should be guided by a simple question: What role can a universally accessible, publicly accountable, and privacy-respecting institution play in Canada’s digital trust ecosystem? The answer isn’t “deliver less mail.” It’s “deliver digital trust services that complement private innovation while ensuring no Canadian is left behind.”

Joni Brennan
President, DIACC

Further Reading:

Pan-Canadian Trust Framework
Understanding Canada’s Decentralized Approach
PCTF Certification Program
What Canadians Want: Privacy, Security, and Choice
Verifiable Credentials and Privacy-Preserving Technologies

DIACC is Where Digital Trust Means Business

Contact us to be a part of the change you want to see, stay informed about developments in digital trust and identity verification, and learn how you can contribute to discussion drafts or become a member.


ResofWorld

AI-powered textbooks fail to make the grade in South Korea

South Korea’s AI learning program was rolled back after just four months following a backlash from teachers, students, and parents, underlining the challenges in embedding the technology in education.
Ko Ho-dam, a high school junior on South Korea’s Jeju Island, was at first curious to hear that the government planned to roll out artificial intelligence-powered textbooks. The program would...

Next Level Supply Chain Podcast with GS1

How Fragile is Your Supply Chain? Lessons from Resilient Companies


Efficiency works when everything goes to plan. But as disruptions grow more frequent and complex, resilience and preparation are what set strong supply chains apart.

In this episode, logistics expert John Manners-Bell, founder and CEO of Transport Intelligence, joins hosts Reid Jackson and Liz Sertl to discuss what leaders need to know about supply chain risk, technology, and balance.

With over 40 years in the industry advising organizations like the World Economic Forum, the UN, and the European Commission, John shares hard-earned lessons from real-world crises and why efficiency is not enough. 

Listeners will gain a sharper understanding of how to prepare for disruption, enhance visibility across their networks, and utilize AI and data to build more resilient operations.

In this episode, you'll learn:

How to measure the cost of supply chain risk

Why you need to prioritize resilience in supply chain strategy

How AI helps logistics leaders anticipate risks and plan accordingly

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(04:14) Why supply chain risk is everyone's problem

(06:41) Balancing efficiency and resilience for long-term success

(11:07) Why inventory alone won't save your business

(12:51) How visibility and data transform modern supply chains

(16:24) Cyberattacks, paper backups, and recovery stories

(18:18) The rise of AI and automation in logistics

(22:12) Lessons from companies that built resilience

(25:57) The mindset every future-ready supply chain leader needs

Connect with GS1 US:
Our website - www.gs1us.org
GS1 US on LinkedIn

Connect with the guests:
John Manners-Bell on LinkedIn
Check out Transport Intelligence


Blockchain Commons

Musings of a Trust Architect: Five Anchors to Preserve Autonomy & Sovereignty


ABSTRACT: How do you protect autonomy and democratic sovereignty in digital identity systems? This article suggests five foundations: protecting choice by design; building for an extended future; maintaining platform independence; requiring duties; and implementing institutional safeguards.

On September 28, 2025, Switzerland adopted the use of “electronic proof of identity,” or e-IDs, to be issued and administered by the Swiss government.

Use of the e-ID is meant to be voluntary and free of charge. However, there’s still real concern about the use of e-ID in Switzerland. The vote passed with just 50.4% of the voters in agreement. A previous vote on the same subject failed in 2021.

And, I think there’s real cause for concern.

Fortunately, I was able to talk directly about these concerns: thanks to previous work that I’d presented on “The Architecture of Autonomy” (which I’ll talk more about here in the future), I was invited to present at a meeting on October 2 for hundreds of people interested in (or concerned about) e-ID, including members of Swiss government and businesses.

The video of my talk is available, but what follows is a synopsis of my major points, focusing on how to keep digital identity systems safe.

The Unique Advantages of Switzerland

Swiss e-ID is ultimately a governmental digital identity system. That means it’s not self-sovereign: the government controls issuance and maintains the system. But that’s not disqualifying. Though I’d (obviously) prefer a self-sovereign identity system, I hope that Swiss e-ID will put us on the path to eventually produce a LESS (Legally-Enabled Self-Sovereign) system.

But for now, I think Switzerland is a place where these first steps can be reasonably taken by the government. Switzerland has a strong culture of individual autonomy. They have a constitutional principle that sovereignty resides in the people. That’s exactly what’s required if you must trust a centralized governmental entity with your identity.

So why the concern? Despite the best intentions of the Swiss government, it’d be quite easy for their new e-ID system to be subverted, much as has happened with self-sovereign identity. When I talked at the October 2 meeting, my goal was therefore to present solutions that would help to avoid potential subversion, both in Switzerland and for other governments who adopt Swiss policies and technologies without having the same philosophies of autonomy at the heart of their democracy.

I did this by presenting five “anchors” that I believe must be considered when designing a digital identity system if we want to preserve both personal autonomy and democratic sovereignty. (And of course, ensuring autonomy and sovereignty ultimately puts us on the path to self-sovereign identity.)

1. Preserve Choice by Design

Choice disappears when alternatives become second-class.

Though the Swiss e-ID promises to be “voluntary and free of charge,” this is the first place that things could go very wrong, because voluntary-in-theory doesn’t necessarily mean voluntary-in-practice: it’s possible to follow the technical requirements of such a precept without following its intent. If the use of a digital identity system is incentivized, or if critical systems become digital-only, its usage effectively becomes mandatory. But that’s not the only place that individual choice can be subverted within digital identity: the identity system itself can do so too if it’s too rigid about which individual elements of data a user can share.

Solving the first problem requires the issuer of the digital identity (in this case the Swiss government) to ensure that practices concerning the identity system aren’t coercive, and that nothing is denied to people who don’t (or can’t!) use the digital system. Solving the second requires a technical architecture focused on user agency, so that a user can choose what to share, with whom, and when. That user agency must then be supported by a great UX that makes choosing to share only what’s absolutely required the simplest answer.

2. Build a 20-Year Architecture, Not a 2-Year Product

MVP thinking optimizes for shipping, not decades of democratic evolution.

When I co-authored TLS, I knew there were issues in the spec that needed to be addressed. I expected that to happen in 3-5 years, but it took 20! That sort of extensive bureaucratic delay isn’t unusual in the world of international standardization. This is why any digital identity system needs to make sure it’s ready for the next 20 years! Fundamentally, we need to design for the future, not for a short-term shipping deadline.

To solve this requires focusing on two critical issues: data minimization and resilience. Data minimization ensures that a system always sends out the least amount of data required for a specific need. It’s important for the long-term health of a system because it ensures we have variability: we can adjust what’s sent out as new democratic rules come into play, without having to redesign the system from scratch. (It also answers the second issue of choice, mentioned above.) Resilience means ensuring that digital identities will survive under a variety of adverse conditions, including total network failure. It readies us for changing conditions in the future.

3. Maintain Platform Independence

Platforms profit from lock-in, not user autonomy.

Though Swiss e-ID is being created by the Swiss government, it’s ultimately going to be dependent upon platform vendors such as Apple and Google and their app stores. Depending on how a digital identity is administered, platforms might be able to surveil its usage, block its usage, or engage in other anti-democratic actions. A creator of digital identity must avoid this to maintain technical sovereignty.

Addressing this problem requires asserting Swiss jurisdiction (or more generally, the jurisdiction of the organizations creating the identity system) over the platforms. Not only must anti-democratic actions be bluntly forbidden, but there also must be independent oversight with enforcement power. Those enforcement powers must be very strong, as we’ve already seen that platforms are willing to pay hundreds of millions of dollars to avoid following similar rules such as GDPR.

4. Require Duties for Non-Governmental Parties

The bigger risk isn’t government surveillance, it’s commercial profiling.

Ultimately, a digital ID system is going to be beholden not just to the platforms that enable it, but also to the parties that utilize it, and most of those are going to be commercial entities. Those commercial entities could misuse a user’s data after they acquire it. This can damage commercial sovereignty.

To prevent this abuse requires setting duties on entities that use a digital ID system such as e-ID. These duties should include: purpose limitation, restricting use to specific needs; verify and forget, forbidding the storage of e-ID data; and unlinkability, preventing tracking data across services or silently “pinging” some log when usage occurs. Generally, entities using a digital identity system must recognize a duty to the user as the principal holder of the identity. Again, strong enforcement will be required.

5. Implement Institutional Safeguards

Democratic oversight of digital power.

Swiss democracy requires both empowering government to protect citizens from private sector abuse AND constraining government overreach.

As I said in the introduction to these anchors, I think that Switzerland has a government that is actually trustworthy to manage digital identity. However, there’s a big but. That might not be true for other governments that adopt their systems. In addition, the extreme regime changes we’ve seen in the United States over the last decade suggest that we must also be concerned about future governments. To ensure institutional sovereignty requires that any digital identity system protect itself not just from platforms and businesses, but also from the very entity that’s administering it!

Addressing this issue requires a number of things. First, the digital identity must remain politically independent. If it’s governmental, like Swiss e-ID, that requires things like cross-party appointments and fixed terms for directors, as well as data governance being totally separated from politics. Second, it requires that the identity governance have duties to the users, such as transparent enforcement, guaranteed human review, and service-level commitments. Third, it requires that real care be taken with revocation, as that might be the area of a digital identity where corruption could do the most damage. This should include more duties such as two-party authorization, guaranteed quick court review, and restoration of revoked identity pending appeal.

The Ultimate Vision

What does success ultimately look like for a digital identity system like Swiss e-ID?

These five anchors provide a checklist for success:

☑️ Real Choice: Digital and physical options remain equivalent.
☑️ Sustainable Architecture: Design is dynamic, supporting a 20-year future.
☑️ Technical Sovereignty: Platforms have democratic oversight.
☑️ Commercial Sovereignty: Businesses follow strict duties to users.
☑️ Institutional Sovereignty: Digital ID system ensures oversight of itself as well.

These goals help to ensure that agency over our digital identity remains with us, the people who are the principals behind those identities, and that’s my ultimate goal, whether a system is fully self-sovereign or, like Swiss e-ID, a government deployment.

If you are a member of the Swiss civil service working with e-ID, or another interested party, please feel free to mail me directly for a personal presentation and/or to answer questions on these topics.

For more, you can see my videos and slides from the October 2 presentation:

Swiss e-ID Meeting:
Slides:
Slides w/Annotations:

Tuesday, 14. October 2025

FIDO Alliance

Best Stablecoin Wallets for Everyday Use in 2025


The rise of stablecoins has transformed how we handle digital payments, cross-border transactions, and everyday financial activities in the cryptocurrency ecosystem. With stablecoins like USDT, USDC, and DAI gaining mainstream adoption, choosing the right stablecoin wallet has become crucial for anyone looking to navigate the digital economy efficiently. What makes walllet.com revolutionary is its seedless security approach. Unlike conventional wallets that require users to manage complex 12- or 24-word seed phrases, walllet.com uses institutional-grade biometric security powered by proven technologies from Apple, Google, and the FIDO Alliance.


Authenticate 2025: Day 1 Recap


By FIDO staff

Authenticate 2025, the FIDO Alliance’s flagship conference, kicked off day one on strong footing as passkey adoption continues to grow.

The first day of Authenticate 2025 was loaded with insightful user stories, sessions on how to improve passkey adoption and technical sessions about the latest innovations.

Mastercard: Reimagining Online Checkout with Passkeys

Mastercard presented their ambitious vision to bring contactless payment-level security and convenience to online transactions through passkeys. The company is tackling three major e-commerce pain points: fraud from insecure authentication methods, cart abandonment and false declines of legitimate transactions. 

“There is no secret for this audience that one-time passwords are largely insecure and subject to phishing attacks,” Jonathan Grossar, Vice President of Product Management at Mastercard said. “So this is one big problem that we’re trying to address.”

Mastercard’s approach includes linking passkeys to payment card identities through bank KYC verification, adding device binding layers to meet regulatory requirements like PSD2, and ensuring banks retain control over authentication decisions even when Mastercard acts as the relying party on their behalf.

“When you have a passkey, that’s very easy, you can use it right away, and we see the conversion is just fantastic,” Grossar said.

Passkey Mythbusters: Short Takes on Common Misunderstandings

Passkeys are a relatively new technology, and there is still a good deal of misunderstanding about them.

In an engaging session led by Nishant Kaushik, CTO of the FIDO Alliance, Matthew Miller, Technical Lead at Cisco Duo, and Tim Cappalli, Sr. Architect, Identity Standards at Okta, debunked several key misconceptions about passkeys, including:

Misconception #1. Passkeys are stored in the cloud in the clear: The session clarified that passkeys are not stored in plain text. Reputable credential managers use strong end-to-end encryption, so even when passkeys are synced through the cloud, service providers cannot access the actual keys.

Misconception #2. Passkeys lock users into specific vendor ecosystems: The panel explained that new standards like the credential exchange protocol (CXP) and credential exchange format (CXF) enable secure transfer of passkeys between managers.

Misconception #3. Phishing resistance depends solely on the relying party ID: Presenters emphasized that true phishing resistance comes from verifying the origin of authentication requests, not just matching the relying party ID. Proper server-side origin checks are essential for security (see the sketch after this list).

Misconception #4. Cross-device passkey use enables remote attacks: The panel showed that cross-device authentication relies on proximity checks like Bluetooth, which prevent attackers from authenticating remotely even if they possess a QR code.

Misconception #5. Passkeys are not suitable for enterprise use: The panel highlighted that managed credential managers can offer strong policy control and high assurance for workforce applications, and that flexible management models fit both personal and enterprise contexts.

Misconception #6. Device management is always required for secure workforce passkeys: It was clarified that organizations can provide managed credential managers that enforce policies without requiring complete device management, allowing for greater flexibility.

Misconception #7. Passkeys cannot be used in mixed cloud and on-prem environments: The discussion explained that the right identity provider solutions and federation strategies can enable passkeys across a variety of application types.
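To make the origin point in Misconception #3 concrete, here is a minimal TypeScript sketch of a server-side check on the WebAuthn clientDataJSON. The expected-origin list, function shape, and error handling are illustrative assumptions, not any particular library's API.

```typescript
// Verify the collected client data from a WebAuthn authentication ceremony.
// Phishing resistance comes from rejecting assertions produced on origins
// the relying party never serves sign-in pages from.
const EXPECTED_ORIGINS = new Set(["https://example.com", "https://www.example.com"]);

interface CollectedClientData {
  type: string;      // "webauthn.get" for authentication ceremonies
  challenge: string; // base64url challenge the server issued for this session
  origin: string;    // origin of the page that ran the ceremony
}

function verifyClientData(clientDataJSON: Uint8Array, expectedChallenge: string): void {
  const clientData: CollectedClientData = JSON.parse(new TextDecoder().decode(clientDataJSON));
  if (clientData.type !== "webauthn.get") {
    throw new Error("unexpected ceremony type");
  }
  if (clientData.challenge !== expectedChallenge) {
    throw new Error("challenge mismatch");
  }
  // The origin check the panel called out: RP ID matching alone is not enough.
  if (!EXPECTED_ORIGINS.has(clientData.origin)) {
    throw new Error(`untrusted origin: ${clientData.origin}`);
  }
}
```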

What’s New in FIDO2: The New Features in WebAuthn and CTAP

There’s a lot going on with the underlying FIDO standards.

In his session, Nick Steele, Identity Architect at 1Password, detailed the latest FIDO2, CTAP2.2 and WebAuthn updates. Steele explained how these new standards offer easier adoption, better security, and a smoother user experience for both enterprises and individuals.

Key technical improvements:

Hybrid transport for flexible authenticator connections
Signals API for better credential management (sketched below)
Conditional passkey enrollment and improved autofill UI
Stronger encryption and HMAC secret extension
Broader support for smart cards and related origins

“We really want to increase the risk signalling and the trust that enterprises can get in a single go from a passkey,” Steele said.
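As a rough illustration of the Signals API item above, the sketch below reports a server-side credential deletion back to the credential manager so stale passkeys stop appearing in account pickers. Browser support is still rolling out, the static methods are not yet in standard TypeScript typings, and the rpId here is an illustrative assumption.

```typescript
// Signal that an asserted credential is unknown to the server (e.g., the user
// deleted the passkey from their account page), so the credential manager can
// hide or remove its stale copy.
async function signalDeletedPasskey(credentialIdB64u: string): Promise<void> {
  const pkc = PublicKeyCredential as any; // Signals API statics missing from TS lib typings
  if (typeof pkc.signalUnknownCredential !== "function") return; // feature-detect support
  await pkc.signalUnknownCredential({
    rpId: "example.com",            // assumption: your relying party ID
    credentialId: credentialIdB64u, // base64url ID of the credential the server rejected
  });
}
```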

Credential Exchange in the Wild

One of the key misconceptions about passkeys is that they lock users into a particular platform. 

Among the reasons why that’s not accurate is the Credential Exchange Format effort, which was detailed in a session led by Rene Leveille, Sr. Security Developer at 1Password.

Leveille explained how the credential exchange format is designed to help password managers understand and transfer numerous credential types, making it easier for users to migrate securely between different services. He highlighted how this format, paired with a secure protocol, is the foundation for cross-platform compatibility.

Leveille outlined recent progress, including the move from early drafts to a proposed industry standard in August 2025. He discussed how both Apple and Android platforms have introduced APIs that are paving the way for seamless transfers between apps. 

Emphasizing the importance of this work, Leveille stated, “It is an extremely easy way to migrate from one credential manager to another and it is secure.”

From the Trenches: eBay

Among the earliest adopters of passkeys is eBay, which has a long history with FIDO specifications.

Ilangovan Vairakkalai, Senior Member Technical Staff at eBay, detailed his organization’s journey and how it has managed to increase adoption.

“Every percentage point we gain in Passkey adoption is another user freed from password frustration,” Vairakkalai said.

Passkey adoption among mobile and native app users has climbed to an impressive 55% to 60%, reflecting that intuitive, nearly invisible authentication is a win for users. Desktop adoption, while more modest at around 20%, is steadily rising as eBay continues to innovate and collaborate with browser and device makers.

From the Trenches: Uber

Reducing user friction is a primary reason why Uber has embraced passkeys.

Ryan O’Laughlin, Senior Software Engineer at Uber Technologies, detailed his organization’s journey to deploy passkeys as a secure and user-friendly login option across its global consumer platform.

While there was some quick success, there were also some early challenges. Despite passkeys offering faster and more secure logins compared to passwords, many users continued using traditional sign-in methods, raising concerns about adoption and the persistence of phishing risks.

To address these challenges, Uber introduced usability improvements such as clearer entry points for passkey login and proactive prompts encouraging registration. Experiments showed that enrolling users right after account sign-up or login led to a marked increase in adoption.
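As a rough sketch of what such a post-sign-in enrollment prompt can look like in the browser, the snippet below offers passkey registration right after login. The /webauthn/* endpoints and option shapes are hypothetical, and this is an illustration of the general pattern, not a description of Uber's actual implementation.

```typescript
// Decode base64url strings into the BufferSources WebAuthn expects.
function b64uToBytes(s: string): Uint8Array {
  const bin = atob(s.replace(/-/g, "+").replace(/_/g, "/"));
  return Uint8Array.from(bin, (c) => c.charCodeAt(0));
}

// Offer passkey enrollment immediately after a successful sign-in.
async function offerPasskeyAfterLogin(): Promise<void> {
  if (!window.PublicKeyCredential) return; // no WebAuthn support; skip the prompt

  const options = await fetch("/webauthn/registration-options").then((r) => r.json());
  let credential: PublicKeyCredential;
  try {
    credential = (await navigator.credentials.create({
      publicKey: {
        ...options,
        challenge: b64uToBytes(options.challenge),
        user: { ...options.user, id: b64uToBytes(options.user.id) },
      },
    })) as PublicKeyCredential;
  } catch {
    return; // user declined or the ceremony failed; offer again at a later touchpoint
  }

  await fetch("/webauthn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify((credential as any).toJSON()), // WebAuthn Level 3 serializer
  });
}
```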

The company also piloted features like selfie-based account recovery, aiming for secure, phishing-resistant options as part of its broader vision for a passwordless future.

“Passwords just don’t really work for our platform. People forget them,” O’Laughlin said. “There is a very realistic future where we don’t have passwords at all.”

From the Trenches: BankID

In Norway, the BankID system has been around for over two decades, providing a uniform authentication system for the country’s citizens.

Heikki Henriksen, Technology Partnership Manager at Stø AS (BankID BankAxept in Norway), explained that the BankID system started off with hardware devices but in recent years has moved to mobile, software-based approaches.

BankID began moving to passkeys after most users had adopted the BankID app. The transition away from SMS-based authentication finished in 2023. Passkeys were introduced quietly—users were not told about the technical change but were moved to the stronger, phishing-resistant credentials through regular app updates.

“We never bothered talking about passkeys, we got over half of the Norwegian population to use passkeys without ever using the term passkey,” Henriksen said. “People don’t know what passkeys are. They don’t need to understand it either. So they just use BankID, and for us technical people, we know that passkeys are running the tech behind it.”

Keynotes: FIDO Alliance Details the Path Forward

A highlight of every Authenticate event is the keynote address from Andrew Shikiar, Executive Director of the FIDO Alliance.

As part of his Day One keynote, Shikiar detailed the past, present and future of the organization he leads and the standards it develops.

“Our internal estimates point to over 3 billion passkeys securing consumer accounts – actual passkeys in use,” he said. “That’s a massive number, 3 billion in less than three years’ time.”

Shikiar also revealed data from a new report, the Passkey Index, which aims to quantify the impact of the technology. Among the standout figures:

An average 93% sign-in success rate using passkeys, more than double that achieved with other methods.
A 73% decrease in login time when using passkeys.
Up to an 81% reduction in login-related Help Desk incidents reported by some organizations.

No technology conversation in 2025 is complete without mention of AI and Shikiar didn’t disappoint. He noted that the FIDO Alliance is actively addressing agentic AI by launching targeted initiatives including the creation of a subgroup focused on agentic commerce, aiming to ensure secure authentication for human-authorized agents.

“We spent the past dozen years or so contemplating how to prevent bots from authenticating, and now we have to figure out how to enable them to authenticate,” he said.

Looking ahead, Shikiar stressed the need to eliminate knowledge-based recovery methods and improve the user experience. He also talked about emerging efforts in digital credentialing, with the FIDO Alliance developing foundational standards and certification programs to advance the digitization of identity documents and secure mobile credentials.

“We will create foundational specifications that are applicable to the market, building from CTAP to create a new protocol for cross device credential presentation, we’ll focus on enablement and usability,” Shikiar said.

Keynotes: Google Securing the Future of Account Management

Google’s Authenticate 2025 keynote focused on how account security and user experience are improving with the adoption of passkeys. 

With more than a billion users now signed into Google services using passkeys, it is clear these solutions are quickly moving into the mainstream. Chirag Desai, Product Manager at Google, emphasized that passkeys make the sign-in process faster and easier for users and provide new opportunities for businesses looking to enhance safety and streamline account access.

“Just as the world moved from horses and carriages to cars and now even self-driving cars, we as an industry need to help our customers do the same thing,” Desai said. “We need to help make that transition from passwords to passkeys, with minimal friction.”

Beyond passkeys for authentication, Rohey Livne, Group Product Manager at Google, addressed the critical role of digital credentials for account creation and recovery. These digital, device-bound documents offer stronger protection than emails or SMS, enabling selective disclosure and simplifying verification. They allow organizations to move beyond fragile legacy methods and create a fully secured account lifecycle.

“We’re not really solving account creation and account recovery with passkeys,” Livne said. “And so we are essentially trying to look at how the entire account lifecycle could be aided with digital credentials.”

Keynotes: Apple Details How to Get the Most Out of Passkeys

Apple is all in on passkeys. 

“Simply put, the world would be a better place if the default credential, the one that we all reached for first, was a passkey instead of a password,” Ricky Mondello, Principal Software Engineer at Apple, said.

Mondello detailed multiple approaches that Apple is using to accelerate passkey adoption including:

Account Creation API (iOS/Mac apps): Pre-fills user information (name, email/phone) to create new accounts with passkeys in one step, avoiding passwords entirely from the start.
Automatic Passkey Upgrades: Seamlessly adds passkeys to existing password-based accounts without showing upsell screens when users sign in with their password manager. Already supported on Apple platforms and Chrome desktop (see the sketch after this list).
Prefer Immediately Available Credentials: Shows users their saved credentials (passwords or passkeys) when opening an app, eliminating the “which button do I press?” problem.
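On the web, automatic passkey upgrades are exposed through WebAuthn "conditional create", which only succeeds when the credential manager is willing to mint a passkey silently after a password sign-in. A minimal sketch follows; the endpoints and option shapes are hypothetical assumptions, feature detection guards uneven browser support, and this illustrates the general mechanism rather than Apple's implementation.

```typescript
// Decode base64url strings into the BufferSources WebAuthn expects.
function b64uToBytes(s: string): Uint8Array {
  const bin = atob(s.replace(/-/g, "+").replace(/_/g, "/"));
  return Uint8Array.from(bin, (c) => c.charCodeAt(0));
}

// Attempt a silent passkey upgrade after a password-manager-assisted sign-in.
async function tryAutomaticPasskeyUpgrade(): Promise<void> {
  const pkc = PublicKeyCredential as any; // newer statics are missing from TS lib typings
  const caps = await pkc.getClientCapabilities?.();
  if (!caps?.conditionalCreate) return; // this browser cannot mint passkeys silently

  const options = await fetch("/webauthn/upgrade-options").then((r) => r.json());
  let credential: PublicKeyCredential | null = null;
  try {
    credential = (await navigator.credentials.create({
      mediation: "conditional", // no visible prompt; rejects if a silent upgrade is not permitted
      publicKey: {
        ...options,
        challenge: b64uToBytes(options.challenge),
        user: { ...options.user, id: b64uToBytes(options.user.id) },
      },
    } as CredentialCreationOptions)) as PublicKeyCredential;
  } catch {
    return; // the credential manager declined the silent upgrade
  }
  if (!credential) return;

  await fetch("/webauthn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify((credential as any).toJSON()),
  });
}
```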

The most provocative message centered on security. Mondello argued that simply adding passkeys alongside passwords doesn’t deliver true phishing resistance. Organizations must plan to drop passwords entirely for accounts with passkeys.

“The hard truth is that to actually deliver the phishing resistance benefit to any given account, all phishable methods of signing in or recovering it need to be eliminated or otherwise mitigated,” Mondello said.

Get Ready for Day 2!

Day 2 will have even more great content across multiple tracks, with no shortage of user stories. Look for user stories from TikTok, Roblox, Microsoft, Docusign and many others, alongside technical insights for implementation. Not registered? Don’t miss out! Attend remotely to access all previous sessions on demand, and join days 2 and 3 live via the remote attendee platform. See the full agenda and register now at authenticatecon.com.


Hyperledger Foundation

Besu’s Journey: Bringing Ethereum into the Enterprise

From its inception in 2019, Besu was designed as an EVM (Ethereum Virtual Machine) client for permissioned and public networks, delivering enterprise-grade capabilities across a mix of use cases. As it has matured, Besu has played an increasingly important role in the Ethereum ecosystem. 



FIDO Alliance

FIDO Alliance Launches Passkey Index, Revealing Significant Passkey Uptake and Business Benefits

Passkey Index provides a composite view of passkey utilization and business impact data from leading online service providers CARLSBAD, Calif. – The FIDO Alliance today launched the Passkey Index, revealing […]

Passkey Index provides a composite view of passkey utilization and business impact data from leading online service providers

CARLSBAD, Calif. – The FIDO Alliance today launched the Passkey Index, revealing significant passkey uptake and benefits for online services offering passkey sign-ins. Launched in partnership with Liminal, the Passkey Index provides a composite view of data from leading service providers on the adoption, utilization and business impacts of passkeys.

The Passkey Index was launched today in concert with Liminal’s Passkey Adoption Study 2025, a survey of 200 organizations either actively deploying passkeys or committed to doing so in the near future. Together, these new resources provide the most comprehensive view of passkey deployments yet, and strategic intelligence for organizations wanting to modernize and de-risk their authentication technology.

The Passkey Index is available at FIDOalliance.org and Liminal’s Passkey Adoption Study 2025 is available at Liminal.co

Passkey Index Companies Report Passkey Sign-in Rates and Benefits 

The Passkey Index comprises data from companies that have deployed passkeys over one to three years (including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and TikTok), across eight utilization and performance areas.

The Index reveals that passkey eligibility is high: FIDO Alliance member companies contributing to the Index report that an average of 93% of accounts are now eligible for passkeys. The percentage of accounts with a passkey enrolled is over a third (36%), while more than a quarter (26%) of all sign-ins now leverage passkeys. 

Passkey Index companies also reported strong business benefits with passkeys: 

Passkeys reduce sign-in time by 73% compared to other authentication methods, averaging just 8.5 seconds per login. Traditional approaches including email verification, SMS codes, and social login options took an average of 31.2 seconds.
Passkey sign-ins have a 93% success rate, compared to 63% for other methods; success rates 30 percentage points higher mean fewer failed attempts and greater throughput at critical checkpoints.
The Index also revealed that passkey adoption led to an 81% reduction in login-related help desk incidents. Reducing help desk burden allows IT and support teams to focus on higher-value issues.

“The data in the Passkey Index marks the first time we have been able to measure the actual utilization and performance of passkeys. Thanks to this data from several early-adopting organizations, we can confidently say that passkeys are available, being used, and providing quantifiable benefits to deploying organizations,” said Andrew Shikiar, CEO of the FIDO Alliance. “The FIDO Alliance intends to grow this program over time as a benefit to service providers within our membership, a guideline for newer implementers and an industry benchmark to track ongoing growth of passkey utilization over time.”

Liminal’s Passkey Adoption Study 2025 Validates Passkey Index by the Broader Industry 

Liminal’s Passkey Adoption Study 2025 complements the Passkey Index with a look at the industry outlook on passkeys. The survey of 200 IT professionals either actively deploying passkeys or committed to doing so highlights how buyers are turning to passkeys to modernize and de-risk authentication. It revealed the following key points:

Passkeys are a strategic priority that delivers, with 63% of all respondents ranking passkeys as their top authentication investment priority for the next year. The majority (85%) of those that have already adopted passkeys report strong satisfaction with both their decision to implement and the business results they’ve seen so far.
Organizations expect passkeys to deliver both ROI and risk reduction, as 63% of respondents believe strong authentication methods like passkeys can create cost savings and efficiency gains, while they are also expected to reduce risk (56%) and fraud (58%).
Passkeys deliver behavioral and business change. After passkeys had been deployed, a significant decline in password usage was reported by 43% of respondents, while the majority (89%) said more than half of their users are expected to opt in to passkeys after being prompted, demonstrating that adoption scales quickly after deployment.
Organizations are willing and ready to adopt a fully passkey-based strategy, with almost all (97%) respondents reporting that their organization is willing to fully transition to a passkey-based authentication strategy in the future. Readiness to adopt is also widespread, with 86% of respondents stating their organization’s infrastructure is already fully or mostly prepared to support passkey authentication.
They perform even better than expected. Nearly half of current implementers (49%) report adoption rates exceeding 75%, outperforming initial expectations.

Shikiar added: “It is in every company’s strategic interest to reduce reliance on passwords, and this study clearly illustrates that passkeys are doing exactly that: delivering tangible business benefits through enhanced sign-in success, improved user experience and decreased risk.”   

Passkey Index methodology

In collaboration with Liminal, the FIDO Alliance conducted a confidential survey of nine of its FIDO Alliance member organizations to gain a deeper understanding of how passkeys are being deployed across their ecosystems and the outcomes being observed. This report offers an aggregate, anonymized view of current implementation patterns, opt-in performance, utilization, and organizational efficiency gains.

Liminal’s Passkey Adoption Study 2025 methodology 

Liminal conducted a proprietary survey of authentication decision-makers to understand how passkeys are being adopted, implemented, and evaluated across digital platforms. The research focuses on 200 organizations that have already deployed passkeys or are planning to adopt them within the next two years. This study examines key performance indicators, including adoption rates, opt-in behavior, user satisfaction, implementation challenges, and buyer priorities. It offers a data-driven perspective on how passkeys are performing in the market today and where the most important opportunities for improvement and growth exist.

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.

Contact
press@fidoalliance.org


Six Months of Passkey Pledge Progress

In April we invited organizations around the world to take the Passkey Pledge, a voluntary commitment to increase awareness and adoption of passkey sign-ins to make the web safer and […]

In April we invited organizations around the world to take the Passkey Pledge, a voluntary commitment to increase awareness and adoption of passkey sign-ins to make the web safer and more accessible.

Passkey adoption is growing rapidly, with tens of billions of user accounts now equipped with the option to use a passkey instead of relying on passwords. We launched the Passkey Pledge to help rally the industry and accelerate adoption even further, helping even more organizations to realize the dual benefits of heightened security and a frictionless user experience.

When we launched the Pledge, we set out five goals to suit a range of organizations and use cases, with the aim of achieving them over the next 12 months. Over 200 companies responded to our call and took the pledge. As we reach the halfway point in this journey, there have already been some incredible success stories, and we wanted to highlight and share some of them with the community for inspiration.

Atlancube: The company’s commitment to the Passkey Pledge “accelerated our internal development and certification timelines” culminating in its product passing interoperability testing and successfully completing both FIDO2 CTAP2.1 and U2F L1 authenticator certifications. Primarily, this will help Atlancube prepare to launch a certified hardware security key that supports passkey sign-ins. It also helped increase awareness of the importance of passkeys among its engineering and business teams, strengthening cross-functional collaboration.

Dashlane: The password manager and credential security platform has upgraded the security of user passkeys it stores, by signing passkey challenges in a remote secure environment. The company has also integrated FIDO2 security keys into its product, replacing the master password with a hardware-backed secret to encrypt the user’s vault.

First Credit Union: The member-owned financial institution in New Zealand with over 60,000 members partnered with Authsignal to implement FIDO Certified passkey infrastructure. It adopted passkeys as it was the only approach that struck the right balance between security, usability and accessibility for its diverse membership base. Since rolling out passkeys, 58.4% of its members have adopted the new authentication experience, with 54.5% of all authentications now using passkeys. In addition, over 23,500 members enrolled in multi-factor authentication. Read more in the First Credit Union case study.

Glide Identity: Glide Identity has achieved FIDO certification for its new products, joining the ranks of certified providers delivering standards-based authentication solutions. This certification validates Glide Identity’s commitment to interoperability and positions the company to serve organizations worldwide seeking reliable, FIDO-compliant authentication solutions.

HYPR: Took the Passkey Pledge to help realize a public good in eliminating shared secrets and passwords. The company has already delivered on its pledge, deploying passkeys at scale to Fortune 500 enterprises and beyond, including two of the four largest US banks.

LY Corporation: Made its Passkey Pledge to contribute to the industry-wide adoption of passkeys. During the last six months the company has increased the number of touchpoints where passkey sign-in is triggered, as well as publishing educational content to improve user literacy about passkeys. This has resulted in improved passkey sign-in rates of 41%, and reduced SMS transmission costs by replacing SMS OTPs with passkeys.

NTT DOCOMO: Has made significant progress on its Pledge to demonstrate actions that measurably increase the use of passkeys by users when signing into its services. The company has continuously improved the user experience by refining messages on passkey enrollment and error pages to make them more customer friendly. NTT DOCOMO is confident of reaching its target to increase its passkey usage ratio by 10% within the year since taking the Pledge.

Secfense: Has enabled support for passkey sign-ins across enterprise environments without requiring changes to existing applications. The company has implemented large-scale passwordless rollouts in highly regulated sectors, including banking and insurance, completing projects in just a few months. These deployments replaced passwords with phishing-resistant FIDO authentication, without modifying existing systems or disrupting users, proving that full passkey adoption is possible even in legacy infrastructures.

Thales: Over the last six months, Thales has extensively promoted the benefits of passwordless authentication and passkeys to its customer base and other organizations through sponsored events, workshops, webinars and other channels. This is part of the company’s long-standing commitment to fight against phishing and improve both security and user convenience.

We’d like to extend a big thank you to all those who signed up to the pledge and for sharing an early snapshot of the progress you’ve made. We’ll provide more insights and updates as the Passkey Pledge moves into the final 6-month stretch. It’s not too late to take the Pledge this year – we’ve already seen how much can be achieved in such a short space of time. If you’ve already taken the Pledge, tell us about your progress as we’d love to share your success with others in the future.


Passkey Index 2025

FIDO has launched the Passkey Index, which provides a composite view of data from leading service providers – including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and […]

FIDO has launched the Passkey Index, which provides a composite view of data from leading service providers – including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and TikTok – on the adoption, utilization and business impacts of passkeys. It reveals significant passkey uptake and benefits for online services offering passkey sign-ins. Read the full report here.

Read the Report


The Engine Room

COMMUNITY CALL: PSYCHOSOCIAL SUPPORT & DIGITAL SAFETY

What does psychosocial support look like in the face of spyware attacks and digital security threats? It can mean adapting care to the context, listening without rushing, and building protocols that protect both dignity and data. The post COMMUNITY CALL: PSYCHOSOCIAL SUPPORT & DIGITAL SAFETY appeared first on The Engine Room.

What does psychosocial support look like in the face of spyware attacks and digital security threats? It can mean adapting care to the context, listening without rushing, and building protocols that protect both dignity and data.

The post COMMUNITY CALL: PSYCHOSOCIAL SUPPORT & DIGITAL SAFETY appeared first on The Engine Room.


ResofWorld

AI flood forecasting allows aid to reach farmers before disaster strikes

Using Google’s Flood Hub, nonprofit trial program sends early warning and unconditional cash to workers.
Farmers on Jamira, a tiny island on Bangladesh’s flood-prone Jamuna River, are used to warring with the river that’s eating their land.  Next year, they’ll have one more weapon in...

Project VRM

Gathering the MyTerms Troops

MyTerms (IEEE P7012) is on track to be ProjectVRM’s biggest achievement—and maybe the biggest thing on the Net since the Web. I’m biased, but I believe it. And that track runs through three events next week: VRM Day, on Monday October 20. IIW, the Internet Identity Workshop, from Tuesday to Thursday, October 21 to 23. […]

MyTerms (IEEE P7012) is on track to be ProjectVRM’s biggest achievement—and maybe the biggest thing on the Net since the Web. I’m biased, but I believe it.

And that track runs through three events next week:

VRM Day, on Monday, October 20.
IIW, the Internet Identity Workshop, from Tuesday to Thursday, October 21 to 23.
AIW, the Agentic Identity Workshop, on Friday, October 24.

All three are at the Computer History Museum in Silicon Valley. Register at those links. VRM Day is free. The others are relatively inexpensive.

Here is some of what’s going on around MyTerms.

The draft is complete and on track for publication early next year. But work can start in the meantime.
Consumer Reports is with us on this. Joyce and I met with them in New York on Monday. They’ll be there next week.
Sir Tim Berners-Lee, who invented the Web, devotes a chapter of his new book, This is for Everyone, to the intention economy. He also credits my book by that title (which reported on ProjectVRM progress in 2012) with inspiring his Solid Project. Joyce and I met with Tim last Tuesday as well.
Thanks to work by Iain Henderson, Liz Brandt, and others, there are allied efforts going on in Europe, most notably with MyData.
We can see good things starting to happen on the enterprise side, thanks especially to the recent writings of Nitin Badjatia. Dig When Customers Set the Terms: How the Intention Economy and ‘MyTerms’ Enable the Great Unwinding.
Kwaai and members of the open source personal AI community are on the case as well.

Iain and Nitin will also be at the events next week. So will others from the MyTerms working group, Kwaai, and other allied efforts.

We plan to have VRM Day online by Zoom (or the equivalent—we’ll let you know); but we’ll get the best results if you’re there in person.

Hope you can make it, and see you soon.

 

Monday, 13. October 2025

EdgeSecure

Creating A Mentoring Culture Centered on Joy

Creating a Mentoring Culture Centered on Joy Photography from MENTOR Newark’s Grand Opening ceremony provided by Fresco Arts Team. Photography and Curation of the MENTOR Newark facilities provided by Tamara… The post Creating A Mentoring Culture Centered on Joy appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Creating a Mentoring Culture Centered on Joy

Photography from MENTOR Newark’s Grand Opening ceremony provided by Fresco Arts Team.
Photography and Curation of the MENTOR Newark facilities provided by Tamara Fleming Photography.

In a city full of potential, MENTOR Newark is creating pathways for young people to thrive in their community and turn their dreams into reality. As the New Jersey affiliate of MENTOR, the National Mentoring Partnership, MENTOR Newark connects youth in Newark, New Jersey, to caring mentors who provide guidance, support, and positive role modeling. “Our mission starts with three words: joy, purpose, and opportunity,” shares Thomas Owens, Executive Director of MENTOR Newark. “Too often, when working with students—especially in communities like Newark—the focus is on what we’re protecting them from: harm, crime, or failure. But that approach can carry an assumption that without us, that’s where they’d end up. What we’re really trying to do is build something different. I always say, it’s like creating a garden. First, you till the soil, make sure the nutrients are there, and water it. Then, when the plants grow, your job is simple: keep watering them and give them light. That’s what mentoring should be. It’s not about saving kids; it’s about nurturing them so they can grow into who they’re meant to be.”

“MENTOR Newark is currently collaborating with the district on what the National Mentoring Partnership calls the “National Mentoring Connector,” a system designed to connect quality mentoring programs with schools and communities efficiently. This approach represents a shift from direct service to empowering others to scale mentoring impact throughout the city.”

– Thomas Owens
Executive Director, MENTOR Newark

Empowering Youth with Opportunities
For Owens, a commitment to community service began at just eleven years old in New York. “My father and his crew started tenant patrols in the housing projects, and I was the one who always tagged along,” he recalls. “I’d be in basements in The Bronx, sitting through tenant meetings with him. I was always by his side, and by the time I was fourteen, I knew all the boroughs.” That early exposure shaped a guiding belief: “If you stay committed and do the work, you can go wherever you want to go. And that same work becomes your protection and speaks for you, even when you’re not in the room.”

Owens went on to run nonprofits in New York and later became a founding member of the Eagle Academy for Young Men of Newark, the only all-boys public school in the city of Newark. “We started the school in 2012, and I remained there until my first class of sixth graders that we recruited graduated from the 12th grade. I then moved over to my current role at MENTOR Newark, formerly Newark Mentoring Movement. When I joined, I aligned the organization with the National Mentoring Partnership, and I’ve been leading it ever since. It’s given me the chance to continue this work in a way that feels deeply personal and creative. People often ask me if I’m ever going to start an art program, and I tell them, this is my art. Working with young people, building this movement, that’s my creative work.”

MENTOR Newark works with the school district, local nonprofits, and other partners to provide training, professional development, and support for the people doing the mentoring work. “If I go into a school and start a mentoring program myself, I might be able to mentor 20 kids,” says Owens. “But if I support 20 schools in building their own programs, we can reach hundreds and exponentially grow the level of mentoring around the country. This past January, we had the opportunity to take part of our team to the National Mentoring Summit in Washington, D.C. We presented our work in front of about 2,000 people during one of the plenary sessions. We showcased what we’re doing here in Newark and discussed capacity building and how to support the people and systems that make mentoring sustainable and impactful.”

The MENTOR Newark team also secured an appropriation through Senator Cory Booker’s office and approached the Newark school district with an idea. “We suggested using a portion of this money to create a Director of Mentoring Services position within the district,” explains Owens. “The collaboration of funds and shared vision with Superintendent León led to the creation of the first-ever role in the State of New Jersey dedicated solely to mentoring at the district level. We now have someone, Jermaine Blount, who works directly with the district and attends board committee meetings, and this has made it so much easier to align mentoring with the district’s everyday work. This joint approach helps both us and the school district deliver on the mission of creating joy and a mentoring culture for young people.”

“We want to encourage them to find their own joy, because my joy isn’t their joy, it’s about discovering what lights them up. We must also approach our work with intensity and purpose, while creating meaningful opportunities for joy. That’s the cycle we’re building: joy inspires purpose, purpose creates opportunity, and opportunity brings us back to joy. We want to show our students what’s possible, give them the tools and the confidence, and then let them take off on their own. Once you wind them up, they’re ready to run.”

“Too often, when working with students—especially in communities like Newark—the focus is on what we’re protecting them from: harm, crime, or failure. But that approach can carry an assumption that without us, that’s where they’d end up. What we’re really trying to do is build something different. I always say, it’s like creating a garden. First, you till the soil, make sure the nutrients are there, and water it. Then, when the plants grow, your job is simple: keep watering them and give them light. That’s what mentoring should be. It’s not about saving kids; it’s about nurturing them so they can grow into who they’re meant to be.”

– Thomas Owens
Executive Director, MENTOR Newark

Building Capacity to Expand Mentoring
The idea of building capacity within the mentoring community developed organically as MENTOR Newark worked closely with local organizations. Owens explains that many groups wanted to bring mentoring programs into schools but faced significant barriers. “When I asked why they couldn’t get into schools, the answer was often that they lacked essential components like a curriculum, background checks, or proper training. To address these gaps, MENTOR Newark stepped in to provide training, assist with background checks, and help formalize mentoring programs.”

Through a partnership with the Newark school district, MENTOR Newark could then vouch for these programs and introduce them as credible options to be integrated into schools. Although funding is often limited, Owens emphasizes that positioning these programs correctly enables them to secure their own funding over time. “MENTOR Newark is currently collaborating with the district on what the National Mentoring Partnership calls the “National Mentoring Connector,” a system designed to connect quality mentoring programs with schools and communities efficiently. This approach represents a shift from direct service to empowering others to scale mentoring impact throughout the city.”

In reflecting on what he has learned through the process of building and growing MENTOR Newark, Owens says clarity and intentionality matter. “I’ve learned to ask a lot of questions, and we must be specific and intentional about our generosity. While community interest and requests for space or support are constant, we have to make sure every decision aligns with MENTOR Newark’s core mission of building a stronger mentoring ecosystem.”

Through a collaboration with New Jersey Performing Arts Center (NJPAC), MENTOR Newark helped design a mentoring curriculum tailored for NJPAC mentors who work with younger students. “They wanted to do some small, Tiny Desk–style concerts in our main area,” shares Owens. “We also partnered with the Dodge Poetry Festival which will host two interactive events at the mentoring center that brings in poets and facilitates workshops with young people. We’re seeing more and more Historically Black Colleges and Universities (HBCUs) graduate chapters choosing to hold meetings and events at MENTOR Newark. You walk in on any given day and I’m in my office in a meeting, someone else is mentoring in another room, there's a grad chapter meeting happening down the hall, and the kids are running the whole thing.”

MENTOR Newark partnered with the Newark school district to bring in middle school students every Tuesday through Thursday and offer them lessons on graphic arts, hospitality, and the culture of HBCUs, and introduce them to peer leaders. “High school students, many of whom are part of the MENTOR Newark program, are the ones delivering those lessons, using a curriculum co-developed with support from consulting firm McKinsey & Company. Since launching in February 2025, MENTOR Newark youth have led over 40 sessions, reaching more than 1,000 middle schoolers across the city, and our high schoolers were leading the way. After the conclusion of one of the sessions, I watched our students sanitizing chairs, mopping floors, and cleaning up the space, without being asked. They don’t do it because we told them to, they do these tasks because they feel ownership. This is their space, and they take it seriously.”

“I’ve been here three years, and the space keeps getting better. It feels like it was made for high school students and we’re still adding our own creativity to it. Growing up in Newark, we don’t always get opportunities like this or get to be in spaces like I’m in now. What makes it different here is the trust. They see potential in me and have given me a new perspective on life. This means everything, because outside of here, people still treat me like a kid, even though I’m 17. Since I joined MENTOR Newark, I’ve opened up more. I’ve been applying to college classes, getting into programs, and that’s because of what I’m doing here. I’m forever grateful.”

– High School Student

A New Space to Call Home
In 2025, a partnership between MENTOR Newark and Edge began with a shared opportunity. Edge had office space available after transitioning to fully virtual, and MENTOR Newark was looking for a new home to grow its mission. “We had been working with a realtor to sublease the space without much success, until a mutual business contact introduced us to Thomas and shared details about the organization's mission and their need for a physical space,” shares Amy Olavarria, Executive Director Human Resources and Administration, Edge. “After all the normal formalities, I was able to meet with Thomas and hand him the keys to their new office space. It was an amazing feeling. I’ve met several of the young adults in the program, and they’re all so mature, motivated, and well-mannered. It’s inspiring to see, and MENTOR Newark is truly an incredible organization.”

Prior to the move, MENTOR Newark’s headquarters were located on a lower level of the building, where several students played an active role in building and renovating the space. “Our students were deeply involved in creating the previous space, so I wasn’t sure how they would connect with the new one,” admitted Owens. “But kids always surprise you. When they walked in, they immediately recognized the potential and were genuinely excited. They appreciated the new amenities, including the kitchen, private bathrooms, and other features we didn’t have before. Most importantly, they now have their own dedicated area, called the Student Office, which is exclusively theirs.”

Additional support with the initiative was provided by Ashley Mays and The Newark Alliance. The organization was instrumental in helping them negotiate the agreement with Edge. As a major partner, they’ve also provided critical support that enables MENTOR Newark to maintain their new downtown space. Owens knows it takes a village to bring an initiative like this to the finish line: “We are deeply grateful to City of Newark Mayor Ras Baraka, Newark City Council President C. Lawrence Crump, and local developer Siree Morris for their continued encouragement and advocacy for our mission and the students of Newark. Their support inspires us to keep pushing forward.”

That sense of ownership extends beyond the students. At a recent open house, the broader community showed up to explore the new space. “People were moved, and saw the students taking responsibility for the space, engaging with guests, and even leading parts of the event,” shares Owens. “There was dancing, laughter, and a strong sense of pride. Many attendees were curious how it all came together, and when they realized it came through a partnership between two organizations, it made a real impression.”

“Edge is honored to have Thomas and his organization in this space,” adds Olavarria. “Even though MENTOR Newark and Edge are two nonprofits in different industries, we share a common goal: to help and serve the people in our communities and beyond. Working with MENTOR Newark also perfectly aligned with Edge’s brand promise of CONNECTED. COMMITTED. COMMUNITY. I attended their grand opening, and the atmosphere was one of home, community, and acceptance—regardless of age or background. It was a powerful feeling of unity and peace.”

“Edge is honored to have Thomas and his organization in this space. Even though MENTOR Newark and Edge are two nonprofits in different industries, we share a common goal: to help and serve the people in our communities and beyond. Working with MENTOR Newark also perfectly aligned with Edge’s brand promise of CONNECTED. COMMITTED. COMMUNITY. I attended their grand opening, and the atmosphere was one of home, community, and acceptance—regardless of age or background. It was a powerful feeling of unity and peace.”

– Amy Olavarria
Executive Director Human Resources and Administration
Edge

Inspiring New Perspectives
For one high school student, MENTOR Newark isn’t just a place to go—it’s a place that changed how they see themselves and their future. From upgraded spaces to life-changing mentorship, the program helped them feel seen, supported, and ready to grow. “I’ve been here three years, and the space keeps getting better. It feels like it was made for high school students and we’re still adding our own creativity to it. Growing up in Newark, we don’t always get opportunities like this or get to be in spaces like I’m in now. What makes it different here is the trust. They see potential in me and have given me a new perspective on life. This means everything, because outside of here, people still treat me like a kid, even though I’m 17. Since I joined MENTOR Newark, I’ve opened up more. I’ve been applying to college classes, getting into programs, and that’s because of what I’m doing here. I’m forever grateful.”

For an 18-year-old fellow, MENTOR Newark offers more than just hands-on experience, it also nurtures self-discovery and independence. “When we were located downstairs, we had the opportunity to learn how to put up sheetrock, how to caulk, and build the space from our mentor and MENTOR Newark team member David Byre Tyre—a professional artist and designer. This program opened me up to things I never imagined doing, especially not at home or in other programs. I’m learning how to be more independent, and they tell us to ‘be selfish.’ Not in a negative way, but be selfish about your goals. Know your dreams and go after them because no one else will do it for you. Whatever we want to do in school or career-wise, they fully support our dreams. It feels like a family here and I’ve had opportunities I’d never get anywhere else.”

She continues, “We just had an event with Senator Cory Booker and were able to ask him questions face to face. As a young Black woman in Newark, that’s not something I ever expected. This place is shaping who I’m becoming. It’s showing me how to help my community and care for the people around me, and that’s the kind of leader I want to be.”

Learn more about MENTOR Newark at newarkmentoring.org.

The post Creating A Mentoring Culture Centered on Joy appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Sunday, 12. October 2025

Digital Identity NZ

DINZ Executive Council Elections & Annual Meeting 2025

Inspiring trust solutions that protect, empower, and help our people thrive At Digital Identity NZ (DINZ), we bring our stakeholders together to build confidence, connection, and opportunity. By bridging trust … Continue reading "DINZ Executive Council Elections & Annual Meeting 2025" The post DINZ Executive Council Elections & Annual Meeting 2025 appeared first on Digital Identity New Z

Inspiring trust solutions that protect, empower, and help our people thrive

At Digital Identity NZ (DINZ), we bring our stakeholders together to build confidence, connection, and opportunity. By bridging trust gaps and empowering ecosystems, we strengthen communities and open the door for New Zealanders to thrive on the global stage. As we approach the end of the year, it is time for nominations for the Council seats coming up for re-election. Each Council member is elected for a two-year term, with elections held annually, and results notified at the Annual Meeting in December.

Executive Council Nominations

There is now an opportunity to put yourself forward, or nominate someone else, for a role on the Digital Identity NZ Executive Council. This year we have vacancies for the following positions:

Corporate – Major (3 positions)
Corporate – Other (2 positions)
SME & Start-up (2 positions)

The nominees for the above positions must be from a Digital Identity NZ member organisation (including Government agencies) and belong to the same Digital Identity NZ Membership Group they are to represent on the Executive Council.  If you are unsure of your organisation’s membership category, please email elections@digitalidentity.nz 

All nominations must be entered into the online form by 5pm, Monday 3rd November 2025.

Nomination form

Digital Identity NZ Executive Council roles and responsibilities include:

Direct and oversee the business and affairs of Digital Identity NZ.
Attend monthly Executive Council meetings, usually two hours in duration (video conferencing is available).
Drive towards achieving Digital Identity NZ’s strategic plan by participating in Digital Identity NZ working groups and projects.
Represent Digital Identity NZ at industry events and as part of delegations.
Assist in managing and securing members for Digital Identity NZ.
Where agreed by the Executive Council, act as a spokesperson for Digital Identity NZ on issues related to working groups or projects.
Be a vocal advocate for Digital Identity NZ.

Benefits of joining the Digital Identity NZ Executive Council

Contribute your experience to digital identity strategy alongside other passionate people
Expand your networks
Educate and empower New Zealanders in the digital identity space

Online Voting

Voting will take place online in advance of the meeting, with the results announced at the Annual Meeting. Please refer to the Charter for an outline of Executive Council membership and the election process. Each organisation has one vote, which is allocated to the primary contact of the member organisation.

Annual Meeting

The Annual Meeting is scheduled for 10:00am on Thursday, 4th December 2025, and will be held via Zoom.

Register here

Notices and Remits

If you wish to propose any notices or motions to be considered at the Annual Meeting, please send them to elections@digitalidentity.nz by 5:00pm on Thursday, 13 November 2025.

Key Dates:

13 October: Call for nominations for Executive Council representatives issued to members
3 November: Deadline for nominations to be received
10 November: List of nominees issued to Digital Identity NZ voting members and electronic voting commences
13 November: Any proposed notices, motions, or remits to be advised to Digital Identity NZ
4 December: Annual Meeting, results of online voting announced

Background

From the beginning, we have asked that you consider electing a diverse group of members who reflect the diversity of the community we seek to support. We ask that you do so again this year.  The power of that diversity continues to shine through in the new working groups this year, particularly as we consider the importance of Te Tiriti, equity, and inclusion in a well-functioning digital identity ecosystem.

The Council has identified several areas where diversity, along with expertise in the digital identity space, could help us better serve the community. Nominations from organisations involved in kaupapa Māori, civil liberties, and the business and service sectors are particularly encouraged. We also encourage suggestions from young people within your organisations, as their viewpoint is extremely valuable and relevant to the work we perform. As an NZTech Association, Digital Identity NZ adopts its Board Diversity and Inclusion Policy, which you can read here.

The post DINZ Executive Council Elections & Annual Meeting 2025 appeared first on Digital Identity New Zealand.

Friday, 10. October 2025

The Rubric

Faster, Cheaper, and More Private: the Sequel IS Better! (did:btcr2, Part 2)

did:btcr2 is a censorship-resistant DID method using the Bitcoin blockchain as a Verifiable Data Registry to announce changes to the DID document. It improves on prior work by allowing: zero-cost off-chain DID creation; aggregated updates for scalable on-chain update costs; long-term identifiers that can support frequent updates; private communication of the DID document; private DID...

Faster, Cheaper, and More Private: the Sequel IS Better! (did:btcr2, Part 1)

did:btcr2 is a censorship-resistant DID method using the Bitcoin blockchain as a Verifiable Data Registry to announce changes to the DID document. It improves on prior work by allowing: zero-cost off-chain DID creation; aggregated updates for scalable on-chain update costs; long-term identifiers that can support frequent updates; private communication of the DID document; private DID...

EdgeSecure

The SHI TeCHS Catalog Through EdgeMarket

The SHI TeCHS Catalog Through EdgeMarket Fast Track to Technology Procurement Technology’s role in the success of public sector organizations has never been more essential. To help the community keep… The post The SHI TeCHS Catalog Through EdgeMarket appeared first on Edge, the Nation's Nonprofit Technology Consortium.
The SHI TeCHS Catalog Through EdgeMarket

Fast Track to Technology Procurement
Technology’s role in the success of public sector organizations has never been more essential. To help the community keep pace with technological change and more readily access solutions that will drive organizational success, Edge created EdgeMarket to provide safe, simple, and smart procurements that deliver positive outcomes. Among these solutions is the Technology Catalog for Hardware, Software & Services (TeCHS) through SHI, providing one of the most comprehensive and forward-thinking technology hardware, software, and services procurement vehicles available in the U.S.

In the spring of 2021, Edge conducted a competitive request for proposal process (RFP) for a technology and services catalog provider with two main goals in mind. The first was to deliver massive scope, scale, and unsurpassed value in technology and service purchasing to Edge members and co-op participants everywhere in the country. The second goal was to harness the innovative capabilities and global capacity of an IT solutions partner who could deliver truly transformative solutions, built upon their extensive catalog, deep talent, scalable logistics, and state-of-the-art facilities. “Our awarded partner, SHI, did a fantastic job presenting capabilities to facilitate ease of purchasing with scope, scale, and value, but also some really outstanding strategies and delivery capabilities for truly transformative solutions,” shares Dan Miller, AVP, EdgeMarket. “The EdgeMarket TeCHS contract offers SHI’s full line of products, services, and solutions, and it’s no surprise that this solution is one of our most popular master contract vehicles in our co-op.”

To meet the procurement needs of a greater number of institutions, Edge architected the agreement to allow for adjusted terms and customization. “We want to help our members get the solutions they need, not just another component-based contract, hence the services dimension. We also wanted to introduce any additional terms and conditions that can help an organization improve upon the baseline agreement so they can move forward with clarity and confidence.”

– Dan Miller
Associate Vice President, EdgeMarket, Edge

A Smarter Approach to Technology Deployment
Before issuing the RFP, Edge spoke with member institutions, including Rutgers University, about the frustrations of state contracts and the procurement challenges they were facing. Sue Ryan, Strategic Sourcing Manager, Information Technology Services, Rutgers, was an important source of information as Edge began to develop the RFP. “Sue provided valuable insight into the process, where organizations would commonly have to self-assemble the sourcing of elements for a strategic move,” explains Miller. “We learned through COVID that our members and Edge needed to move on a dime, making key decisions and activating on those decisions effectively. We knew the criteria for the contract would be breadth, scope, and scale of solution; including hardware, software and services. We also wanted a partner that had a vision for the future pathways, and who was making investments to deliver transformative solutions. They needed the ability to adapt to any new adverse reality or take advantage of any new opportunity. Edge was so impressed with the SHI proposal, as they far exceeded our expectations.”

The TeCHS contract provides quick and easy access to hundreds of conveniently organized hardware, software, and service categories and groups. “SHI has a great catalog, deep talent, scalable logistics that are extremely impressive, and state-of-the-art facilities that they continue to invest in and grow from,” shares Miller. “All of these assets help our members gain access to tools that can fundamentally alter how they do business and enhance their day-to-day operations.”

Partnering with their customers, SHI helps organizations take a smarter approach to technology deployment and running efficient and effective IT operations. “TeCHS has really been a visionary step forward—a 21st-century technology contract,” shares Lou Malvasi, Senior District Manager, Strategic Education, R1 Universities, SHI. “This procurement contract isn’t just focused on large research universities, it’s designed to support institutions across New Jersey and nationally. Community colleges, K–12 districts, and state and local governments can all take advantage of the same streamlined procurement and deployment model. We’re actively working to expand awareness and adoption beyond the tri-state area so more organizations can tap into innovation and collaboration opportunities.”

Unifying Technology Procurement
The TeCHS catalog was designed to provide an all-encompassing solution that makes thousands of trusted technology partners and solutions available for streamlined cooperative procurement. “The TeCHS contract has really been a visionary step forward—a 21st-century technology contract,” says Malvasi. “State contracts, in many cases, are still very archaic in how institutions are allowed to procure. For projects that involve a combination of hardware, software, and services, traditional procurement methods can create unnecessary complications. Historically, you had to buy hardware on one contract with one reseller, software on another, and then find a third services contract to pull it all together. The EdgeMarket TeCHS catalog creates a one-stop shop, not just for procurement, but also for deployment. Institutions gain one point of accountability across the entire lifecycle of a project. We’ve heard from a number of university procurement teams how this contract is accelerating internal projects that used to be slowed down by outdated procurement models. This solution is not just more efficient; it’s enabling real progress.”

Rutgers is using the TeCHS contract to simplify and unify technology procurement across their institution. “TeCHS affords Rutgers the ability to develop IT strategies and then use one contract to source all elements required to support those strategies at lower cost and greater ease than any other contract, including State contracts—which we walked away from,” explains Ryan. “The TeCHS contract also allows us to avoid all the terms and conditions that you constantly have to negotiate. Additionally, Rutgers has a catalog through our marketplace system and TeCHS affords us better pricing. Because of this contract, we were able to receive an additional layer of discounts on top of the normal discount that SHI would put on our catalog. As my RFPs become much more complicated, we need to have a contract where we can approach the design more holistically and have access to a wider breadth of technology partners.”

For organizations that are looking to bid out a complex strategy, Ryan says you can leverage this type of contract with multiple components to help create a comprehensive approach. “For us, the Edge TeCHS contract afforded us the ability to have a vehicle that we can build a strategy design, rather than just buying software and hardware. The software is always the easy piece, but this allows us to look holistically at what avenue we can use to combine all these components and successfully build out strategies. With this contract, we’ve been able to develop some of our bigger projects.”

To help make the procurement process successful for institutions, SHI also offers professional services, adding expertise and support to each stage of the technology life cycle. “Oftentimes you have to bid out the professional service or consulting piece apart from where you are buying the products,” explains Ryan. “As you build strategic projects, having a contract like TeCHS where professional services are included, makes the process much easier and more streamlined.”

“TeCHS has really been a visionary step forward—a 21st-century technology contract. This procurement contract isn’t just focused on large research universities, it’s designed to support institutions across New Jersey and nationally. Community colleges, K–12 districts, and state and local governments can all take advantage of the same streamlined procurement and deployment model. We’re actively working to expand awareness and adoption beyond the tri-state area so more organizations can tap into innovation and collaboration opportunities.”

Lou Malvasi Senior District Manager, Strategic Education, R1 Universities, SHI

Efficiently Meeting Technology Needs
SHI’s Customer Innovation Center (CIC) is located at the company’s New Jersey headquarters and allows organizations to plan, design, explore, and validate their technology needs. Along with product and integrated solution demonstrations, the CIC offers workshops and events where individuals can explore new ideas, gain insights on the latest trends, obtain hands-on experience with new technologies, and receive guidance from SHI experts. In addition, SHI operates two IT integration centers which combine hardware and software components from multiple manufacturers into ready-to-deploy rack systems. Faced with datacenters that were becoming increasingly complex, Rutgers now utilizes the SHI Ridge Integration Center to access scalable and cost-effective services, including rack and stack services, pre-installation testing, and installation activities.

SHI has been working with Rutgers on a 10-year master network refresh project and has utilized the SHI Ridge Integration Center to bundle services and put them under one SKU in Rutgers’ e-procurement catalog. “Rutgers is in the middle of a large network master plan, where lots of pieces to the puzzle are involved,” explains Malvasi. “Cisco is one piece, but there are also UPS data-center racks that they are refreshing across New Jersey. At the SHI Ridge Integration Center, we procure all the data-center equipment and bring it into our warehouse. We have quality-control experts that will actually rack and stack, and we send configuration files to the Rutgers team. Upon Rutgers’ review and approval, we rack, stack, and cable the equipment under the TeCHS contract and deliver the equipment to locations across New Jersey.”

For many R1 institutions, Malvasi says AI infrastructure is the predominant focus. “The demand for GPUs far exceeds the available supply, making it an extremely competitive space. Through SHI’s AI and Cyber Lab, schools gain access to on-premises and cloud infrastructure for testing and validation. This allows principal investigators and faculty to run real-world application scenarios before purchasing large-scale systems. We work closely with data science teams and researchers to test their applications in a six-week sprint and look at everything, including compute, power, cooling, and connectivity, to deliver a detailed output of what infrastructure is truly needed. This approach not only reduces risk and optimizes cost, but also strengthens the institution’s position when competing for large federal research grants. We want to help faculty align their technical requirements with the realities of funding and deployment.”

“For us, the Edge TeCHS contract afforded us the ability to have a vehicle that we can build a strategy design, rather than just buying software and hardware. The software is always the easy piece, but this allows us to look holistically at what avenue we can use to combine all these components and successfully build out strategies. With this contract, we’ve been able to develop some of our bigger projects.”

Sue Ryan Strategic Sourcing Manager, Information Technology Services, Rutgers University

Helping Organizations Achieve their Goals
To meet the procurement needs of a greater number of institutions, Edge architected the agreement to allow for adjusted terms and customization. “We want to help our members get the solutions they need, not just another component-based contract, hence the services dimension,” says Miller. “We also wanted to introduce any additional terms and conditions that can help an organization improve upon the baseline agreement so they can move forward with clarity and confidence.”

The TeCHS contract is available to all institutions, large or small, and SHI will gladly help any organization build out a catalog. “Even if you do not have an e-procurement platform like Rutgers, we can still set up a SHI.com website for you to purchase from using a username and login,” says Malvasi. “We can publish quotes, create landing pages, and work with facilities to understand the equipment you are buying from year to year. SHI already has TeCHS pricing integrated within our catalog team, so once this is set up for the customer, the pricing will feed into that automatically. Integration with eCommerce platforms like ServiceNow and Jaggaer is a little more work on the backend, but if you want a simple SHI.com catalog, you can benefit from the TeCHS pricing almost immediately.”

Looking for a streamlined solution for procuring the latest technology? Learn more about the SHI TeCHS Catalog at edgemarket.njedge.net/home/shi-techs-catalog.

The post The SHI TeCHS Catalog Through EdgeMarket appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Thursday, 09. October 2025

OpenID

Public Review Period for Proposed OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0 Final Specification

The OpenID Digital Credentials Protocols (DCP) Working Group recommends the following OpenID Final Specification:

OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0: https://openid.net/specs/openid4vc-high-assurance-interoperability-profile-1_0-05.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a 14-day voting period during which OpenID Foundation members will vote on whether to approve the draft as an OpenID Final Specification.

The relevant dates are:

Final Specification public review period: Thursday, October 9, 2025, to Monday, December 8, 2025 (60 days)
Final Specification vote announcement: Monday, November 24, 2025 (14 days)
Final Specification official voting period: Tuesday, December 9, 2025, to Tuesday, December 23, 2025 (14 days)

The Digital Credentials Protocols (DCP) working group page:  https://openid.net/wg/digital-credentials-protocols/.

Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “Digital Credentials Protocols (DCP)” working group on your contribution agreement), (2) joining the working group mailing list at openid-specs-digital-credentials-protocols@lists.openid.net, and (3) sending your feedback to the list. 

  Marie Jordan – OpenID Foundation Board Secretary


About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.



The post Public Review Period for Proposed OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0 Final Specification first appeared on OpenID Foundation.


FIDO Alliance

White Paper: FIDO and the Shared Signals Framework

Orchestrating Agile and Secure IAM Workflows

October 2025

Authors:

Jacob Harlin, Microsoft
Josh Cigna, Yubico
Martin Gallo, HYPR
Sumana Malkapuram, Netflix
Apoorva Deshpande, Okta

Abstract

In today’s fragmented enterprise security landscape, identity and access management (IAM) systems often operate in silos. The need for cohesive, real-time coordination across platforms is more critical than ever. This paper introduces a strategic approach that combines FIDO-based strong authentication with the OpenID Foundation’s Shared Signals Framework (SSF) to orchestrate agile and secure IAM workflows, enable stronger continuous authentication, and promote collaborative defense against identity threats.

FIDO protocols offer a robust foundation for user authentication as they leverage public-key cryptography to eliminate password-based vulnerabilities. However, authentication alone is insufficient for sustaining zero-trust principles. Once an authenticated session is established, its trustworthiness must be continuously evaluated. This broader need for continuous evaluation is where SSF comes in – enabling the secure exchange of identity and security events, such as risk signals and session revocations, across disparate systems and vendors.

This document explores how integrating SSF into IAM architectures enhances visibility and responsiveness throughout the user journey, including joiner-mover-leaver (JML) and account recovery scenarios. It also highlights how the Continuous Access Evaluation Profile (CAEP) and Risk Incident Sharing and Coordination (RISC) profiles, when layered atop FIDO2, empower organizations to make real-time, risk-informed decisions that reduce fraud and accelerate incident response.

This synthesis of FIDO and SSF represents a paradigm shift toward continuous, adaptive trust that enables organizations to move beyond static controls and toward dynamic, signal-driven security ecosystems.

Audience

This white paper is for enterprise security practitioners and identity and access management (IAM) leaders responsible for protecting the security and lifecycle of online identity and access management. Specifically, the target audience includes those whose purviews cover activity monitoring for threat detection and response, as well as IAM staff who support those goals. Additionally, IAM leadership and architects should review this document to understand the opportunities the described technologies offer and the implications of implementing them.

Download the White Paper

1. Introduction

The FIDO Authentication protocol has a proven track record of securing initial session authentication by leveraging strong public key infrastructure (PKI) based cryptography. Adoption of this technology has been a leap forward as a unified approach for secure and usable session establishment; however, the ability to maintain, monitor, and manage ongoing sessions has historically remained fractured. This challenge is exacerbated by the reality of today’s enterprise security landscape, where numerous security vendors and solutions often operate in silos with limited communication. These barriers hinder comprehensive security outcomes during adverse events, leading to localized mitigations rather than unified responses.

Shared signals offer a crucial pathway to facilitate a more holistic and effective response by providing a way to exchange security events across vendor boundaries. Ongoing management and monitoring are required to adopt the full zero-trust model. The OpenID Foundation’s Shared Signals Framework (SSF) aims to address these challenges. If you root an IAM program in a strong footing, such as FIDO-based authentication, and combine it with strong ongoing activity monitoring enabled by SSF, you can achieve substantial changes that reduce (and enable you to react to) fraud and malicious activity.

2. What is the Shared Signals Framework?

The Shared Signals Framework (SSF) standard simplifies the sharing of security events across related and disparate systems. The framework allows organizations to share actionable security events and enables a coordinated response to potential threats and security incidents. SSF is defined by the OpenID Foundation’s Shared Signals Working Group (SSWG). The SSF standards are still evolving, but evaluation of the specifications provides a clear picture of what the SSWG hopes to achieve and can inform practitioners about what can be done with these tools today. The goal of this framework is to define a common language and mechanism for communicating actionable security events in near real time, allowing systems to respond more effectively and in a coordinated way to potential threats.

SSF helps bridge gaps between identity providers, relying parties, and other services by creating a unified way for entities to notify each other of relevant changes, such as risk signals or session status updates. 

For example, Mobile Device Management (MDM) tools can transmit a device compliance change event to indicate a user’s laptop is no longer compliant with corporate policies. When this event is received by a downstream system, that service may determine that the user’s authenticated session should be terminated until such a time as the device moves back into a healthy state. 

Note: It is important to remember that SSF security events standardize and facilitate the sharing of information. They are not directives. Recipients need to determine the actions to take in case of a security event.

The SSF standard describes how to create and manage streams, which are used to deliver notification of events to the receiver using push (RFC 8935) and poll (RFC 8936) mechanisms. From a technical perspective, SSF describes using secure, privacy protected generic webhook transit with events delivered via HTTP in streams.

Software vendors can act as transmitters and receivers; however, they must establish independent unidirectional streams. Events are formatted as Security Event Tokens (SETs) (RFC 8417), and the entities involved are identified by Subject Identifiers for Security Event Tokens (RFC 9493). Additional Subject Members are also defined in the OpenID Shared Signals Framework Specification 1.0.
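
As an illustration, the following is a minimal sketch of the kind of stream configuration object the SSF specification describes, assuming push delivery per RFC 8935. The transmitter, receiver, endpoint URL, stream identifier, and requested event types shown here are hypothetical placeholders, not values from this paper:

    {
      "stream_id": "8d3f9c2b6a5e4d1c9f0b",
      "iss": "https://transmitter.example.com",
      "aud": "https://receiver.example.net",
      "delivery": {
        "method": "urn:ietf:rfc:8935",
        "endpoint_url": "https://receiver.example.net/events"
      },
      "events_requested": [
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked"
      ],
      "events_delivered": [
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked"
      ]
    }

In this sketch, the receiver has requested session-revoked events and the transmitter has confirmed it will deliver them to the named endpoint over HTTP push.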

Since SETs do not describe the content or semantics of events, the SSWG is developing two standard profiles under SSF: 
Continuous Access Evaluation Profile (CAEP): For sharing access relevant state changes like token revocation or device posture.
Risk Incident Sharing and Coordination (RISC): For sharing signals about “risky” behaviors, such as account compromise.

2.1 Continuous Access Evaluation Profile (CAEP)
To further simplify interoperability between various vendors, the SSWG has also defined the CAEP Interoperability Profile. This specification “defines the minimum required features from SSF and CAEP that an implementation MUST offer in order to be considered as an interoperable implementation”. (CAEP Interoperability Profile).

Federated systems commonly assert the login only during initial authentication, which can create security risks if user properties (such as location, token claims, device status, or org membership) change during an active session. CAEP aims to enhance the “verify, then trust” mantra by defining a common event profile to communicate such changes as they happen. For example, early proposed examples suggest CAEP events can be used to:

Tie risk signals to known identities (users and non-human identities (NHIs))
Track sessions and behavioral changes over time
Dynamically adjust access without requiring the user to re-authenticate

This list is non-exhaustive, and capabilities are expected to grow and evolve as CAEP is more widely adopted. Because CAEP is built upon SSF principles, interoperable push and poll of SETs can be sent in real-time between trusted entities. These entities can include identity providers, relying parties (RP), monitoring systems like Security Information and Event Management (SIEM) systems, MDM systems, or any security-focused software vendor. 

When an entity receives a SET, they can then evaluate the event and decide whether to revoke tokens or transmit an updated security status to other services. Monitoring systems such as MDM, endpoint detection and response (EDR)/extended detection and response (XDR), SIEMs, or any security-focused software vendor can emit/consume CAEP events. As enterprise architectures evolve, CAEP can serve as a foundational tool for zero-trust strategies, enabling continuous and adaptive access evaluation that is informed by real-time context.

2.2 Key components of the Security Event Token (SET)

At the core of SSF is the Security Event Token (SET), a JWT-based envelope defined by RFC 8417 that provides the foundational format for encoding and transporting these events.

“The intent of this specification is to define a syntax for statements of fact that SET recipients may interpret for their own purposes.” (RFC 8417)

Based on this principle, SETs provide a structured, interoperable format to convey claims (statements of fact) such as account changes, credential updates, or suspicious activity, without prescribing any particular enforcement action. This allows recipient systems to evaluate and respond to events in accordance with their own policies. Each profile (CAEP, RISC, SCIM) imposes specific constraints on the base SET and its associated subject identifiers (per RFC 9493), thereby defining clear semantics and expected behaviors for particular use cases. 

The SET itself is composed of several key claims, which together define the issuer, audience, subject, and full event context. A full description is available within the official documentation from the OpenID Foundation, RFC 8417, and RFC 9493. The following is a brief outline of these claims.

iss (issuer) – Represents the entity that issued the token, such as https://idp.example.com/ (as per SET examples). This is used by the receiving service to verify that the event originates from a trusted provider.
aud (audience) – Specifies the intended recipient of the token. Depending on the deployment, the recipient may be the relying party application, an identity provider, or another trusted service. This helps ensure that only the designated service processes the security event.
jti (JWT ID – unique event identifier) – A unique identifier for this specific event within the security stream. Helps with tracking and deduplicating events to avoid processing the same event multiple times.
iat (issued-at timestamp) – Indicates the exact Unix timestamp when the event was generated. Helps determine the event’s freshness and prevent replay attacks.
sub_id (subject identifier) – Structured information that describes the subject of the security event.
events (security event information) – The core claim that contains details about the specific security event. This is a mapping from an event type identifier (for example, https://schemas.openid.net/secevent/risc/event-type/account-disabled) to an event-specific JSON object that typically includes attributes such as subject, contextual metadata (for example, reason, timestamp, and risk level), and any profile-defined parameters required to interpret and act on the event.
event_timestamp – Represents the date and time of an event, expressed as a NumericDate value.
txn (transaction identifier, OPTIONAL) – Represents a unique transaction value. Used to correlate SETs to singular events.
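
Assembled into a single token, these claims might look like the following minimal sketch of a decoded SET payload, using the account-disabled event type cited above. The issuer, audience, subject, transaction, and identifier values are hypothetical placeholders:

    {
      "iss": "https://idp.example.com/",
      "aud": "https://rp.example.net/",
      "jti": "756e69717565206964656e746966696572",
      "iat": 1760000000,
      "txn": "8675309",
      "sub_id": {
        "format": "iss_sub",
        "iss": "https://idp.example.com/",
        "sub": "99beb27c-c1c2-4955-882a-e0dc4996fcbc"
      },
      "events": {
        "https://schemas.openid.net/secevent/risc/event-type/account-disabled": {
          "reason": "hijacking"
        }
      }
    }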

2.3 Risk Incident Sharing and Coordination (RISC)

While CAEP defines a standardized messaging transport for communicating session-related state changes between trusted parties during active sessions, additional security events that might compromise an identity outside of a single session must also be addressed. This is where Risk Incident Sharing and Coordination (RISC) comes into play. 

RISC is designed to share security events that are related to potential threats, credential compromises, and account integrity across federated systems. RISC hopes to define profiles that enable each recipient system to assess and act upon security events based on their unique risk policies, rather than mandating specific enforcement actions. 

RISC SETs might also empower standards compliant systems (via the System for Cross-Domain Identity Management (SCIM) standard for example) to communicate “statement of fact” assertions, with the goal to enable simpler automation and coordination across an asynchronous federated environment.

It is important to remember that RISC, like CAEP, suggests a framework of profiles and roles for platforms to leverage.

SETs only state provable assertions; they do not issue specific directives. Receivers may need to leverage profiles that are not yet established in order to always take prescribed actions based on SETs received from transmitters. However, those profiles need to be understood by the transmitter/receiver pair. The ultimate goal is to enable more automation and faster reactivity across sessions through the sharing of SETs.

3. SSF and user journeys

When you plan for implementation of IAM tools and capabilities, it is a common practice to consider the user journeys that need to be supported. These user journeys include day-to-day authentication and authorization processes, as well as more impactful (but less common) JML and recovery processes. Both CAEP and RISC methodologies can be used to enhance these workflows, building off strong authentication backed with FIDO2. With FIDO2 you are able to make decisions about users with certainty and with SSF you can track actions and react more quickly and accurately based on identity signals and user behaviors.

While the adoption of SSF is expected to grow, it will be up to the individual practitioner or organization to best determine how to leverage these capabilities. At the time of writing, the proposed workflows (as well as many of the transmitter and receiver interfaces) all need to be manually created and configured. Instead, it is recommended that you evaluate how these suggestions can enrich existing workflows and request delivery of these capabilities from your vendors and implementers.

3.1 Onboarding (joiners) and upgrading (movers) access

One journey that affects every end user is the joiner, or onboarding, process, which generally establishes accounts for a user before they start at an organization. Accounts are created and entitlements are granted, with the expectation that they will not be used immediately. This timeframe is normally documented as “Day Zero -1.” The exact window varies depending on organizational practices, but in order to ensure a speedy onboarding process, most mid-sized to large organizations follow this pattern.

The risk here is that it is easy to perform open-source intelligence (OSINT) gathering and enumerate accounts that fall into the “pending start day” category. The current set of IAM tools may lack the intelligence or agility to dynamically enable and disable accounts based on a strong identity proofing workflow, and business demands of “hitting the ground running” often mean that these accounts are active and unmonitored before a user starts.

Profiles built on the Shared Signals Framework (specifically RISC) can be leveraged to enhance this process. You can develop workflows that use the successful establishment of FIDO credentials via strong ID proofing workflows, or initial detection of the use of pre-registered FIDO credentials, to trigger account enablement via IAM systems. With this workflow, accounts can sit inactive during the Day Zero -1 timeframe and will only be dynamically activated once a successful strong authentication has been detected.

Role or access changes (known as mover workflows) can follow a pattern similar to that of the onboarding enhancement. New accounts can be created in a disabled state, awaiting specific triggers (such as date and time) in conjunction with authentication. RISC also opens the door to more dynamic access elevation, where the signaling framework can be used to trigger approval workflows in IAM ticketing and provision systems to temporarily grant higher privileges or roles.

Creative use of the shared signals frameworks, paired with a FIDO backed Root of Trust (RoT), can strengthen and enhance joiner and mover user journeys. These emerging techniques should be evaluated and adopted in a timely manner, to raise the bar for all IAM practitioners.

3.2 Device recovery/replacement

Another common user journey is establishment of a user on a new device. While it is similar to the onboarding journey, pre-existing permissions, accounts, and roles add complexity to this journey. This is also a common area of attack as attackers can abuse this workflow to enroll their own devices or otherwise compromise the pre-existing identity via unsecured channels. 
A best practice for device loss workflows is to lock down access as soon as a lost device is reported. You can leverage RISC signals to inform RISC consumer systems of the new device registration activity as part of an automated workflow that helps disable access as needed. Once a new device is issued, an identity can be re-established on the new device with a FIDO2 authentication workflow. The workflow can then leverage RISC signals to have IAM provisioning systems re-enable access.

Similar workflows can be leveraged if the FIDO2 authenticator needs to be replaced. This includes the loss of a device that contains a synced credential or a hardware token that contains a device-bound credential. Identity proofing workflows need to be leveraged to securely re-establish identity before a new credential can be bound to a user’s account. After this workflow is complete, RISC signals can be leveraged to re-enable sensitive access that was disabled when the credential was reported missing.

3.3 Offboarding (Leaver events in JML)

Offboarding workflows fall into two categories: planned and unplanned. Planned offboarding remains largely unaffected by SSF, though it is possible to leverage CAEP signals to trigger termination of any active sessions after the user signs off for the last time. SSF is more useful for unplanned offboarding events. A workflow can evaluate CAEP signals, and any open sessions can be identified and ended. As part of this workflow, FIDO credentials should be de-associated from the user’s accounts, ensuring that the user can no longer log in. Both of these controls can ensure that unplanned offboarding events are well controlled and executed across the board.

3.4 Session tracking

Within the scope of modern identity security, session tracking plays a pivotal role in maintaining the integrity and security of user sessions. While authentication methods like FIDO effectively protect the initial login, they are significantly enhanced when complemented by session tracking. This involves the continuous monitoring of a session’s behavior and context throughout its entire lifecycle, from creation to termination. Such ongoing evaluation is crucial for identifying risk signals that may indicate potential security threats, such as session hijacking or unauthorized access attempts.

Platforms within a networked environment use CAEP events to send a range of signals to an authentication system responsible for managing sessions. You can utilize session tracking data so that as events are received, the authentication system can implement appropriate security measures, such as enforcing step-up authentication or terminating sessions. These events originate from multiple, diverse platforms, which each act as both transmitters and receivers within the SSF. This interconnected network offers valuable insights into potential security threats, enabling each platform to contribute to and enhance session tracking across the entire network.

To illustrate the impact of session tracking, we will explore use cases that compare an environment that uses only WebAuthn authentication with an environment that uses an enhanced approach that incorporates continuous authentication and shared signals. This comparison highlights how continuous session tracking can significantly bolster security and mitigate risks. 

The following table describes some possible ways to design these workflows. The table outlines the traditionally observed behaviors of systems and how security policies can be enhanced with the inclusion of SSF capabilities. When compared side by side, you can see the advantages provided by the adoption of SSF signaling. 

User Journey – Adding continuous access and session evaluation to a high assurance authentication.

The comparison below uses four columns: Scenario; FIDO (point-in-time authentication); FIDO + SSF (continuous assessment and signals); and CAEP/RISC events.

Scenario: Initial authentication
FIDO: User logs in using WebAuthn.
FIDO + SSF: User logs in using WebAuthn.
CAEP/RISC events: N/A

Scenario: Session establishment
FIDO: Session is established and remains valid until expiration or logout.
FIDO + SSF: Session is established with continuous monitoring enabled. If a disallowed event signal is received (for example, credential compromise, risk alert, or policy violation), the session can be revoked or re-evaluated immediately instead of waiting for expiration or logout.
CAEP/RISC events: CAEP: session-established

Scenario: Threat intelligence alert
FIDO: No visibility or action.
FIDO + SSF: A threat intelligence system (for example, EDR/XDR or an anti-phishing platform) watches for a phishing campaign targeting a user group. If a phishing campaign is detected, the system acts as a transmitter and sends a RISC credential-compromise event to the Identity Provider (IdP), which functions as the SSF receiver in this scenario. Upon receiving the event, the IdP correlates the identity, flags the session, and revokes it as necessary. The IdP can then act as a transmitter and issue a CAEP session-revoked event to other downstream SSF receivers, such as SaaS applications or partner services. This enables receivers to take appropriate actions (for example, terminating sessions or prompting re-authentication) based on the trust change initiated by the IdP.
CAEP/RISC events: RISC: credential-compromise; CAEP: session-revoked

Scenario: Session hijack or replay (post threat alert)
FIDO: Session remains valid and an attacker can reuse the stolen session token (for example, via fixation or XSS), as FIDO-only systems do not have post-authentication visibility.
FIDO + SSF: Signals (for example, from threat intelligence platforms) elevate risk, and those events are transmitted to receivers like the IdP, which then terminates the session. This prevents the reuse of any compromised session tokens.
CAEP/RISC events: CAEP: risk-level-changed

Scenario: Step-up authentication (post threat alert)
FIDO: Not triggered.
FIDO + SSF: After receiving a RISC credential-compromise event from a threat intelligence system, the IdP flags the session as high-risk and prompts the user to authenticate using FIDO WebAuthn. Once the user completes strong re-authentication, the IdP issues a CAEP assurance-level-change event to reflect the increased assurance level. This event can also be transmitted to downstream consumers such as audit platforms or relying parties, enabling consistent assurance tracking.
CAEP/RISC events: CAEP: assurance-level-change

4. Filling gaps – complements to FIDO and conclusion

As demonstrated by the use cases outlined above, both CAEP and RISC pair well with FIDO authentication standards to improve overall security postures and practices for enterprises and organizations. These cases only cover the largest areas where these frameworks should be adopted and integrated into current tools and workflows. In addition to our recommendation to implement these standards, a robust and well-planned SSF/FIDO program can provide buffers/flagging against potential false-positive signaling and make the tasks of attributing improper activity and detecting rogue actors easier for Network Operations Centers (NOCs).

SIEM systems rely on credible data from endpoints. SSF helps to normalize the structure of many tasks that historically have required bespoke connectors. Shared signals (such as CAEP session state changes or RISC credential-compromise events) can add clarity and deeper insight into principal (the user or entity associated with the event) and system behavior. Additionally, SSF-enabled SIEM or IAM tools can be leveraged to strengthen current step-up authentication practices, providing native ways to track high-privilege interactions without full reliance on single-point-of-failure third-party systems.

In the past, passive signals were used for dark web monitoring. With shared signals coordination, we now have the capability to send notifications and cycle credentials automatically for systems that do not support strong authentication. Accounts with leaked credentials can either be auto-disabled and shunted to a reset workflow backed by strong FIDO authentication, or automatically rotated with credentials that are vaulted and retrievable with IDV or FIDO authentication. Stolen credentials may not be limited to usernames and passwords; they can also include stolen synced passkeys and/or certificates. CAEP can be leveraged to communicate out-of-context credentials, and the shared signals should be leveraged as part of a risk-based authentication workflow.

CAEP, RISC, and FIDO provide a risk-averse way to enable federated login. Implementation of both enhanced session tracking and strong authentication creates a workflow in which external users can leverage federated login processes and security teams can more closely monitor and attribute activity and behavior. In the Customer Identity space, these enhanced signals can provide more secure ways to allow end users to authenticate using their existing trusted identity provider accounts (for example Google, Apple or enterprise Identity Providers) instead of creating new local credentials, through enhanced session tracking and strong, phishing resistant authentication.

When practitioners and vendors embrace RISC and CAEP frameworks for signaling, they strengthen not only their own environments but also the broader information security ecosystem. A common, interoperable signaling language increases the ability of systems across organizational boundaries to track and correlate user and process activity, detect inappropriate behavior, and respond consistently. In this way, the adoption of SSF moves security practice toward a more collaborative, standards-based model that prioritizes shared defense and ecosystem resilience. When SSF is put into practice, it enables external entities to be better informed in real time, improving collective security and ensuring that end users are more effectively protected.

5. SET examples

This section contains several mockup examples of the makeup of SETs. These are provided to add clarity to the contents and capabilities of each component of the SSF. They describe the information systems can expect to receive and what data points can be included in a token.

5.1 CAEP example tokens

CAEP provides a standardized way to communicate access property changes in real time. It defines Security Event Tokens (SETs), which are sent by transmitters using the SSF framework. Upon receiving a CAEP event, the receiver can dynamically adjust access permissions, which reinforces zero-trust security principles and ensures security decisions remain context aware and adaptive. 

The following are examples of key CAEP Security Event Tokens (SETs).

5.1.1 Session revoked

Session revoked: Indicates an active session has been terminated

Event transmission example.
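
As an illustrative sketch (not a normative example), a session-revoked SET payload might look like the following; the issuer, audience, subject, and timestamp values are hypothetical placeholders:

    {
      "iss": "https://idp.example.com/",
      "aud": "https://sp.example.net/caep",
      "jti": "24c63fb56e5a2d77a6b512616ca9fa24",
      "iat": 1760000000,
      "sub_id": {
        "format": "iss_sub",
        "iss": "https://idp.example.com/",
        "sub": "jane.smith@example.com"
      },
      "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
          "event_timestamp": 1759999940
        }
      }
    }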

5.1.2 Credential changes

Token claims change: Signals changes in token claims such as roles, entitlements, and group memberships that affect access control.

Credential change: Signals that a user’s credentials have been changed (for example, deleted, updated, created, or revoked). Examples of credentials include passwords, fido2-platform, and fido2-roaming. 

Event transmission example
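
As an illustrative sketch, a credential-change SET payload announcing a newly created FIDO2 roaming credential might look like this; the issuer, audience, subject, and friendly-name values are hypothetical placeholders:

    {
      "iss": "https://idp.example.com/",
      "aud": "https://sp.example.net/caep",
      "jti": "07efd930f0977e4fcc1149a733ce7f78",
      "iat": 1760000000,
      "sub_id": {
        "format": "iss_sub",
        "iss": "https://idp.example.com/",
        "sub": "jane.smith@example.com"
      },
      "events": {
        "https://schemas.openid.net/secevent/caep/event-type/credential-change": {
          "credential_type": "fido2-roaming",
          "change_type": "create",
          "friendly_name": "Jane's USB authenticator",
          "event_timestamp": 1759999940
        }
      }
    }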

5.1.3 Assurance level or compliance change

Assurance level change: Indicates that the assurance level of user’s authentication has changed, impacting session security.

Device compliance change: Signals a change in the security posture of a user’s device. For example, a previously compliant device is now non-compliant.

Transmission event for device compliance example.
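
A sketch of a device-compliance-change SET payload follows; the MDM transmitter, receiver, and device subject shown are hypothetical placeholders:

    {
      "iss": "https://mdm.example.com/",
      "aud": "https://idp.example.com/ssf",
      "jti": "a9b1fd5e2c8d4e01b7c3f6a2d4e8c901",
      "iat": 1760000000,
      "sub_id": {
        "format": "iss_sub",
        "iss": "https://mdm.example.com/",
        "sub": "device-4f2a9c"
      },
      "events": {
        "https://schemas.openid.net/secevent/caep/event-type/device-compliance-change": {
          "previous_status": "compliant",
          "current_status": "not-compliant",
          "event_timestamp": 1759999940
        }
      }
    }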

5.2 RISC example tokens 

The following examples show the key RISC SETs.

5.2.1 Account credential change required

Indicates an event requiring a credential update for the subject, typically due to detected compromise or reuse. For example, this helps prevent credential stuffing attacks across federated accounts. 

5.2.2 Account enabled

Notifies that a previously disabled account has been re-enabled. This allows relying parties to reinstate access where appropriate (for example, after resolving a false positive).

5.2.3 Account purged

Notifies that the subject’s account has been permanently deleted and should no longer be recognized by relying parties.

5.2.4 Account disabled

Notifies that the subject’s account has been disabled and is no longer accessible. This helps prevent unauthorized access (for example, after fraud detection or HR termination).

Transmission event for account disabled for fraud detection.
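
As a sketch, an account-disabled SET payload for a fraud (hijacking) scenario might look like the following; the issuer, audience, and subject values are hypothetical placeholders, and the reason code follows the RISC profile:

    {
      "iss": "https://idp.example.com/",
      "aud": "https://rp.example.net/risc",
      "jti": "3d0c3cf797584bd193bd0fb1bd4e7d30",
      "iat": 1760000000,
      "sub_id": {
        "format": "email",
        "email": "user@example.com"
      },
      "events": {
        "https://schemas.openid.net/secevent/risc/event-type/account-disabled": {
          "reason": "hijacking"
        }
      }
    }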

5.2.5 Identifier changed/recycled

Notifies when a user’s identifier (for example, email or username) has changed or is reassigned. Helps prevent unauthorized access using outdated identifiers.

6. Document history

Change: Initial publication
Description: White paper first published.
Date: October 2025

7. References

Internet Engineering Task Force (IETF). (2020, November 30). Poll-Based Security Event Token (SET) Delivery Using HTTP. IETF Datatracker. https://datatracker.ietf.org/doc/rfc8936/

Internet Engineering Task Force (IETF). (2020, November). Push-Based Security Event Token (SET) Delivery Using HTTP. IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc8935

Internet Engineering Task Force (IETF). (2018, July). Security Event Token (SET). RFC 8417. IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc8417

Internet Engineering Task Force (IETF). (2023, December). Subject Identifiers for Security Event Tokens. IETF Datatracker. https://datatracker.ietf.org/doc/rfc9493/

OpenID. (2025, August 29). OpenID Continuous Access Evaluation Profile 1.0. OpenID. 
https://openid.net/specs/openid-caep-1_0-final.html 

OpenID. (2024, June 25). CAEP Interoperability Profile 1.0 – draft 00. OpenID. https://openid.net/specs/openid-caep-interoperability-profile-1_0-ID1.html

OpenID. (2025, August 29). OpenID RISC Profile Specification 1.0. OpenID. 
https://openid.github.io/sharedsignals/openid-risc-1_0.html

OpenID. (2025, August 29). OpenID Shared Signals Framework Specification 1.0. OpenID. https://openid.net/specs/openid-sharedsignals-framework-1_0-final.html


Hyperledger Foundation

Transitioning Hedera CLI to Hiero CLI: Embracing Open Source and Extensibility

The Hedera command-line interface (CLI) is undergoing a significant evolution. We are excited to announce that the Hedera CLI is transitioning to the Hiero CLI, reflecting its new home under Linux Foundation Decentralized Trust’s (LFDT) Hiero project. This change aligns the tool with Hedera’s broader move to open governance and sets the stage for a more community-driven future.



OpenID

OpenID Foundation Applauds Western Balkans Digital ID Plan


Joint Statement on Achieving Interoperable Digital ID Wallets and Trust Services at the 6th Western Balkans Digital Summit

The Western Balkan governments welcomed regional experts, ministers, European Union representatives, the World Bank, and the OpenID Foundation, amongst other global experts, to take part in the 6th Western Balkans Digital Summit on October 1st and 2nd. A key accomplishment of the event was the six Western Balkan countries agreeing to the Joint Statement on Achieving Western Balkans Six Interoperable Digital ID Wallets and Trust Services: “By agreeing to implement regionally interoperable digital identity wallets, harmonise trust services, and strengthen regulatory and technical capacities, the Western Balkans Six committed to advancing secure and seamless digital access for citizens and businesses, in line with EU eIDAS 2.0 standards.”

The OpenID Foundation applauds the Joint Statement and the six Western Balkan nations’ intent to prioritize and pursue interoperability with each other and with Europe, and their foresight to align with EU eIDAS 2.0 standards. It is worth noting that the EU eIDAS 2.0 standards, in turn, point to international standards from the OpenID Foundation (including OpenID for Verifiable Presentations 1.0, OpenID for Verifiable Credential Issuance 1.0, and the High Assurance Interoperability Profile 1.0), as well as to standards from peer standards bodies such as ETSI, ISO/IEC, FIDO, W3C, the IETF, and the Cloud Signature Consortium, among others.

The OpenID Foundation’s Vice Chair Dima Postnikov took part in the mainstage panel on “Digital Wallets – Advancing Public Sector Services through Secure and Efficient Solutions.”  

The panel was moderated by Vasko Kronevski, CEO of Nextsense, and Dima was joined by peer experts Machiel van der Harst, CEO at Tech5; Vojislav Popovic, Senior Expert on Digital Connectivity, RCC; Edna Katadza, SEED+ Project Manager, Central European Free Trade Agreement (CEFTA); Ruth Puente, Founder and Executive Director, Open Wallet Foundation; Svyatoslav Senyuta, Government Solutions Regional Leader, VISA; and Christopher Tullis, Senior Digital Development Specialist, World Bank.

Digital wallets, interoperability, and cybersecurity were key themes present in this panel, as well as in many other presentations and conversations during the Summit. As OIDF Vice Chair Dima Postnikov shared on stage, “Don’t do it alone. By working together, by partnering with the industry, we can deliver digital trust infrastructure that is functionally better and is secure and interoperable.”

As Executive Director, Gail Hodges affirmed, “The OpenID Foundation stands ready to support the Western Balkan countries, and their partners such as the World Bank and the EU, in their efforts to achieve this ambitious but vital regional digital identity wallet program. The Foundation will continue to work not only with the Balkans, but with our standards body peers and non-profit peers to ensure that the Western Balkan policies can be delivered through the underlying standards and conformance tools.” 

This two-way collaboration between countries and standards bodies helps ensure that all countries and their residents and businesses can equally benefit from secure, privacy-preserving, interoperable, and scalable digital identity solutions. In addition to the Foundation’s specifications, the Foundation also offers safe spaces for ecosystem experts to convene, such as the Ecosystem Support Community Group. This Community Group is designed for government and ecosystem decision makers to share best practices to inform their ecosystem and implementation design, and evolve their roadmaps over time. 

We are grateful to the Ministry of Digital Transformation of Macedonia and Minister Stefan Andonovski for hosting the summit and for extending the invitation to the OpenID Foundation to participate, and to Goran Vranic and Stela Mocan at the World Bank and Bojana Naumoska from the Ministry of Digital Transformation of Macedonia for bringing the OpenID Foundation into this excellent event.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net

The post OpenID Foundation Applauds Western Balkans Digital ID Plan first appeared on OpenID Foundation.

Wednesday, 08. October 2025

Blockchain Commons

Musings of a Trust Architect: The Gordian Club


ABSTRACT: Unencrypted data isn’t safe. Centralized servers aren’t reliable. Gordian Clubs offer an alternative: the autonomous cryptographic object (ACO). Self-contained objects protected by cryptography can be passed around with minimal possibility of surveillance or censorship. It’s great for a number of use cases where infrastructure is unreliable, from work in politically unstable regions to disaster response.

Imagine: A reporter maintains a list of sources and the information they provided for an article critical of the federal government. This is a requirement for fact-checking any serious news story or for later defending it in a court of law. But the federal government illegally seizes the list and jails the sources.

Imagine: A protest group uses a supposedly secure messaging app to coordinate, but the government threatens the app store, who substitutes a version that records all of the information that should be encrypted. The government then begins arresting participants at home, before the protests even begin.

Imagine: An immigrant flees a totalitarian regime. They carry with them a digital cache of their identity credentials, which will be necessary to immigrate elsewhere. But a border patrol catches them exiting their country and confiscates their records. The government uses their control of internet infrastructure to block future attempts to verify the credentials, which all phone home. The emigrant is let go, but now not only cannot immigrate, but will also have problems proving their identity in their own country.

These stories are unfortunately no longer restricted to problematic countries on the edge of global democratic society: authoritarianism is spreading across the entire world. In the United States alone, unwarranted searches and seizures, violations of free speech rights, and illegal use of military forces for domestic peacekeeping are on the rise. As a result: unencrypted data is no longer safe, because we can’t be certain it won’t be illegally seized; centralized services are no longer trustworthy, because they have shown that they will bow to dictatorial whims; and centralized servers are no longer reliable, because their usage could be censored through infrastructural control.

Protecting data, especially in a world of services that profit off of user content, is a serious problem that I’ve long struggled with. Unfortunately, the problem is coming to a head. In a globally connected society, we can no longer trust data, servers, or services that are outside of our personal control. New solutions are required, not just to ensure our human rights, but to ensure our personal safety.

The Rise of Autonomy

Fortunately, Bitcoin trailblazed the path for another way. Bitcoin protects a user’s assets (and their ability to transact) with decentralized protocols that depend on math rather than the fiat of whatever entity controls a server or service. But Bitcoin is largely limited to protecting “value transfer.” There’s a lot more ground to cover.

That’s where Blockchain Commons’ newest technology comes into play: the autonomous cryptographic object (ACO). It can protect many other sorts of digital interactions, and it pretty much does what it says on the label.

The Power of Autonomy: Autonomy means the ability to make your own choices: self-government. It’s a word that should be right up there with privacy and self-sovereignty as a fundamental digital right. But to truly maintain self-control requires you not to be dependent on external entities. That’s the core of the “A” in ACO.

This might be the most important part of an ACO because it creates a number of fundamental advantages:

Unblockable Access. No server or platform dependencies.
Perfect Privacy. No logs or tracking.
Disaster Resilience. Available during infrastructure failure.
Censorship Resistance. No fiat controlling access.

The Power of Cryptography: Meanwhile, cryptography is the math. The “C” in ACO says that your autonomous control of the object is dependent upon set mathematical rules rather than the arbitrary decision of some external force. It’s what allows you to escape the fragility of centralized servers: math doesn’t bow to authoritarian dictates.

The Power of Objects: Finally, the “O” in ACO isn’t just a neutral descriptor. It says that an ACO is a discrete (and, yes, autonomous) thing that can be stored or passed around as users see fit, without the need for a specific network. Calling it an object differentiates it from a client or a server or something else dependent on a global communication infrastructure.

Using ACOs is a paradigm shift. Traditionally, access control was at the whim of administrators. With ACOs, access control (and the information they protect) is determined by the pure math of cryptography instead. It’s a change from the serfdom of asking “Mother, May I?” to the agency of saying “Yes, I Can.” It creates infrastructure that you control, rather than infrastructure that controls you.

Join the Gordian Club

I don’t want ACOs to just be a theory, so I’ve been working in recent months to create a working example of an ACO at Blockchain Commons: the Gordian Club.

A Gordian Club is an ACO built on Gordian Envelope (along with the rest of the Gordian Stack). It allows for the storage of credentials, data, or other information in an organized and protected way. Access control is managed by Permits.

Envelope. Gordian Envelope allows for the “smart” storage of information as a recursive set of semantic triplets.
Permit. Envelopes can be encrypted, with permits allowing for the decryption of that data in a variety of ways.
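To make the permit idea concrete, here’s a toy sketch: the data is encrypted once under a random content key, and each permit wraps that same key for a different access path. This is deliberately simplified, with XOR standing in for real public-key/AEAD encryption, and none of these names come from the actual bc-envelope or clubs-cli-rust APIs.

```rust
// Toy illustration of permits: the content is encrypted once under a
// random content key, and each permit wraps that SAME key for a different
// access path. XOR is a stand-in for real encryption; all names here are
// hypothetical, not the bc-envelope API.

fn xor(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
}

struct Club {
    ciphertext: Vec<u8>,          // the envelope's encrypted content
    permit_for_alice: Vec<u8>,    // content key wrapped to Alice's key
    permit_for_password: Vec<u8>, // content key wrapped to a derived key
}

fn seal(content: &[u8], content_key: &[u8], alice_key: &[u8], pw_key: &[u8]) -> Club {
    Club {
        ciphertext: xor(content, content_key),
        permit_for_alice: xor(content_key, alice_key),
        permit_for_password: xor(content_key, pw_key),
    }
}

fn main() {
    let (content_key, alice_key, pw_key): (&[u8], &[u8], &[u8]) = (b"K1", b"A9", b"P7");
    let club = seal(b"meeting at dawn", content_key, alice_key, pw_key);

    // Either permit recovers the same content key, hence the same data:
    let key_via_alice = xor(&club.permit_for_alice, alice_key);
    assert_eq!(xor(&club.ciphertext, &key_via_alice), b"meeting at dawn");
}
```

The design point is that there is one payload but several independent doors into it, which is what lets a single Club serve key-holders, quorums, and password-holders at once.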

A Gordian Club’s permits allow the encoding of either read or write permissions. These permissions can be simultaneously linked to public keys, to XIDs, and to secret shares, because envelope permits can enable different means of access to the same data. Permission can also be delegated, using cryptographic ocaps made possible by the Schnorr signatures at the heart of Gordian Clubs.

The permissions and data are published in an initial Edition of the Gordian Club. But that’s just the first step. Thanks to the write permissions, a Club can later be updated in new Editions that might contain slight or wholesale changes to the data and permissions found in the previous Edition. A provenance mark validates the linkage of multiple Editions.
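Here’s a minimal hash-chain sketch of the linkage idea behind Editions: each Edition’s mark commits to its content plus the previous mark, so a reordered, forked, or tampered sequence fails the check. The actual Provenance Mark specification defines its own encoding and stronger primitives; this only illustrates the chain check.

```rust
// Minimal hash-chain sketch of how provenance marks can link Editions.
// Illustrative only: the real Provenance Mark spec uses its own encoding
// and stronger cryptographic primitives.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Edition {
    content: String,
    prev_mark: u64, // mark of the previous Edition (0 for the first)
    mark: u64,      // commitment to this content + prev_mark
}

fn mark_of(content: &str, prev: u64) -> u64 {
    let mut h = DefaultHasher::new(); // stand-in for a real hash/signature
    content.hash(&mut h);
    prev.hash(&mut h);
    h.finish()
}

fn chain_is_valid(editions: &[Edition]) -> bool {
    let mut prev = 0;
    for e in editions {
        if e.prev_mark != prev || e.mark != mark_of(&e.content, e.prev_mark) {
            return false; // broken, reordered, or tampered chain
        }
        prev = e.mark;
    }
    true
}

fn main() {
    let m1 = mark_of("edition 1", 0);
    let chain = vec![
        Edition { content: "edition 1".into(), prev_mark: 0, mark: m1 },
        Edition { content: "edition 2".into(), prev_mark: m1, mark: mark_of("edition 2", m1) },
    ];
    assert!(chain_is_valid(&chain));
}
```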

Because of their autonomous design, Gordian Clubs are entirely transport neutral. Though you could send an Edition over the internet, you don’t have to. You could send it via messaging. You could put it on an NFC card or thumb drive, then mail it. You could print it as a QR code and publish it in a newspaper. You could distribute it via Blockstream Satellite. The transport doesn’t even have to be immediate: a Gordian Club Edition could be stored away for archival and used years down the road. This is true autonomy: not beholden to servers or services, but not beholden to a stable network either.

Here’s how a Gordian Club might look in those real-life examples of modern-day authoritarianism:

Journalism: A journalist stores a list of sources and their information in a Gordian Club. One permit allows him to open it with his private key. He also sends the Club and SSKR shares to the five board members for his newspaper. Any three of them together can open it, which they might need to do in the case of a lawsuit over an article. The journalist can later issue new Editions of the Club when he updates his information cache or when the members of the board change. The information is encrypted, which means it’s protected even in the case of an illegal seizure. Freedom of the press becomes a mathematical right: the government would have to coerce either the journalist or multiple board members to access it.
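The board-quorum access here rests on threshold secret sharing. Below is a toy 3-of-5 Shamir split over GF(2^8), the kind of quorum math that SSKR builds on. This is demo-grade only: real SSKR adds group structure, checksums, and a proper CSPRNG, so treat it purely as an illustration.

```rust
// Toy 3-of-5 Shamir secret sharing over GF(2^8), illustrating the quorum
// math that SSKR builds on. Demo only: never use this for real secrets.

use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

fn gf_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0;
    for _ in 0..8 {
        if b & 1 != 0 { p ^= a; }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry { a ^= 0x1b; } // reduction by the AES field polynomial
        b >>= 1;
    }
    p
}

fn gf_inv(a: u8) -> u8 {
    (0..253).fold(a, |acc, _| gf_mul(acc, a)) // a^254 = a^-1 in GF(2^8)
}

/// Split `secret` into `n` shares; any `k` of them reconstruct it.
fn split(secret: &[u8], n: u8, k: usize) -> Vec<(u8, Vec<u8>)> {
    let rng = RandomState::new();
    let mut ctr = 0u64;
    let mut rand_byte = move || {
        let mut h = rng.build_hasher(); // demo randomness, NOT a CSPRNG
        ctr += 1;
        h.write_u64(ctr);
        h.finish() as u8
    };
    let mut shares: Vec<(u8, Vec<u8>)> = (1..=n).map(|x| (x, Vec::new())).collect();
    for &byte in secret {
        // Random degree-(k-1) polynomial with constant term = secret byte.
        let mut coeffs = vec![byte];
        coeffs.extend((1..k).map(|_| rand_byte()));
        for (x, ys) in shares.iter_mut() {
            let mut y = 0;
            for &c in coeffs.iter().rev() {
                y = gf_mul(y, *x) ^ c; // Horner evaluation at x
            }
            ys.push(y);
        }
    }
    shares
}

/// Reconstruct the secret from `k` shares via Lagrange interpolation at 0.
fn combine(shares: &[(u8, Vec<u8>)]) -> Vec<u8> {
    (0..shares[0].1.len()).map(|i| {
        shares.iter().fold(0u8, |acc, (xj, yj)| {
            let lj = shares.iter()
                .filter(|(xm, _)| xm != xj)
                .fold(1u8, |l, (xm, _)| gf_mul(l, gf_mul(*xm, gf_inv(xm ^ xj))));
            acc ^ gf_mul(yj[i], lj)
        })
    }).collect()
}

fn main() {
    let shares = split(b"content-key", 5, 3);
    let recovered = combine(&shares[1..4]); // any three of the five work
    assert_eq!(recovered, b"content-key");
}
```

Any two shares reveal nothing about the key, which is exactly why the government in this scenario must coerce multiple board members rather than seize a single device.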

Protest: A protest group passes around a Gordian Club that contains information on upcoming protests. Updated Editions are published whenever new protests are planned, which can be done by agreement of a FROST quorum of protest organizers. Alongside its data, the Gordian Club contains a list of allowed readers, which was determined by the FROST quorum of organizers. However, any reader can also delegate read permissions to another reader. These delegated permissions only remain valid for that Edition of the Gordian Club; if there was a compromise due to delegation, it wouldn’t extend to future Editions.

Credentials: An immigrant stores their credentials as a Gordian Club, which they send to a human rights organization in the country they are immigrating to. It is locked with the organization’s public key and with a stretched password corresponding to a line from the immigrant’s favorite song: long enough to be largely unbreakable, yet memorable. If the immigrant is seized before they leave their country, the border patrol can only go on what the immigrant self-reports. Even if the border patrol learns who the immigrant is, they can’t block their credentials, because the credentials are all self-sovereign, without phone-home requirements, and stored in that Gordian Club. Alternatively, if the immigrant reaches a safe haven, the human rights organization will provide the Gordian Club; either they can unlock it with their key or the immigrant can do so with their own password.
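For the stretched password, a memory-hard key derivation function is the usual tool. Here’s a minimal sketch using the RustCrypto `argon2` crate (assumed as a dependency); the actual permit construction in Gordian Clubs may differ.

```rust
// Minimal sketch of stretching a memorable passphrase into an encryption
// key with Argon2id. Assumes the RustCrypto `argon2` crate; the actual
// Gordian Club permit construction may differ.

use argon2::Argon2;

fn key_from_passphrase(passphrase: &str, salt: &[u8]) -> [u8; 32] {
    let mut key = [0u8; 32];
    Argon2::default() // Argon2id with the crate's default cost parameters
        .hash_password_into(passphrase.as_bytes(), salt, &mut key)
        .expect("key derivation failed");
    key
}

fn main() {
    // A full lyric line: strong through sheer length, yet easy to remember.
    let key = key_from_passphrase(
        "we all live in a yellow submarine, a yellow submarine",
        b"unique-per-club-salt", // a real design would use a random salt
    );
    println!("derived key: {:02x?}", key);
}
```

The memory-hardness is what makes a long-but-human passphrase viable: brute-forcing each guess costs the attacker real memory and time.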

There are many other use cases that go beyond the increasing authoritarianism of modern countries. This includes use cases where the internet is not available or where a longer timeframe is required.

Emergencies: A category 5 hurricane devastates the Eastern Seaboard. The internet is largely down, though cell phone access remains available for emergency use. An unencrypted Gordian Club is created with all the emergency resource information and passed from user to user via messaging. Signatures verify its authenticity, even as Editions are updated, which helps users to steer away from the scammers that inevitably come out during these times of tragedy.

Archival: The patriarch of a family writes his last will and testament in a Gordian Club, accessible by his private key, by his lawyer’s private key, or by three out of five shares given to his heirs. The information in it stays private until his passing, with the quantum-resistant cryptography available in Gordian Envelope ensuring privacy until that sad date. But afterward, it’s easily accessible by heirs or the lawyer. The provenance marks clearly note which version of the will is the newest.

Five Principles For Autonomy

Gordian Clubs make ACOs concrete by following five principles that I’ve developed for autonomous systems.

Operate Without External Dependencies. Everything you need is within the Gordian Club: data and permissions truly operate autonomously.
Encode Rules in Mathematics, Not Policy. Permits are accessed through mathematical (cryptographic) constructs such as private keys or secret shares.
Make Constraints Load-Bearing. Members can’t be removed from a Gordian Club Edition, but that also means permissions can’t be unilaterally revoked. Gordian Clubs don’t have live interactivity, but that means they can’t be censored by a network.
Preserve Exit Through Portability. An ACO that can be freely passed around without network infrastructure is the definition of portability.
Work Offline and Across Time. Gordian Clubs are meant to be used offline; archival is a major use case, allowing access across a large span of time.

I’ll have more on these principles, how I derived them, and what the Exodus Protocol Pattern is in a future Musings.

Credit Where Credit is Due

Gordian Clubs were inspired by the Clubs feature of Project Xanadu, which was the world’s first hypertext project: it could have been a world wide web of tightly interconnected information before there was a World Wide Web.

Project Xanadu was built around “Clubs,” which could be individuals or organizations and which could be recursively created: a Club (or individual) could be a member of a Club (or individual) … etc. Each Club could have read or write permissions to itself or to other clubs, and those rights could also be passed down through a hierarchy.

The problem with Clubs was that they required centralized administration. When I became peripherally involved with Project Xanadu in the early ’90s, I suggested the use of cryptography to turn that human-based administration into math-based administration. But cryptography wasn’t up to the requirements at the time.

Now it is, thanks in large part to the release and development of Schnorr signatures, allowing for the creation of Gordian Clubs.

Gordian Clubs Are a Reality!

None of this is just a theory. I have a working library and CLI for Gordian Clubs. A demo is available on YouTube:

There’s also a full log of the demo, which you can use to follow along, using the clubs-cli-rust app.

Take a look, but more importantly let me know: how will you use Gordian Clubs? What use cases will be served by ACOs? And are Gordian Clubs the right answer or do you need something more (or less)? I’d love to get your feedback as we continue work on this crucial new technology.

For more on Gordian Clubs, take a look at our developer pages:

Gordian Clubs Overview
The Power of Autonomy
Gordian Technology
ocaps and Delegation
Project Xanadu History

Also see the “Beyond Bitcoin” presentation from TABConf 7:

“Beyond Bitcoin”

Tuesday, 07. October 2025

Hyperledger Foundation

Celebrating Web3j 5: A Journey from 2016 to Today


Today, we’re excited to announce the release of Web3j 5, a major milestone in a journey that began back in September 2016. What started as an open source project to make Ethereum accessible to Java and Android developers has now grown into one of the most widely used Java libraries in the blockchain ecosystem, and it has established itself as an important integration technology in the LF Decentralized Trust project landscape.


OwnYourData

Digital Product Passports Towards More Sustainable Futures


Digital product passports (DPPs) have gained attention through the EU’s Circular Economy Action Plan and the Ecodesign for Sustainable Products Regulation (ESPR). They enable capturing contextual data throughout the product value chain, such as environmental impact, material composition, and production history. DPPs are seen as critical components for creating circular economies, especially in light of the European Green Deal to align industry with climate targets. As DPPs are already becoming mandatory for many products, their social, technical, environmental, and economic implications have to be considered from the start through interdisciplinary conversations and collaborations. 

MyData Global Conference was the perfect opportunity to have insightful discussions as we explored how DPPs can contribute to more sustainable futures. In our presentation, we also introduced the Promoting Accelerated Circular Economy through Digital Product Passports (PACE-DPP) project as an applied case study to explore the potential for DPPs.

So, what exactly is a Digital Product Passport?

A Digital Product Passport is a product-specific dataset accessible via a digital carrier. It enables businesses, regulators, and consumers to access key information—such as material composition, environmental impact, production history, and recyclability. 

They enable capturing contextual data throughout the product value chain, such as environmental impact, material composition, production history, repair activities, and recycling capabilities. There is also exciting potential to capture more information, going beyond sustainability alone.
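As a rough illustration of what such a product-specific dataset might look like in code, here is a hypothetical sketch; the field names are ours for illustration only, not the ESPR schema or the PACE-DPP data model.

```rust
// Hypothetical sketch of the kind of product-specific dataset a DPP could
// carry. Field names are illustrative only, not any official DPP schema.

struct DigitalProductPassport {
    product_id: String,                       // resolved via a data carrier (e.g., QR code)
    material_composition: Vec<(String, f32)>, // (material, share by weight)
    carbon_footprint_kg_co2e: f32,            // declared environmental impact
    production_history: Vec<String>,          // provenance events along the value chain
    repair_and_recycling_notes: Vec<String>,  // circularity information
}

fn main() {
    // A wood-sector example in the spirit of the PACE-DPP pilot.
    let passport = DigitalProductPassport {
        product_id: "wood-batch-0042".into(),
        material_composition: vec![("spruce".into(), 1.0)],
        carbon_footprint_kg_co2e: 12.5,
        production_history: vec![
            "felled: Austrian forest, 2024-03".into(),
            "delivered: mill gate, 2024-04".into(),
        ],
        repair_and_recycling_notes: vec!["fully recyclable as paper feedstock".into()],
    };
    println!("DPP for {}", passport.product_id);
}
```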

By allowing information throughout a product’s lifecycle to be accessible and sharable, they can enable transparency, empower smarter choices, support regulatory compliance, and foster trust across the supply chain. So, DPPs are expected to play a key role in facilitating innovative approaches by not only enabling the exchange of information but also driving new business value and increased efficiencies across the supply chain. 

Main Aspects of DPPs 

The European Commission sets out some requirements for Digital Product Passports, including: 

User friendliness: Access to the digital product passports should be free and easy for all stakeholders.
Tailored information: Differentiated access with different types of information available (for example, customers, lawmakers, and suppliers will look for and care about different information).
Accuracy: The data provided needs to be accurate, complete, and up to date.
Interoperable: DPPs must be interoperable to facilitate the sharing and utilisation of data generated by various actors.
Adaptable: Any further data other than obligatory data can be added and stored.

Promoting Accelerated Circular Economy through Digital Product Passports Project – Wood Use Case

In our presentation, we offered a case study to communicate what this all means in practice. The PACE-DPP project is already piloting DPPs in the wood industry—enhancing traceability from forest to mill gate. With this use case, we demonstrated how DPPs can streamline operations, reduce waste, and promote sustainable sourcing.

By integrating circular economy principles into value chains and conducting thorough analyses of supply chains within the wood/paper and electronic device sectors, the project aims to enhance processes with circular flows.

Benefits of Digital Product Passports

During our session, we discussed the diverse benefits that Digital Product Passports (DPPs) can bring across different domains. The following points summarize key takeaways from our presentation as well as valuable inputs and reflections shared by participants during the discussion.

Business
Streamlined processes from partners to suppliers, as businesses will need to understand their own processes and their partners’ before even thinking about DPPs. (For example, a furniture business will need to understand its fabric and wood suppliers’ processes to be included in its furniture’s DPPs.)
Holistic analysis of the business supply chain to provide more accurate information on DPPs.
Increased efficiency and lower costs through streamlining data flows and business operations.
Potential and incentives for multi-stakeholder collaboration, enabling new cooperative business models.
A competitive advantage for those businesses that go beyond the minimum requirements and include information about new areas such as brand authenticity, ethical standards, and social impact.
A critical role, especially for big players, in promoting and leading sector-wide best practices (for example, Ikea can bring in new standards through its DPP implementation to inspire change).

Socially
Empowered consumers and greater transparency, as DPPs can act as accountability mechanisms.
Fairer and more equitable digital societies, by highlighting issues around product life cycles from raw materials to retail.
Broader civic engagement and increased activism, such as awareness campaigns around greenwashing backed by data provided through DPPs.

Legally
Legal requirements are the biggest motivation for DPPs to achieve their full potential.
Regulatory enforcement with teeth, enabling DPPs to fulfil their purpose and achieve their full potential.
Connected thinking when it comes to regulatory compliance (for example, a business working in the wood industry will need to comply with forestation regulations, fulfil DPP requirements, consider GDPR, etc.).

How to get there?

To fully realise their potential, we have to take human-centric approaches to DPPs from design through implementation and use. We wrapped up our presentation by recalling five core principles we need to keep in mind if we want to reap these benefits:

Transparency: Ensuring information siloes are broken to enable data flow between actors.
Human-Centric Approach: Ensuring that the information that reaches consumers and citizens more broadly is accurate, understandable, and relevant. This is an exciting area to explore further to understand what it will mean for digital product passports in practice.
Accountability: Ensuring there are robust regulations with teeth to prevent inaccurate claims such as greenwashing, and that the regulations are not watered down as they are rolled out and implemented.
Trust: Between partners, manufacturers, suppliers, etc., as well as amongst citizens, to ensure sustainable products become the norm in the EU and we work towards truly circular economies.
Equality: A holistic approach needs to include equality as it relates to sustainability and circular economies. DPPs have great potential to highlight the interconnected nature of growth on a global scale and environmental boundaries.

DPPs could be key to enabling circular economy and carbon reduction strategies, including those for new markets and business models, and also to social compliance reporting.  As we move toward a greener future, Digital Product Passports offer a powerful way to connect sustainability with digital innovation—making every product part of the solution.

The PACE-DPP project received financial contributions from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), supported by the Austrian Research Promotion Agency (FFG funded project #917177), as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK), supported by the German Research Promotion Agency (DLR-PT).

 

The post Digital Product Passports Towards More Sustainable Futures appeared first on www.ownyourdata.eu.


MyData

Putting Patients First: How Dokport is Shaping the Future of Digital Healthcare

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. At Dokport, our journey […]

OpenID

New whitepaper tackles AI agent identity challenges


The OpenID Foundation has today released a critical new whitepaper addressing one of the most pressing challenges facing organizations deploying AI agents – how to securely authenticate and authorize these autonomous systems while maintaining proper governance and accountability.

Identity Management for Agentic AI: The new frontier of authorization, authentication, and security for an AI agent world has been researched and compiled by the OpenID Foundation’s Artificial Intelligence Identity Management Community Group – a team of global experts collaborating to address rising identity management challenges in AI systems.

Why this whitepaper matters

The whitepaper reveals a number of significant challenges that have immediate implications for three key audiences. 

Developers and architects building AI agent systems need to understand how to leverage existing standards while preparing for emerging models of delegated authority and agent-native identity. The research provides them with both immediate best practices and a roadmap for future-proofing their systems.

Standards organizations must accelerate the development of protocols that formalize new concepts around agent identity and delegation, ensuring future systems are built on interoperable foundations rather than fragmented proprietary solutions. This research highlights the specific areas where new standards are most urgently needed.

Enterprises need to begin treating agents as first-class citizens within their identity and access management infrastructure, establishing proper lifecycle management, governance policies, and accountability measures. The research provides a framework for making these organizational changes before they become critical.

These key audiences must act now to prepare for the future of AI agent deployment. The whitepaper provides vital resources for these audiences to enable them to secure AI agents as they are today, while offering a strategic agenda to help them address the foundational authentication, authorization, and identity problems that will come as these autonomous systems become more widespread.

Existing frameworks can only handle today’s simple agents

The good news is that current authentication and authorization standards are already capable of securing many of today’s AI agent use cases. When agents operate within well-defined boundaries, such as an enterprise assistant accessing internal tools or a consumer agent managing personal services, the established infrastructure works effectively.

Modern authentication frameworks provide a solid foundation for scenarios where agents work within a single organization’s systems or help individual users access their own data. These protocols, which have been battle-tested across billions of authentication flows, offer robust security when agents operate in straightforward, predictable environments with synchronous operations.

The Model Context Protocol (MCP) has emerged as the leading standard for connecting AI models to external data sources and tools. Its growing adoption demonstrates the industry’s recognition that agents need specialized frameworks for interacting with resources. For organizations implementing agents today, the research recommends a “separation of concerns” approach: use specialized authentication servers to handle security decisions rather than building custom security into each system.

Enterprise infrastructure is already agent-ready – to a point. Existing Single Sign-On (SSO) systems and user management tools can support today’s AI agents while providing IT administrators with centralized control over agent permissions. This allows organizations to leverage their current identity infrastructure without starting from scratch.

However, this seemingly solid foundation reveals significant cracks when agents begin operating with greater autonomy. The current approaches work well only because today’s agents remain relatively simple, operating within single trust domains, following predictable patterns, and requiring frequent human oversight. As AI systems evolve toward true autonomy, these same frameworks will struggle to address fundamentally new challenges.

The autonomy inflection point is approaching faster than many realize. While a single agent calling a handful of internal APIs poses manageable security challenges, the vision of highly autonomous agents – spawning sub-agents, operating across organizational boundaries, and making thousands of decisions daily – requires a fundamental rethinking of identity and authorization. The frameworks that secure today’s agents weren’t designed for recursive delegation chains, cross-domain trust propagation, or the scale of authorization decisions that autonomous systems will demand.

This creates an urgent imperative: organizations must secure their current agent implementations using existing best practices while simultaneously preparing for the more complex authorization challenges that increased autonomy will bring. The window for establishing robust, interoperable standards is now, before proprietary solutions fragment the ecosystem and create security gaps that will be far more costly to address later.

Next steps for the industry

How organizations manage trust, authority, and accountability in digital systems must evolve. This means moving beyond basic login systems to more sophisticated identity and permission models that can handle complex networks of connected agents. While current frameworks provide a secure baseline for today’s agents, the gaps identified in the OpenID Foundation’s new whitepaper must be proactively addressed in order to ensure a foundation for secure, responsible AI agent deployment at scale. 

The full research paper can be found here.

Contribute to the discussion 

The OpenID Foundation welcomes feedback and input from the broader community on this whitepaper. Readers can share their perspectives either through:

The community group repository
This Google Form

All feedback will be reviewed by the Artificial Intelligence Identity Management Community Group and discussed during their weekly calls. These calls are open to anyone interested in participating. For meeting schedules and details on how to join, visit https://openid.net/calendar/.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post New whitepaper tackles AI agent identity challenges first appeared on OpenID Foundation.


The Engine Room

Join our team! We’re looking for an Associate for Resilient Tech!


Who we are

The Engine Room (TER) is a nonprofit organization with a distributed global team of experienced and committed activists, researchers, technologists and community organizers. Our vision is for social justice movements to use technology and data in safe, responsible and strategic ways, while actively mitigating the vulnerabilities created by digital systems. Since 2011, […]

The post Join our team! We’re looking for an Associate for Resilient Tech! appeared first on The Engine Room.

Monday, 06. October 2025

OpenID

OIDF Announces New Investment to Expand Conformance Services

The OpenID Foundation Board has approved the development of a new service to enable stronger partnership with “managing entities” and “accredited laboratories” that wish to deploy the OpenID Foundation tests and best practices within a wider ecosystem conformance service. This additional service is scheduled for launch in Q2 2026, and will operate alongside the existing self-certification service.

This strategic investment responds to a strong demand from existing and prospective ecosystem partners who believe an accreditation option will help them meet their local regulatory requirements, foster market competition for conformance testing, and incorporate the OpenID Foundation tests within a wider conformance program. The Foundation anticipates 60 countries already are or will pursue digital identity initiatives and 90 countries are or will be pursuing open data programs. Many of these ecosystems are likely to select the OpenID Foundation’s global open standards and open source tests.

The current self-certification service will continue to operate. To date it has facilitated the successful self-certification of more than 4,000 implementations, including many vendors as well as public and private sector ecosystems in Brazil, the United Kingdom, Australia, the United Arab Emirates, Saudi Arabia, and the United States. Giving ecosystems the choice of self-certification through either the current service operated by the OpenID Foundation and/or the managing entities and accredited laboratories will help ecosystems choose the best fit option for their jurisdiction. The OpenID Foundation is committed to ensuring its specifications, open source tests, and conformance services support the Foundation’s ability to deliver to its mission and vision, and ensuring all implementers can realize the security, privacy, and interoperability benefits inherent in the specifications.

In the months ahead, the OpenID Foundation will engage with pilot partners to establish and validate the new model, with plans to scale the service throughout 2026.

Ecosystem managing entities and accreditation laboratories interested in participating are invited to contact the OpenID Foundation at director@oidf.org for more information.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

 

The post OIDF Announces New Investment to Expand Conformance Services first appeared on OpenID Foundation.

Friday, 03. October 2025

FIDO Alliance

Biometric Update: Germany pushes passkey adoption, releases draft technical guidelines


Germany’s Federal Office for Information Security (BSI) is asking for public comment on a draft document that outlines technical considerations for configuring passkey servers.

The draft was published on September 30 and seeks to get inputs from relevant stakeholders, the BSI said in a news release.

The BSI TR-03188 Passkey Server guidelines are available as a draft in version 0.9, the BSI says. It was drafted within the scope of FIDO2 and WebAuthn standards, among others.


Biometric Update: Yubico finds passkeys awareness still lacking in global survey


There is a persistent disconnect between perceived cybersecurity and actual vulnerability. That’s the key finding from Yubico’s 2025 Global State of Authentication Survey. The findings indicate a world still reliant on outdated authentication practices, highlighting the need to align personal and workplace cyber hygiene.


PC Mag: Ditch Your Passwords: Why Passkeys Are the Future of Online Security


Passkeys are revolutionizing the way we secure our online accounts, with the potential to eliminate passwords altogether. We explain why they offer stronger protection for your digital life and how you can start using them.

There’s a reason everyone is working on a way to replace passwords. They’re often easy to guess, hard to remember, and changing them after every data breach is a pain, even if you do have a password manager. Thankfully, the Fast Identity Online (FIDO) Alliance developed passkeys, a new authentication technology that eliminates the need to enter your email address or a password into login fields around the web, and they’re gaining popularity. For example, Microsoft deleted passwords from its authenticator app in August, but left in support for passkeys.


IT Brief: Help desks emerge as cybersecurity weak spot amid rising attacks 


Bojan Simic, Chief Executive of HYPR and a FIDO Alliance board member, warns that IT help desks are increasingly targeted by attackers using social engineering tactics. These tactics often involve leveraging stressful scenarios, such as an executive locked out of their account just before boarding a flight, to pressure help desk agents into bypassing or overlooking security protocols. “The help desk shouldn’t be the weakest link; it should be the first line of defence. That means moving beyond guesswork and adopting identity verification that confirms who someone is, versus what they know or the device they’re using. With phishing-resistant, standards-based verification built into support workflows, agents stop being human lie detectors and start being defenders,” said Simic. 


Digital ID for Canadians

Learning from BritCard: Why Architecture Matters for Digital Trust and Identity

A Letter from the President, DIACC

The choices we make about digital trust and identity verification architecture today will shape Canadian privacy, security, and digital rights for generations to come. Recent developments in the United Kingdom offer a timely reminder: how we build digital trust and identity systems matters as much as whether we build them.

Canadians demand a path that’s grounded in the principles of federation, decentralization, privacy by design, and user control. As pressure mounts globally for mandatory and centralized digital identity systems, we must advocate for principles that ensure our implementations live up to the designs that Canadians demand.

The UK’s Announced Approach

The UK faces genuine challenges that digital trust and identity could address. Document fraud in right-to-work verification creates significant problems. Administrative burden on employers is substantial. And approximately 10% of UK residents have never held a passport, creating barriers to proving identity for routine transactions. A digital credential accessible via a smartphone, which 93% of UK adults possess, could help address these legitimate issues.

The critical question isn’t whether digital trust and identity can solve real problems. It’s how it’s implemented.

The UK government has committed to making its proposed BritCard system mandatory and references “a central database of people entitled to live and work in the UK.” Cybersecurity experts have been explicit in their warnings: centralized databases create “enormous hacking targets,” particularly when complex dependency chains involving contractors and integrators are involved. Within days of the announcement, 1.6 million people had signed petitions opposing the scheme, expressing concerns about surveillance and the notion of “Big Brother in your pocket.”

These concerns aren’t theoretical. Centralized identity databases have been compromised in multiple jurisdictions, impacting millions of individuals. Once compromised, the consequences include identity theft, widespread fraud, and erosion of public trust that takes years to rebuild. Aggregating millions of records in centralized systems creates an enormous attack surface.

Architecture as Values Made Concrete

There is an alternative architectural strategy that is privacy-preserving and uses decentralized credentials. This approach uses:

Cryptographically-signed credentials that are held on user devices, not in centralized databases
Verification happens through cryptographic proofs rather than database lookups
Selective disclosure enables proving what’s necessary (like eligibility to work) without revealing your complete identity profile
Users control when and how their credentials are shared

This isn’t experimental technology. Estonia has operated such a system successfully for over two decades. The EU Digital Identity Wallet regulation explicitly requires selective disclosure and offline verification capability. Singapore’s Singpass uses QR-code-based verification to minimize tracking. These approaches have been proven at the national scale.

The lesson for Canada isn’t “don’t build digital trust and identity verification.” It’s “architecture must reflect values.”

A mandatory system built on centralized databases carries fundamentally different privacy risks, security vulnerabilities, and civil liberties implications than a voluntary system using privacy-preserving credentials held by users. The efficiency gains and fraud reduction can be achieved through either approach; however, one strategy respects privacy by design rather than by promise.

Canada’s Distinctly Different Approach

Canada has already charted a different course. Aligning with our governance models and values, our approach is decentralized. There is no single national digital identity system, no central government database of all Canadians, and no mandatory credential that citizens are required to obtain. Learn more about our vision for Canada’s decentralized approach.

Instead, our vision of a digital trust and identity verification ecosystem aligns with the Pan-Canadian Trust Framework (PCTF), developed by DIACC in collaboration with federal, provincial, and territorial governments, financial institutions, telecommunications providers, privacy advocates, and civil society organizations. The PCTF enables digital trust and identity services through:

Federated Architecture: Multiple credential issuers (provinces, federal government, private sector organizations) can issue credentials that are mutually recognized through conformance to a common framework of components without creating centralized databases or requiring technological uniformity.
Privacy by Design: The PCTF embeds privacy protections at the architectural level. Requirements include data minimization, purpose limitation, selective disclosure capabilities, transparent consent management, and security safeguards proportionate to the sensitivity of the information. These aren’t policy aspirations; they’re assessed through independent certification.
User Control: Individuals maintain control over their credentials and decide when and with whom to share information. Credentials can be stored on personal devices, and users can revoke consent and withdraw their credentials through clear procedures for data deletion.
Voluntary Adoption: Digital credentials supplement rather than replace existing identity documents. Canadians choose whether to use digital identity based on convenience, security, and trust. It’s not a government mandate.
Verifiable Privacy Protections: Through DIACC’s PCTF Certification Program, organizations can obtain independent verification that their digital trust and identity verification services implement privacy-preserving architectures. This shifts privacy from a policy promise to a verified reality.

This approach reflects what Canadians want. Our research consistently indicates that privacy, security, and choice are key factors driving Canadians’ desire for digital trust and identity. The bottom line is that voluntary, privacy-focused solutions earn public trust, while mandatory systems face resistance.

The Technologies That Enable Privacy

Privacy-preserving digital trust and identity isn’t just philosophically preferable; it’s technically achievable through verifiable credentials and related technologies that the PCTF supports:

Selective Disclosure: Instead of presenting your entire driver’s licence to prove you’re old enough to purchase age-restricted products, you can present a cryptographic proof that you’re over 19, without revealing your birth date, address, or even your name. The verifier gets the answer they need; you retain privacy over information they don’t.
Decentralized Verification: Credentials can be verified through cryptographic signatures without requiring queries to centralized databases. This means verification can happen offline, in real-time, without creating transaction records that enable tracking or surveillance.
Zero-Knowledge Proofs: Advanced cryptographic techniques enable proving statements about your identity (such as “I am a resident of Ontario” or “I hold a valid professional license”) without revealing the underlying credential or creating linkable identifiers across different interactions.
User-Held Credentials: When credentials live on your device rather than in government or corporate databases, you control when they’re shared. A data breach at one organization doesn’t compromise your credentials held elsewhere.

These technologies are standardized and operating at scale internationally. Canada’s PCTF is designed to accommodate them as they become more widely deployed, ensuring that our framework supports the most privacy-preserving approaches available.
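To make the information flow concrete, here is a deliberately simplified sketch of a selective-disclosure exchange: the verifier receives only the answer to a predicate (“over 19?”), never the underlying attribute. Real systems replace the trivial proof object below with cryptography (BBS+ signatures, SD-JWT, or zero-knowledge proofs); everything here is illustrative.

```rust
// Toy model of selective disclosure: the verifier learns only whether the
// predicate holds, not the attribute itself. In a real system the proof
// object would be cryptographically verifiable (BBS+, SD-JWT, ZKPs).

struct Credential {
    birth_year: u16, // stays on the holder's device
}

struct DisclosureProof {
    claim: &'static str,
    satisfied: bool, // stand-in for a cryptographically verifiable proof
}

fn prove_min_age(cred: &Credential, min_age: u16, current_year: u16) -> DisclosureProof {
    DisclosureProof {
        claim: "age-over-threshold",
        satisfied: current_year - cred.birth_year >= min_age,
    }
}

fn main() {
    let cred = Credential { birth_year: 1990 };
    let proof = prove_min_age(&cred, 19, 2025);
    // The verifier sees the claim and whether it holds, never the birth year.
    println!("claim={} satisfied={}", proof.claim, proof.satisfied);
}
```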

What This Means for Canada’s Digital Future

The path forward requires intentionality about the choices we make now, in procurement specifications, in system design, in policy development, and in public dialogue:

For Government Leaders: The PCTF bridges regulations with operational realities to provide a foundation for cross-sector interoperability. Resist pressure for centralized databases or mandatory systems. Ensure procurement specifications prioritize privacy-preserving architectures and require DIACC’s independent PCTF certification. Build the physical-digital bridges (assisted digital services, multi-modal access) that ensure universal accessibility.
For Industry Stakeholders: Pursue PCTF certification for digital trust and identity solutions and design products that implement selective disclosure and user control, rather than relying on maximalist data collection. Accept certified credentials from diverse issuers to create convenience that drives voluntary adoption. Contribute expertise to framework evolution.
For Privacy Advocates and Civil Society: Hold organizations accountable for their privacy promises by demanding PCTF certification. Participate in framework governance to ensure that citizen perspectives inform the technical architecture. Help build public understanding of how privacy-preserving systems work and why architecture matters.
For Technology Providers: Align product development with PCTF specifications and privacy-preserving technologies. Seek DIACC PCTF certification for competitive differentiation. Invest in open standards that prevent lock-in and enable interoperability. Innovate on capabilities including verifiable credentials, selective disclosure, and zero-knowledge proofs.
For Citizens: Engage with digital trust and identity programs as they launch. Demand transparency about whether systems use centralized databases or decentralized credentials. Provide feedback on usability, accessibility, and privacy concerns. Exercise control over personal information through consent management. Support organizations that pursue voluntary, privacy-preserving approaches over mandatory, surveillance-enabling architectures.

Building Services Worthy of Trust

Canada can build a digital trust and identity verification infrastructure of services that are:

More secure because it’s decentralized — no honeypot databases to target
Widely adopted because it’s genuinely helpful and voluntary — convenience without coercion
Privacy-protecting through architecture, not only policy promises — verified through independent testing
Interoperable while respecting jurisdictional sovereignty — federation without centralization
Inclusive by design rather than by afterthought — multi-modal access serving all Canadians

Our success demands that we prioritize privacy-preserving architectures in procurement specifications, insist on open standards and independent verification, invest in accessibility for all Canadians regardless of their digital literacy or access to technology, and establish strong trust frameworks and mutual recognition mechanisms that enable effective federation.

The path forward isn’t about government versus private sector, federal versus provincial, or mandatory versus voluntary in the abstract. It’s about all of us, across jurisdictions and sectors, committing to build and use digital trust infrastructure services that are distinctly Canadian: digital trust infrastructure that reflects our federal structure, our values, and our constitutional commitments to privacy and individual rights.

The UK’s experience with BritCard is a reminder that design choices matter. Centralized, mandatory systems may promise efficiency, but they carry profound risks to privacy, security, and civil liberties. Canadians have chosen differently, and we must ensure our implementations honour their choice.

We can develop digital trust and identity verification services that earn the confidence of Canadians. The architecture exists. The technologies are operational. The framework is ready. What remains is a collective commitment to getting the implementation right.

The choices are ours to make. The time to make them thoughtfully is now.

Joni Brennan
President, DIACC

Further Reading:

Pan-Canadian Trust Framework
Understanding Canada’s Decentralized Approach
PCTF Certification Program
What Canadians Want: Privacy, Security, and Choice
Verifiable Credentials and Privacy-Preserving Technologies

DIACC is Where Digital Trust Means Business

Contact us to be a part of the change you want to see, stay informed about developments in digital trust and identity verification, and learn how you can contribute to discussion drafts or become a member.


DIF Blog

DIF Newsletter #54


October 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Steering Elections are coming up soon!

We sent out an explainer last week with all the details, but the most urgent reminder is that, until the 9th, you can:

Nominate someone else you think would make a great steering committee member (we will reach out to them),
Self-nominate, and/or
Submit questions you’d like all candidates to answer.

Keep an eye out for the answers to those questions from the final slate of candidates on 16 Sept, and feel free to use the #sc-elections channel on Slack to discuss.

DIF Labs Beta Cohort 2 Concludes with Successful Show & Tell

DIF Labs Beta Cohort 2 concluded with a successful Show & Tell event on September 24, 2025, showcasing three months of development across three innovative projects.

The Anonymous Multi-Signature Verifiable Credentials (ZKMPA) project built a protocol for m-of-n multi-party credential approval while preserving signer anonymity. Using Semaphore with cryptographic membership proofs and nullifiers, the team achieved W3C VCDM 2.0 compatibility and demonstrated how DAOs can issue credentials with privacy-preserving governance.

The Privacy-Preserving Revocation Mechanisms project delivered the first comprehensive comparative study of revocation strategies for W3C Verifiable Credentials, analyzing status lists, accumulators, zk-SNARK proofs, and short-term credentials. The team created a detailed taxonomy and reference implementation benchmarking costs for issuers, holders, and verifiers, with collaboration from the Ethereum Foundation on Merkle-tree approaches.

Legally-Binding Proof of Personhood via QES (QVC) bridged W3C Verifiable Credentials with Qualified Electronic Signatures under EU eIDAS regulation, bringing legally non-repudiable identity to decentralized credentials. The project explored pseudonymous QES flows and ETSI standards compliance for use cases including contracts, academic credentials, healthcare, and financial agreements.

All three projects presented working demonstrations to global participants from Korea, Japan, Europe, and the United States. The community provided structured feedback using the Roses/Buds/Thorns framework, and projects will continue development as open-source implementations. Visit the DIF Labs blog for complete details and event recording.

DIF Labs Beta Cohort 2: Show & Tell Recap 🚀 | DIF Labs

Trusted Agents Working Group Launches

DIF has launched a new Trusted Agents Working Group to address the emerging challenges of AI agent identity, authentication, and trust. As AI systems gain increasing autonomy and operate across organizational boundaries, the need for robust identity infrastructure becomes critical.

Co-chaired by Andor Kesselman, Nicola Gallo and Dmitri Zagidulin, the working group will develop standards and frameworks ensuring AI agents can maintain verifiable identities, establish trust relationships, and operate with appropriate human oversight. According to Kesselman, "The Trusted AI Agents Working Group focuses on defining an opinionated, interoperable stack to enable trustworthy, privacy-preserving, and secure AI agents."

The inaugural meeting focused on brainstorming use cases and shaping initial focus areas, with the first work item addressing Agentic Authority Use Cases. Read more in Kesselman's LinkedIn post and learn how to get involved here.

Our first Trusted AI Agents Working Group Decentralized Identity… | Andor Kesselman (LinkedIn)

Credential Schema Specification 1.0 Released

The Claims & Credentials Working Group, co-chaired by Otto Mora and Valerio Massimo Camaiani, has released Credential Schema Specification 1.0. This specification provides standardized schemas for basic identity credentials, defining the fields required to identify an individual for KYC purposes and other foundational use cases. The 1.0 release ensures schemas are interoperable, extensible, and aligned with existing standards including W3C Verifiable Credentials, OIDC, and schema.org, and includes comprehensive documentation, reference implementations, and guidance for schema developers.

DIF Represented at UNGA Identity Panel

DIF members Matt McKinney and Nicola Gallo spoke on a hands-on panel about digital public infrastructure, trust, and identity two weeks ago, raising awareness and bringing the good word (and up-to-date architectural thinking) to specialists and builders closer to government infrastructure deployments. See last week's blog post for a more detailed read-out.

🛠️ Working Group Updates

Browse our working groups here

Creator Assertions Working Group

The Creator Assertions Working Group continues advancing work on content provenance and authenticity assertions for digital media. Recent discussions have focused on integrating creator assertions with broader verifiable credential frameworks, exploring how content creators can make cryptographically verifiable claims about their work. The group is examining metadata standards that support various content types while maintaining flexibility for emerging use cases. Work continues on alignment with the C2PA ecosystem and development of assertion types that can accommodate both individual creators and organizational content production workflows.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused on updating evaluation criteria for DIF-recommended DID methods. The group refined its assessment framework to emphasize multiple independent implementations, demonstrated production deployments, and clear compliance with core DID traits. Discussions addressed balancing objective technical criteria with expert evaluation to ensure recommendations reflect both standards compliance and practical viability. The group continues work on the proposed W3C DID Methods Working Group charter, addressing community feedback about scope and the role of blockchain-based methods in standardization efforts.

👉 Learn more and get involved

Identifiers and Discovery Working Group

Multiple work streams advanced within the Identifiers and Discovery Working Group. The did:webvh team made significant progress toward their 1.0 release, with implementations demonstrating successful interoperability through comprehensive test suites. Performance analysis shows efficient handling of DID document updates even in high-frequency scenarios. The DID Traits team finalized specifications for their 1.0 release, with particular focus on traits related to key validation capabilities and long-term identifier availability. The group explored applications in software supply chain security contexts and examined how DID traits align with emerging regulations including the EU Cyber Resilience Act.

👉 Learn more and get involved

🪪 Claims & Credentials Working Group

Following the successful release of version 1.0 specification, the Claims & Credentials Working Group launched a community schemas initiative. This program creates frameworks for organizations to contribute verifiable credential schemas to a shared repository, with pathways for community review and potential standardization. Recent work includes extending schemas for banking KYC requirements, with particular attention to international postal address formats. The team refined terminology around personhood verification credentials and established processes for synchronizing schemas across multiple repositories. Future development priorities include employment credentials and anti-money laundering certification schemas.

👉 Learn more and get involved

Applied Crypto Working Group

The Applied Crypto Working Group made substantial progress on BBS+ signature schemes and privacy-preserving cryptographic primitives. Key developments include refinements to pseudonym generation approaches, with the team evaluating polynomial methods and their security properties against adversarial scenarios. Discussions addressed post-quantum security considerations and their implications for long-term privacy guarantees in credential systems. The group continues coordination with IETF standardization efforts and is preparing updated test vectors for upcoming draft releases. Members are exploring implementation approaches in both Rust and C++, weighing trade-offs in performance, security, and ecosystem compatibility.

👉 Learn more and get involved

DIF Labs Working Group

With Beta Cohort 2 successfully concluded, the DIF Labs Working Group is evaluating the program structure and considering timing for future cohorts. The group continues providing support to Beta Cohort 2 projects as they transition to ongoing open-source development. Discussions have focused on lessons learned from the cohort model, including the effectiveness of mentorship structures, project scoping approaches, and mechanisms for ensuring long-term project sustainability. The Labs team is also exploring opportunities to showcase project outcomes at industry conferences and standards bodies.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced work on binary encoding support through CBOR implementation. The team evaluated architectural approaches for supporting multiple encoding formats, considering whether to introduce binary encoding as an optional feature in version 2.2 or as the default in a future major release. Technical discussions addressed message encoding detection, MIME type handling for different encoding schemes, and backward compatibility with existing implementations. The group also explored DIDComm's role in AI agent-to-agent communications, examining how the protocol can support secure, privacy-preserving interactions between autonomous systems.
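
For a sense of what binary encoding buys, here is a minimal sketch comparing JSON and CBOR encodings of a DIDComm-style message. The fields follow the general shape of a DIDComm v2 plaintext message but are illustrative, not the working group's proposed wire format, and the sketch relies on the third-party cbor2 package.

```python
# Sketch: comparing JSON and CBOR encodings of a DIDComm-style message.
# Message fields are illustrative, not the proposed wire format.
# Requires the third-party cbor2 package (pip install cbor2).
import json
import cbor2

message = {
    "id": "1234567890",
    "type": "https://didcomm.org/basicmessage/2.0/message",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "body": {"content": "hello"},
}

as_json = json.dumps(message, separators=(",", ":")).encode("utf-8")
as_cbor = cbor2.dumps(message)

print(f"JSON: {len(as_json)} bytes, CBOR: {len(as_cbor)} bytes")
assert cbor2.loads(as_cbor) == message  # round-trips losslessly
```

On this toy message the CBOR form is only modestly smaller; the gains grow for binary payloads such as attachments, which JSON must base64-encode.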

👉 Learn more and get involved

Hospitality & Travel Working Group

The Hospitality & Travel Working Group made substantial progress on the HAT Pro (Hospitality and Travel Profile) specification. The team developed comprehensive schemas for food preferences, dietary restrictions, and accessibility requirements using graph-based models that eliminate data duplication and improve cross-referencing capabilities. Recent work includes creating UML models and JSON schemas for complex preference structures that can adapt to varied travel contexts. The group is exploring AI-assisted data input mechanisms to simplify the user experience while maintaining data accuracy. Subject matter experts from multiple travel sectors have joined the working group, bringing valuable domain expertise to schema development.
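
One way to picture a graph-based preference model of this kind is a normalized store where each preference is defined once as a node and referenced by ID from every travel context that needs it. The sketch below is an illustrative assumption of that shape, not the HAT Pro schema itself.

```python
# Illustrative sketch of a normalized, graph-style preference store.
# Node names and fields are assumptions, not the HAT Pro specification.

nodes = {
    "pref:vegetarian": {"kind": "dietary_restriction", "label": "Vegetarian"},
    "pref:aisle_seat": {"kind": "seating", "label": "Aisle seat"},
}

# Contexts reference shared preference nodes by ID instead of copying them,
# so an update to a node is visible everywhere it is referenced.
contexts = {
    "flight": ["pref:vegetarian", "pref:aisle_seat"],
    "hotel_restaurant": ["pref:vegetarian"],
}

def preferences_for(context: str) -> list[dict]:
    return [nodes[node_id] for node_id in contexts[context]]

print(preferences_for("hotel_restaurant"))  # [{'kind': 'dietary_restriction', ...}]
```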

👉 Learn more and get involved

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please join DIF.

🌎 DIF Special Interest Group Updates

Browse our special interest groups here


DIF Hospitality & Travel SIG

The Hospitality & Travel SIG continued its evolution alongside the working group, focusing on broader ecosystem considerations and real-world implementation challenges. Recent sessions examined the intersection of decentralized identity with emerging AI capabilities in travel, including personalized itinerary generation, automated booking agents, and AI-powered concierge services. The group discussed how traveler-controlled credentials can enable these AI systems while maintaining privacy and user control. Participants also explored challenges in achieving industry-wide adoption of new credential standards, including the need for demonstration projects that showcase tangible benefits to both travelers and service providers.

👉 Learn more and get involved

DIF China SIG

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted discussions on regulatory developments affecting decentralized identity across the Asia-Pacific region. Key topics included alignment between national digital identity initiatives and decentralized identity standards, with particular attention to interoperability requirements for cross-border transactions. The group examined recent policy changes in Australia, Singapore, and Japan, identifying common themes around privacy protection, user control, and the role of government-issued credentials within broader digital identity ecosystems. Participants discussed strategies for engaging with regulators to ensure decentralized identity approaches are considered in policy development.

👉 Learn more and get involved

DIF Africa SIG

The Africa SIG continues its focus on practical implementations of decentralized identity across the continent. Recent discussions have examined mobile-first approaches to credential management, recognizing that smartphone adoption patterns in Africa differ from other regions. The group explored solutions for users with feature phones or limited connectivity, including offline verification capabilities and SMS-based fallback mechanisms. Participants shared insights on regulatory environments across different African nations and opportunities for harmonization of digital identity frameworks at the regional level.

👉 Learn more and get involved

DIF Japan SIG

The Japan SIG explored technical approaches to AI agent authentication using decentralized identifiers. Discussions covered the unique requirements for identifying autonomous systems, including mechanisms for establishing trust chains between AI agents and their human operators or organizational sponsors. The group examined use cases spanning automated trading systems, customer service agents, and collaborative AI workflows. Participants considered how existing DID methods can be adapted for AI agent use cases and whether new DID methods might be warranted. The group is planning offline events to deepen community engagement and facilitate face-to-face technical discussions.

👉 Learn more and get involved

DIF Korea SIG

👉 Learn more and get involved

📖 DIF User Group Updates

DIDComm User Group

The DIDComm User Group explored practical implementations of the protocol in production environments. Members shared experiences with mediator deployments, discussing scalability considerations and reliability patterns for always-available message routing. The group examined integration approaches with emerging AI communication frameworks, identifying similarities between DIDComm's secure messaging patterns and requirements for AI agent interactions. Discussions also covered developer experience improvements, including debugging tools, testing frameworks, and documentation enhancements that can lower barriers to DIDComm adoption.

👉 Learn more and get involved

📢 Announcements at DIF

Executive Director Applications Still Open

DIF is accepting applications for the Executive Director position as the current term comes to a close. This is an opportunity to shape DIF's strategic direction and lead the organization through its next phase of growth. Application details are available in the job description, with questions welcomed at jobs@identity.foundation.

Decentralized Trust Graph Working Group Launches

The Linux Foundation Decentralized Trust (LFDT), Trust over IP, and Decentralized Identity Foundation have launched a new Decentralized Trust Graph Working Group, providing a venue for developing standards around trust networks and reputation systems in decentralized environments. This working group complements DIF's ongoing work by addressing graph-based approaches to modeling trust relationships. DIF members interested in participating can find details about joining in the ToIP community calendar. This collaboration demonstrates the growing ecosystem of organizations working on complementary aspects of decentralized identity and trust infrastructure.

Discount Codes Available for IIW and Agentic Internet Workshop

DIF members can access special discount codes for two upcoming events:

Internet Identity Workshop (IIW) XLI: Use code DIF_XLI_20 for 20% off registration at this link

Agentic Internet Workshop: Use code AIW_DIF_10 for 10% off registration at this link

Explore the DIF Events Calendar for a complete listing of upcoming conferences, workshops, and community gatherings where DIF members will be participating.

🗓️ Community Events

Internet Identity Workshop XLI
The semi-annual gathering of the identity community returns, offering unconference-style sessions where participants drive the agenda. IIW continues to be a critical venue for discussing emerging challenges, sharing implementation experiences, and building consensus around identity standards.

Use code DIF_XLI_20 for 20% off registration at this link

Agentic Internet Workshop
The Agentic Internet Workshop takes place immediately following IIW, providing an opportunity to explore how decentralized identity standards can provide the foundation for AI agent interactions and trust. This new workshop addresses the intersection of AI agents and internet infrastructure, with decentralized identity as a key enabling technology. Sessions will explore authentication mechanisms for AI agents, human oversight frameworks, and trust models for agent-to-agent interactions.

Use code AIW_DIF_10 for 10% off registration at this link

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF, join us for one of our upcoming new member orientations. Find details on DIF’s Slack or contact us at community@identity.foundation.

Thursday, 02. October 2025

DIF Blog

Scaling AI DIFferently: Why your content needs an ingredient list

In the 1990s, when mass food production introduced hundreds of novel ingredients and industrial processes, people lost significant visibility into what they were eating. Nutrition labels emerged as a solution for transparency and to help consumers make informed choices on their consumption.

Today, digital content creation is experiencing its own industrial revolution. Canva logged 16 billion AI-powered feature uses last year [1], Midjourney surpassed 21 million users by March 2025 [2], and 71% of businesses and 83% of creators already report using AI tools in their content workflows [3]. This shift has introduced new intermediaries and processes that make it harder to trace digital content origins. Without transparency, it is nearly impossible for artists and platforms to maintain creative control and proper attribution, or for audiences to connect with increasingly intermediated and automated creators.

The music industry, for one, is facing a tipping point of copyright infringement from unauthorised training models, artificially inflated play counts, and fake AI tracks swamping streaming platforms [4]. The likes of Universal Music Group, Warner, and Sony are pushing for the technology to serve artists and enhance their creativity instead of replacing them. As Jeremy Uzan of Universal Music Group says, “AI can be used to assist the artist” as they pursue an artist-centric approach. Examples include using AI to translate Brenda Lee’s iconic ‘Rockin’ Around the Christmas Tree’ into Spanish, or enhancing archival Beatles audio, demonstrating how human creative input combined with AI assistance can work commercially and legally. However, the current AI tools intended to foster new creative opportunities often lack the granular attribution controls and auditability needed to create the level of supply-chain transparency that creators need.

Like nutrition labels before them, the Content Authenticity Initiative and Creator Assertions Working Group (CAWG) are trying to restore the transparency that mass automation has obscured. Such efforts, deployed at scale, would give audiences the information they need to make informed choices about the media they consume, without dictating their decisions.

The Foundational Infrastructure

The C2PA Content Credentials Specification [5] acts as a foundation for this media transparency. It cryptographically binds an origin ‘label’ to a digital asset that says how the asset was created. The CAWG [6] builds on C2PA with a framework for attaching the ‘who’ and ‘why’ to an asset as ‘Content Credentials’. As Scott Perry, the Co-Chair of CAWG and Conformance Program Administrator of C2PA, puts it, CAWG metadata brings “human claims to digital media”.
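
As a toy illustration of what cryptographic binding means here (not the actual C2PA manifest format; the field names are invented, and the sketch uses the third-party cryptography package), the idea is to hash the exact asset bytes into a manifest and sign the manifest, so that any change to either breaks the binding:

```python
# Toy illustration of cryptographic binding, not the real C2PA manifest format.
# Requires the third-party cryptography package (pip install cryptography).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

asset = b"...image bytes..."

# 1. Bind: the manifest embeds a hash of the exact asset bytes.
manifest = {
    "claim_generator": "example-tool/1.0",   # illustrative field names
    "asset_sha256": hashlib.sha256(asset).hexdigest(),
    "actions": ["c2pa.created"],
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()

# 2. Sign: the signer vouches for the manifest (and thus the asset hash).
signing_key = ed25519.Ed25519PrivateKey.generate()
signature = signing_key.sign(manifest_bytes)

# 3. Verify: recompute the hash and check the signature; any change to the
# asset or the manifest breaks the binding.
assert hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]
signing_key.public_key().verify(signature, manifest_bytes)  # raises if invalid
```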

What this looks like in the real world is Google’s Pixel 10 now shipping with C2PA conformance built in, YouTube tagging select videos as “captured with a real camera”, and LinkedIn marking images with a “Cr” symbol to show when they carry Content Credentials. These tags relay information like whether AI was used to generate or edit a part of the content, the entity that created the Content Credential, and when the credential was created. But, as Eric Scouten, Co-Chair of CAWG and Identity Standards Architect at Adobe, stresses, “one of the biggest misconceptions about [CAWG] is that it is something like Snopes or Politifact, out to say what’s true and what’s not, and that’s not the case.” CAWG does not arbitrate truth. It does not fact-check or attach political judgments. Instead, it provides signals about who created a piece of content, when, how, and in what context. The decision to ‘believe’ in the content remains with the viewer.

When FKA Twigs created her own AI clone for fan interactions, she demonstrated the difference between artist-controlled AI use and unauthorised exploitation [7]. Once people knew that she stood behind her AI clone, the work felt legitimate and trust flowed from the person to the work. With provenance infrastructure, fans could verify which AI interactions were officially sanctioned by Twigs herself and which were unauthorised. Not because Content Credentials determine what’s ‘real,’ but because they can provide verifiable provenance information about the creation of digital assets, including an authorisation trail from creation to consumption.

The Challenges with Creator Identities

The FKA Twigs example hints at a much larger challenge ahead. Apps like Character AI already have ~9 million users per day, and as AI clones, virtual personas, and agentic creators proliferate, the range of online ‘identifiers’ becomes significantly more complex.

Even today, navigating the multiple digital identifiers of creators, from professional personas to social media handles and artist pseudonyms, is a fragmented journey. The plurality of creator identities creates a fundamental mismatch with existing identity verification systems. Large organisations like newsrooms and major labels rely on X.509 certificates and PKI systems that fit enterprise workflows and secure their own supply chains, for example by disincentivising pre-release leaks. But individual creators don’t operate in that world. For them, identity lives in social handles and personal websites where identifiers are informal and often platform-bound.

CAWG’s framework bridges this gap by accepting both ends of the spectrum. Their Identity Claims Aggregator mechanism verifies the disparate identity signals through trusted third parties, and issues a single verifiable credential that binds the creator’s chosen identity to their content. This gives creators a direct, human voice in the history of their work, rather than only recording what the device or app has logged in the process. As Eric explains, “the point of the identity assertion is that it is a framework that allows a lot of different things to be plugged into it.” The design is deliberately credential-agnostic, giving creators the flexibility to bring their own chosen identity signals. Future versions of the CAWG identity framework will likely add support for generic W3C Verifiable Credentials and self-controlled credentials such as those being developed by the First Person Project.

Major labels and organisations like the Universal Music Group and the Recording Industry Association of America are already exploring the use of ISNI (International Standard Name Identifier) for artist identities. In practice, this allows labels and managers to attach industry-recognised identifiers to digital assets that protect an artist’s image and likeness in their content. But this approach still has its challenges. For one, ISNI faces the perennial challenge of universal standards adoption. As with most industries, there is no single identifier used for creators today that is publicly resolvable. Scott takes a pragmatic approach to the universal identifier problem: “each industry should publish its best practices alongside the normative standard - i.e. saying this is the state of play in music right now. This is the best you can do, do it this way, we're working on it. Then as that evolves, as it updates, you have one place to go for anyone who wants to know how to identify music.”

CAWG’s strength lies in anticipating this plurality of identity and evolution. The framework is designed to incorporate new credential types as they emerge, from today’s ISNI and social accounts to tomorrow’s W3C Verifiable Credentials and even agentic identity systems.

This adaptability is particularly critical for media industries because digital content can be discovered and consumed decades after creation. Provenance data needs to persist across the content’s entire lifecycle, requiring what Eric calls “archival-quality identity”. Unlike transactional systems that only need authorisation at the point of use, such as purchasing an item online, media attribution can become more valuable as content gains cultural significance or commercial success. Sample clearances, royalty disputes, and copyright claims can arise years later, demanding granular, persistent attribution records that today’s token-based identity models like OAuth don’t provide.

As Eric explains, “if I produce a piece of content today, and you happen to find it in 2030 or 2040, I would like you to be able to understand that it was me that produced that, and to have confidence that you correctly attribute it to me. But that sort of lasting, archival quality identity, is shaky. I think the AI systems are especially shaky on that front.”

But what if every track could carry its creative history? A kind of musical DNA that travels with the content, recording not just what was made, but who made it, how, and under what authority.

The Complexity of Agentic Identity

This type of content DNA becomes essential with agentic AI systems. Unlike generative AI tools that simply transform input to output, agents pursue goals over time, coordinating multiple tools and delegating to other agents. When a music producer delegates post-production to an AI agent that then assigns harmonisation to one agent and mastering to another, the non-deterministic nature means every delegation, agent version, and training input must be recorded in case of future disputes.

This creates a fundamental distinction in attribution requirements. Tools are deterministic, their provenance handled by C2PA which can reliably attest to what happened inside a capture device or editing suite. Agents are non-deterministic, making autonomous choices and passing work along delegation chains. CAWG addresses this by developing persistent, verifiable identifiers that survive across delegations and enable authorisation chains to be traced.
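
A hypothetical sketch of how such an authorisation chain could be recorded and later audited, with each hop signed by the delegating party; the record fields and key handling are assumptions for illustration, not a CAWG format:

```python
# Illustrative sketch of a traceable delegation chain. Field names and key
# handling are assumptions for demonstration, not a CAWG data format.
# Requires the third-party cryptography package (pip install cryptography).
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

keys = {name: ed25519.Ed25519PrivateKey.generate()
        for name in ("producer", "postprod-agent")}

def signed_delegation(signer: str, delegatee: str, task: str, agent_version: str) -> dict:
    record = {"delegator": signer, "delegatee": delegatee,
              "task": task, "agent_version": agent_version}
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "sig": keys[signer].sign(payload)}

# Each hop in the chain is signed by the party doing the delegating.
chain = [
    signed_delegation("producer", "postprod-agent", "post-production", "2.3"),
    signed_delegation("postprod-agent", "mastering-agent", "mastering", "1.1"),
]

# Auditing the chain later: verify each hop's signature against the
# delegator's public key; a forged or altered hop fails verification.
for hop in chain:
    payload = json.dumps(hop["record"], sort_keys=True).encode()
    keys[hop["record"]["delegator"]].public_key().verify(hop["sig"], payload)
print("delegation chain verified")
```

In a real deployment each agent would hold its own key, and records would reference persistent, resolvable identifiers such as DIDs rather than local names.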

In media industries, the complexity extends beyond identity into rights management and remuneration. The JPEG Trust Initiative, an ISO standards effort collaborating with CAWG, is standardising how usage permissions and commercial terms travel with content. Together, C2PA, CAWG, and JPEG Trust form a layered trust stack, proving what happened, who did it, and under what rights.

This infrastructure enables critical use cases for the agentic web, such as:

AI Disclosure Granularity: Moving beyond binary “AI or not AI” labels to capture the spectrum of AI involvement.

Copyright Protection: Recording types and quantities of input from humans, agents, instruments, or other sources, to establish legal protection for mixed human-AI works, as fully AI-generated works cannot be copyrighted.

Platform Identification: Indicating content boundaries and licensing restrictions while maintaining creator control over broader commercial use.

These capabilities can unlock automated royalty distribution, combat unauthorised training data use, and support new discovery mechanisms between creators and fans. Alongside these market-driven opportunities, regulatory pressure is simultaneously accelerating Content Credential adoption across industries.

The Drivers from Compliance to Opportunity

Steps towards mandating content labelling have already begun. California has proposed legislation that would fine platforms $5,000 a day for failing to label AI content, with implementation targeted for 2026 [8]. The EU is similarly considering disclosure requirements for AI-assisted and generated content for 2026 [9]. These disclosure laws will catalyse C2PA adoption as platforms need the infrastructure to record content provenance and AI involvement to comply with regulations.

Some may see these laws as a regulatory burden, or worry that they will create surveillance infrastructure that forces creators to expose more than they wish, but the technical reality is different. C2PA alone records only the tools used and when, allowing for total anonymity of the creator. CAWG likewise gives creators complete control over what they disclose. The technical architecture enables privacy by letting creators choose which identity signals amplify their message or benefit their attribution goals. There’s no requirement to tie your entire identity to one piece of content.

To further increase creator flexibility, CAWG is now developing a new mechanism called ‘identity hooks’ as a way to delay attribution decisions until creators know which identity signals they need. When creators are authenticated in a phone or editing tool, that system can both sign via C2PA and attest the creator was logged in during the creation process. This establishes a stable anchor at the time of creation that creators can hook back into later when they need to attach a relevant persona or credential. As Andrew Dworschak, Co-founder of Yakoa, says, “[Identity hooks] bring flexibility so that a creator can have maximum optionality down the line when they realise they need [attribution] to support their content flow”.
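
As a rough, entirely hypothetical illustration of the anchoring idea (the identity hooks mechanism itself is still being specified), the sketch below commits to a secret at creation time and lets the creator later prove they held that secret when the content was made:

```python
# Illustrative sketch of the "identity hook" idea: commit to a stable anchor
# at creation time, attach a chosen identity to it later. Not the CAWG spec.
import hashlib
import secrets

# At creation time: the capture/editing tool records a commitment that a
# logged-in creator produced this content, without naming a public persona yet.
content = b"...rendered track or image bytes..."
hook_secret = secrets.token_bytes(32)          # retained privately by the creator
anchor = hashlib.sha256(hook_secret + content).hexdigest()
# `anchor` travels with the content's provenance data.

# Later: the creator reveals the secret alongside a chosen persona or
# credential, proving they held it when the anchor was created.
def hook_matches(secret: bytes, content: bytes, anchor: str) -> bool:
    return hashlib.sha256(secret + content).hexdigest() == anchor

assert hook_matches(hook_secret, content, anchor)
```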

Andrew’s company builds digital rights protection technology for creators and he sees even broader opportunities from Content Credentials. For example, “allowing people to connect with each other in new ways, come to new agreements, and share revenue in ways that are appropriate to them.” Even today, Yakoa’s AI monitoring tool can help to identify where creators haven’t received proper attribution across platforms, shifting the conversation from reactive compliance to proactive rights-management infrastructure.

The Future of Creative Infrastructure

The infrastructure for content authenticity is already evolving. Regulations on content disclosure take effect next year. Major technology providers and platforms have begun adopting the tools. The question isn’t whether this happens but who will shape how it develops.

CAWG’s open and credential-agnostic approach creates infrastructure that serves all creators regardless of size or association. The specifications continue to evolve as new technologies emerge, and new provenance data types continue to be developed. Altogether, the ecosystem is enabling creative control while embracing AI’s collaborative potential.

For creative industries facing AI transformation, engaging with the working groups now means influencing the attribution systems that will eventually be as commonplace as nutrition labels.

Join the conversation:

Participate in DIF’s Creator Assertions Working Group: Help shape the solutions that allow content creators to express individual and organisational intent about their content.

Test the Tools: Experiment with identity assertions and provenance using Contentauthenticity.adobe.com

Join the Content Authenticity Initiative: Help the growing cross-industry ecosystem that is restoring trust and transparency online.

Endnotes

1. McGill, Justin. 2025. https://brandwell.ai/blog/midjourney-statistics/. Brandwell.
2. McGill, Justin. 2025. https://brandwell.ai/blog/midjourney-statistics/. Brandwell.
3. Singla, A., et al. 2025. The State of AI: How organizations are rewiring to capture value. QuantumBlack, AI by McKinsey.
4. Force, Eamonn. 2025. AI, bot farms and innocent indie victims: how music streaming became a hotbed of fraud and fakery. The Guardian.
5. C2PA. C2PA Specifications.
6. CAWG. CAWG Specifications.
7. Youngs, Ian. 2024. FKA Twigs uses AI to create deepfake of herself. BBC UK.
8. Cal Matters. SB 942: California AI Transparency Act.
9. European Union. AI Act: Regulation (EU) 2024/1689.

A huge thank you to Eric Scouten, Scott Perry, Andrew Dworschak, Jeremy Uzan, and Erik Passoja for their time and insights in preparing this article.


Oasis Open

Call for Participation: Open Exposure Management Framework (OEMF) TC

New TC to establish an unbiased, community framework to unite and direct the efforts in preventing, assessing, and resolving exposures in organizational technology.

A new OASIS technical committee is being formed. The Open Exposure Management Framework (OEMF) TC has been proposed by the members of OASIS listed in the charter below.

The TC name, statement of purpose, scope, list of deliverables, audience, IPR mode, and language specified in this proposal will constitute the TC’s official charter. Submissions of technology for consideration by the TC, and the beginning of technical discussions, may occur no sooner than the TC’s first meeting.

The eligibility requirements for becoming a participant in the TC at the first meeting are:

You must be an employee or designee of an OASIS member organization or an individual member of OASIS, and

You must join the Technical Committee, which members may do by clicking here.

To be considered a voting member at the first meeting:

You must join the Technical Committee at least 7 days prior to the first meeting (on or before October 23, 2025); and

You must attend the first meeting of the TC, on October 30, 2025.

Participants also may join the TC at a later time. OASIS and the TC welcome all interested parties.

If your employer is already on the OASIS TC member roster, you may participate in the OEMF TC (or any of our TCs) at no additional cost. Find out how.

If your employer is not a member, we’re happy to help you join OASIS. Contact us to discuss your options for TC membership.

Please feel free to forward this announcement to any other appropriate lists. OASIS is an open standards organization; we encourage your participation.

CALL FOR PARTICIPATION

OASIS Open Exposure Management Framework (OEMF) TC

Section 1: TC Charter

1.a. TC Name
Open Exposure Management Framework (OEMF) TC

1.b. Statement of Purpose
The purpose of the Open Exposure Management Framework (OEMF) is to establish an unbiased, community framework to unite and direct the efforts in preventing, assessing, and resolving exposures in organizational technology. 

The need for this framework emerged from a desire of cybersecurity professionals to have a thoughtful, purpose-driven set of parameters for managing exposure.  Some of the motivating forces behind creating the OEMF are: 

– An aspiration to accommodate security domains such as Vulnerability Management and Cloud Security in a more detailed way than existing cybersecurity frameworks currently do. 
– An opportunity to standardize and structure how technology exposures are defined, discovered, prioritized, and acted upon. 
– A drive to include and focus on critically important upstream activities that prevent technology exposures. 
– A desire to outline tactical guidance around the processes and technologies that intersect Exposure Management. 
– A present need for an independent, industry accepted scale for measuring Exposure Management maturity. 
– A present need to define best practices and terminology related to Exposure Management in a manner that is agnostic of specific vendor technologies.

Major Goals (see Section 5 for more detailed explanations and timelines): 
1. Propose a functional exposure management lifecycle. 
2. Offer practitioners a common set of capability requirements per lifecycle stage. 
3. Map capability requirements to prominent frameworks such as NIST, CIS, Gartner, etc. 
4. Offer the Cybersecurity industry an acceptable maturity scale for Exposure Management. 
5. Provide implementation parameters to achieve each maturity milestone. 
6. Address data inconsistencies between disparate exposure data sources. 
7. Map technology capabilities to the OEMF functional lifecycle.

1.c. Business Benefits
The primary business benefit of the OEMF is to provide organizations (both private and public) with a structured methodology to better avoid and correct the exploitability of their technology footprints. By following the methodology outlined in the OEMF, organizations would benefit through:

– More effectively avoiding the creation of exploitable technology configurations at scale. 
– Becoming more efficient in discovering, prioritizing, and resolving technology exposures. 
– Maximizing limited technology and human resources on the Exposure Management activities that most significantly reduce organizational susceptibility.
– Making better use of exposure data organizations may already have today. 
– Being enabled to make more educated and effective decisions in technology investments and personnel allocation related to Exposure Management programs.

1.d. Scope
The primary scope is to equip the Cybersecurity community with a series of best practices around Exposure Management. Following that, the project intends to provide a methodology for Cybersecurity professionals or partners to perform self-assessments of Exposure Management maturity, much like OWASP has done with the Software Assurance Maturity Model. Additionally, the project seeks to develop reference material that Cybersecurity professionals can leverage to tactically drive Exposure Management maturity within their respective organizations.

As such, the main scope of the OEMF is to provide framework documentation and supplemental educational materials such as videos, presentations, images, and templates regarding Exposure Management. 

The OEMF will not produce any software products or engage in any direct commerce with outside entities. The project assumes that the best practices put forth by the OEMF will organically drive an evolution in technology and human capabilities by those who consume the OEMF’s materials.

1.e. Deliverables
Below are the major milestones/deliverables the OEMF is working towards, with target dates for each.

1. Publish the first edition of the Open Exposure Management lifecycle. This lifecycle defines what best practice entails for preventing, assessing, and resolving technology exposure. (estimated December 2025) 

2. Publish a set of capability requirements (both process and technology capabilities) for each OEMF lifecycle stage. (estimated February 2026) 

3. Map the defined capability requirements to controls in prominent Cybersecurity frameworks including NIST CSF, CIS Critical Security Controls, and Gartner CTEM. (estimated February 2026) 

4. Publish an OEMF maturity scale that Cybersecurity professionals can use to self-assess organizational Exposure Management maturity. (estimated May 2026)

5. Provide implementation requirements to achieve each maturity milestone for each OEMF lifecycle stage. Once this is complete, stakeholders will be able to not only understand their maturity but to synthesize their own improvement plans. (estimated June 2026) 

6. Publish a guide to mapping data inconsistencies between Exposure Management data sources, specifically targeting issues with disparate severity scales across different data sources. (estimated November 2026)

1.f. IPR Mode
Non-Assertion Mode

1.g. Audience
The primary stakeholders of the OEMF are Cybersecurity personnel, most notably personas such as Chief Information Security Officers (CISOs), Directors of Security, as well as managers and leads responsible for Vulnerability Management, Application Security, Cloud Security, and Identity Security. Secondary stakeholders would include executive leadership, Risk & Compliance Personnel, and even customers/partners of an organization since a more effective means of reducing technology exposure, and reporting on outstanding exposure, benefits these secondary stakeholders greatly. 

Exposure Management is a domain of Cybersecurity that has fairly consistent relevance across all industries; however, larger enterprises and public entities, as well as organizations that design their own infrastructure and applications, would benefit even further from the OEMF, as those organizations have deeper, more complex Exposure Management considerations and more need for the secure design elements of the framework.

1.h. Language
The primary language of the OEMF TC is English.

Section 2: Additional Information

2.a. Identification of Similar Work
The OEMF seeks to be a supplemental framework that integrates with existing Cybersecurity frameworks and models to achieve two critical outcomes:

1. A unification and detailed direction of “find and fix” security domains such as Vulnerability Management, Application Security, Cloud Security, Software as a Service Security, and Identity Security. 

2. A bridging of best practices in secure design and Cybersecurity to give a consistent approach to preventing exposure in addition to assessing and resolving exposures that occur. 

The OEMF mainly intends to augment and link to existing frameworks and models. For secure design lifecycle phases, mapping will be provided to the tenets of the latest version of OWASP SAMM. For Cybersecurity lifecycle phases, an initial mapping will be provided for the latest version of the NIST CSF, CIS Critical Security Controls, and Gartner’s Continuous Threat Exposure Management framework. These mappings will be a guide that details how each lifecycle stage relates to a lifecycle stage in an existing framework, and which control domain each prescribed capability supports in those frameworks.

The intended outcome is that organizations can still use these prominent frameworks to direct their overall Cybersecurity and Operations programs, but when trying to assess and drive maturity in Exposure Management domains, organizations can “drill in” using the OEMF. The output of an OEMF maturity evaluation can be used to easily update maturity against these existing frameworks thanks to the mappings provided by the project.

2.b. First TC Meeting
The first OEMF TC meeting is expected to take place on/around October 30, 2025 via Zoom.

2.c. Ongoing Meeting Schedule
Virtual meetings are expected to be held weekly through completion of the first deliverable, then likely transition to semi-monthly.

2.d. TC Proposers
Chris Peltz, Guidepoint Security
Bill Olson, Tenable 
Steve Carter, Nucleus 
Nathan Paquin, Guidepoint Security
Christopher Brown, Guidepoint Security
Gavin Millard, Tenable

2.e. Primary Representatives’ Support
I, Chris Peltz, as OASIS primary representative for Guidepoint Security, confirm our support for the OEMF TC and our participants listed above.
I, Bill Olson, as OASIS primary representative for Tenable, confirm our support for the OEMF TC and our participants listed above.

2.f. TC Convener
Chris Peltz, GuidePoint Security, chris.peltz@guidepointsecurity.com

2.g. Anticipated Contributions
The OEMF project is at its inception; there are no preexisting repositories or open source projects to donate.

2.h. FAQ Document
N/A

2.i. Work Product Titles and Acronyms
N/A

The post Call for Participation: Open Exposure Management Framework (OEMF) TC appeared first on OASIS Open.


EdgeSecure

Letter from Forough Ghahramani, Ed.D.

Dear EdgeDiscovery Community,

As the pace of innovation accelerates and transformative technologies like AI and quantum computing reshape the research landscape, this Summer/Fall 2025 issue of EdgeDiscovery invites you to explore the ideas, people, and infrastructures advancing discovery and inclusive innovation across our ecosystem.

We begin with a powerful conversation featuring Dr. Dan Stanzione, Executive Director of the Texas Advanced Computing Center (TACC), whose leadership in open science supercomputing continues to shape national and global capabilities. From Frontera, the fastest university-based supercomputer, to the new NSF-supported Leadership Class Computing Facility (LCCF), Dr. Stanzione offers insights into the future of AI-enabled HPC, the promise of quantum, and the importance of training the next generation of technologists. His thought leadership underscores the critical intersection of infrastructure and impact in advancing science.

We are proud to feature Dr. Ilkay Altintas, Chief Data Science Officer at the San Diego Supercomputer Center and Founding Director of the Societal Computing and Innovation Lab (SCIL), who has built a career at the intersection of computing and societal impact. Her work spans environmental modeling, biomedical data, and advanced cyberinfrastructure, with a strong focus on accessibility, scalability, and responsible innovation. Through programs like WIFIRE, she has pioneered real-time hazard response tools that bridge research and emergency management, while also helping shape a data science workforce grounded in ethical and community-driven practice. At SCIL, she leads efforts to design user-centric, composable platforms that empower domain experts to harness AI and HPC without needing deep technical expertise. Dr. Altintas exemplifies how thoughtful collaboration and infrastructure innovation can drive equity, resilience, and discovery across disciplines.

We also feature a deep dive into the work of Dr. Frank Wuerthwein, Director of the San Diego Supercomputer Center, Executive Director of the Open Science Grid, and Principal Investigator of the National Research Platform (NRP). Dr. Wuerthwein shares his vision for building a scalable and inclusive infrastructure to support AI education and data-intensive science at institutions of all sizes, including community colleges and under-resourced campuses. He emphasizes that advancing education in AI requires not only technological infrastructure, but also social infrastructure, collaborative networks of educators, shared platforms, and aligned curriculum pathways. His efforts to democratize access to computing and integrate hands-on learning from high school to career reskilling reflect the kind of systems thinking needed to close opportunity gaps and scale innovation.

We highlight the contributions of Dr. Manish Parashar, Director of the Scientific Computing and Imaging (SCI) Institute and Inaugural Chief Artificial Intelligence Officer at the University of Utah, whose career has shaped national conversations around responsible AI, data sharing, research infrastructure, and policy. A mentor to many, including me during his time at Rutgers, Dr. Parashar has provided vision and collaborative leadership that laid the foundation for initiatives such as the National Data Platform and the Virtual Data Collaboratory.

From national centers to regional catalysts, we spotlight Dr. Michael Johnson, President of the New Jersey Innovation Institute (NJII), whose work is redefining how higher education intersects with industry to translate academic innovation into real-world impact. Drawing on his entrepreneurial background and translational research experience, Dr. Johnson has positioned NJII as a nimble, mission-driven organization accelerating AI and EdTech innovation, expanding access for small and mid-size businesses, and addressing workforce development through pragmatic, cost-effective solutions.

Our feature on SHI International’s AI & Cyber Labs gives readers a glimpse inside one of New Jersey’s most exciting new facilities for enterprise AI adoption and experimentation. Through an interview with Lee Ziliak, SHI’s Field CTO, we explore their work with NVIDIA, aspirations for quantum engagement, and commitment to partnering with higher education institutions to accelerate responsible innovation.

These articles reflect the power of voices influencing national conversations. As regional research and education networks like Edge work to bridge gaps and build ecosystems, we’re reminded that partnerships, across disciplines, sectors, and geography, are essential to unlocking equitable access to advanced technologies.

As always, EdgeDiscovery is not just a publication, it is an evolving platform for dialogue, collaboration, and community-building. I hope this issue sparks new ideas and inspires deeper engagement as we continue to build a future grounded in innovation, access, and purpose.

With appreciation,

Forough Ghahramani, Ed.D.
Assistant Vice President for Research,
Innovation, and Sponsored Programs
Edge

The post Letter from Forough Ghahramani, Ed.D. appeared first on Edge, the Nation's Nonprofit Technology Consortium.


Digital ID for Canadians

Recognizing Quebec’s Digital Trust and Cybersecurity Leadership in Kananaskis

October 2, 2025 DIACC is proud to recognize Quebec’s leadership during the recent Federal, Provincial and Territorial Ministers’ and Deputy Ministers’ Seminar on Digital Trust…

October 2, 2025

DIACC is proud to recognize Quebec’s leadership during the recent Federal, Provincial and Territorial Ministers’ and Deputy Ministers’ Seminar on Digital Trust and Cybersecurity in Kananaskis, Alberta. As co-chair, Quebec’s Deputy Minister of Cyber and Digital Affairs and Chief Information Officer played a key role in driving progress, resulting in a collaborative agreement on crucial cybersecurity proposals.

This strategic approach to digital trust demonstrates Quebec’s efforts to create more secure and responsive government services for its citizens and businesses. The province has taken a forward-thinking approach to solving complex digital challenges by uniting teams and platforms under a single, reliable framework. This creates a streamlined “digital front door” for everyone, with features like “tell us once” and coordinated service delivery.

DIACC looks forward to a continued partnership supporting Quebec’s digital trust leadership. By building on these foundations, Quebec is strengthening its reputation as a pioneer in digital innovation. This ongoing work empowers Quebecers, strengthens local businesses, and fuels innovation across the province and Canada.

Joni Brennan
President, DIACC


EdgeSecure

Leveraging and Managing AI in Education Today and into the Future

Leveraging and Managing AI in Education Today and into the Future

A conversation with Forough Ghahramani and Florence Hudson - originally published on the Springer Nature Research Communities website on September 24, 2025.

What drew you each to such varied topics of work/study and how did you find yourself where you are today?

Forough Ghahramani: I’ve always followed my curiosity, and that’s taken me through a varied journey across biology, math, computer science, software engineering, academia, entrepreneurship, and now into AI and quantum. I started out passionate about science, biology and math especially, and added computer science during my graduate program, which opened the door to early work in high-performance computing. I still remember using punch cards on the IBM S/360, then watching computing evolve rapidly, from minicomputers during graduate school to 64-bit systems, open source computing environments, the internet, and search engines. My early career included operating system development for the proprietary Virtual Memory Management System (VMS) at Digital Equipment Corporation (DEC); I then moved on to Unix engineering, performance management and benchmarking, and migrating applications from 32-bit to 64-bit systems.

One of the most rewarding phases of my career was as a technical consultant, where I got to see how the systems I helped build were applied across industries including pharmaceuticals, biotech, healthcare, finance, manufacturing, and even steel manufacturing. Working as a systems architect on the Human Genome Project brought everything full circle. It tied my backgrounds in biology, mathematics, and computing into one meaningful direction. I became fascinated by the field of bioinformatics, and I fully reinvented myself in that area. I went from being a traditional software engineer to running my own biotechnology consulting company, which exposed me to the world of entrepreneurship and its unique challenges and rewards.

Over time, I came to view technology as much more than a tool, it became a vehicle for discovery, innovation, and transformation. That entrepreneurial spirit eventually led me to academia, where I found joy in teaching, mentoring, and launching new programs that bridge industry and education. I’ve always been driven by a love of learning and innovation, and even when career shifts weren’t intentional, sometimes guided by industry shifts, life phases or family needs, they added depth and diversity to my skill set and opened my interest to new areas.

I went back to school a couple of times: once earlier in my career to get my MBA, and then later in life I earned my doctorate, something that brought together my leadership work in higher education and my lifelong commitment to continuous growth. I first learned about AI in the 1980s and have worked with big data and HPC for over 30 years, but what fascinates me today is seeing AI enter the mainstream and imagining its future alongside quantum technologies. My experience in both industry and higher education, always on the leading edge, has allowed me to live four very different careers, and I’m still energized by what lies ahead.

In my current role as Vice President for Research & Innovation at NJ Edge, I work with higher education leaders as they develop strategies to support research through advanced and emerging technologies, including AI, high performance computing, and quantum.

While much has changed in my career journey, what has stayed constant includes a problem-solving mindset, a hunger to grow, and a strong sense of what matters to me at any given time. Education has played a central role in shaping opportunities throughout my life, and I’m a firm believer in giving back. As an engineer and advocate, I’ve worked to encourage young people, especially girls, to pursue STEM fields, often speaking to students from K–12 through college to help spark interest and confidence in science and technology. Another aspect of giving back is serving on the advisory boards of two of my alma maters, the Penn State University College of Science and the University of Pennsylvania. Involvement in professional organizations such as IEEE and the Society of Women Engineers has also provided opportunities for community engagement.

Florence Hudson: I always loved math and science from a young age. When I was a young girl my brother would wake me up to watch NASA spaceflight missions on TV which I thought were so cool. One day I asked “how do they do that?” That’s when I began thinking like an engineer.

As an engineer and a scientist, I have insatiable curiosity, plus I love to create things and fix things whether for business, technology, research, government, policy, humans, or society. Basically, I follow my curiosity, identify challenges to address, and apply my thinking and all types of technology to help solve problems. It’s a never-ending opportunity as the problems change as do the technologies and solutions available to address them. I believe our opportunity while we are on this earth is to identify the unique gifts we each have and use them for good everyday. That is what I strive to do across all domains that interest me, from data science to all types of engineering, sciences, knowledge networking, cybersecurity, standards, societal challenges, education, outreach and more.

As my educational and professional careers unfolded, I worked for NASA and the Grumman Aerospace Corporation while earning my aerospace engineering degree. I loved aerospace engineering, but the lead time from research to launch was decades and funding was declining. Computing and information technology was growing, so I expected that computers would run the world some day and I went into computing. My first job in computing was at Hewlett Packard. Then I enjoyed a long career at IBM where I was able to apply technology to all sorts of societal, business and technical challenges from an initial sales role to eventually becoming an IBM Vice President of Strategy and Marketing and Chief Technology Officer.

After retiring from IBM in 2015, I became a Senior Vice President and Chief Innovation Officer at Internet2 in the research and education world, and then worked for the NSF Cybersecurity Center of Excellence at Indiana University. In 2020 Columbia University asked me to lead the Northeast Big Data Innovation Hub after I had been on the advisory board since 2015 working on their overall strategy and cybersecurity initiatives, so it was a natural fit to become Executive Director. I had also started my own consulting firm (FDHint, LLC) as CIOs were asking me to consult with them. I have also served on over 18 corporate, advisory and steering boards - from NASDAQ-listed companies to academic and non-profit entities.

My passion for cybersecurity is a key focus of mine. It started in my early days as an aerospace engineer working on defense projects. At IBM I worked on security initiatives in servers and solutions, and continued the focus working for the NSF cybersecurity center of excellence at Indiana University. This led to my leading the development of the IEEE TIPPSS standard to improve Trust, Identity, Privacy, Protection, Safety and Security for clinical IoT (Internet of Things) which won the IEEE Emerging Technology Award in 2024. Springer has published two of my books on TIPPSS.  I am currently Vice Chair of the IEEE Engineering in Medicine and Biology Society Standards Committee, and lead a TIPPSS roadmap task group which has spawned a new IEEE standard working group on AI-based coaching for healthcare with TIPPSS - Trust, Identity, Privacy, Protection, Safety and Security. TIPPSS is being applied in other domain areas as well, including large experimental physics control systems, energy grids and distributed energy resources. TIPPSS is envisioned to apply to all cyberphysical systems.

In what ways do you think your own educational/academic/career path might have been different if you started in today’s climate?

Forough Ghahramani: If I were starting my academic and professional journey today, I think it would have looked quite different, maybe not in direction, but in pace, access, and mindset. When I was starting out, computing was a specialized, niche field. Physical access to machines, time on shared systems, and a lot of patience were necessary. Today, a high school student can access cloud-based computing resources, learn to code from YouTube, and contribute to open-source projects from their bedroom. That kind of accessibility changes everything. With AI, cloud computing, and real-time collaboration platforms now core to both education and work, the barriers to accessing knowledge and innovating early have dramatically lowered.

With today’s startup opportunities, accelerators, and online communities, I probably would have embraced entrepreneurship sooner. I also imagine I would have engaged with more interdisciplinary learning earlier on, because today’s educational environment really encourages learning across domains. AI, data science, and quantum computing would have pulled me in even faster given my background and propensity, but I would have had to be more intentional about focus, given today’s information overload can be overwhelming.

I think my motivation and values would be the same. I’ve always been driven by curiosity and the desire to connect ideas across fields. What has changed is that today’s climate rewards that type of thinking more openly, and it provides more tools to act on it faster.

Florence Hudson: I think if I were to start my educational and professional career today I might have stayed in aerospace engineering longer as there are many more job opportunities with government and commercial space organizations, and faster transition of the research to practice. When I was an Aerospace and Mechanical Engineer at Princeton University and was working on future missions around Jupiter during a NASA summer internship, they said my summer internship project would take 18 to 20 years to come to fruition. That’s a long time! That’s when I decided to go into Information Technology (IT). Now there is a much faster path from research to execution in aerospace, and many more jobs, thereby broadening and accelerating opportunity and impact.

Being involved in both technology and education, do you see more risk with the technology itself (misinformation, bugs, security) or how it is applied in the educational landscape (with complicated policies, uneven funding, inequalities)? Or a combination?

There is risk both in the AI technology itself and in how it is used in the educational landscape.

To think more broadly, we must consider that the educational landscape of AI includes everyday use and education for all citizens - not just educational institutions. Openly available AI-enabled systems, from Large Language Models (LLMs) like ChatGPT to everyday devices using AI to make suggestions that may be incorrect, are affecting the education of our citizens, students, teachers and professionals. If an educator, professional or parent is provided incorrect information and they teach others or take actions with incorrect information, the incorrect recommendations by AI can have a broad negative impact. We must aspire to limit that negative impact.

There is also a risk of users sharing information with AI tools that is meant to be kept private, whether they are private citizens or professionals in industry or government. AI tools may add the information contained in users’ questions to the corpus of content used to answer questions for other users, thereby risking the privacy and security of shared information. This risk applies to all humans and institutions asking questions of AI tools, as their questions provide context and content that can be used by the AI tool more broadly.

In educational systems and institutions, AI has the risk of providing incorrect information so students and teachers may be learning things incorrectly, which will proliferate to others they speak with or teach. AI is creating a false sense of comfort that it knows the right answer, without people questioning or vetting it. It makes it easy for people to stop thinking. Many people want to let the AI think “for” them, and many people do not bother to check if it is right or wrong. This is a real danger.

Technology, by itself, can be flawed, but the risks can be managed with good design, robust testing, responsible development and ongoing management. A key concern arises when powerful technologies are layered on outdated systems, aging infrastructure, or unclear policies.

While we must continue to improve the technology itself, we also need to focus on the human, structural, and policy dimensions that determine whether technology helps or harms. If AI is deployed without thoughtful design, sound policy, and educator involvement, it can do more harm than good. The challenge isn’t just what AI can do; it is also who gets to use it and for what purpose.

Like any technology, there will be bugs and problems, but it’s when we abuse the power of AI that the risk to humans and institutions increases.

What, briefly, is the big picture landscape of AI and education, including key strengths and risks?

AI is being used in education already, by students, teachers, and administrators. Like any tool, it can be used for good or for bad.

AI is transforming education at every level, from K–12 classrooms to higher education and workforce training, by introducing new possibilities for personalization, real-time support, availability, and scalability across the broad ecosystem of educational systems and institutions. Key strengths include AI’s ability to deliver adaptive learning experiences tailored to each student’s pace and style, automate time-consuming tasks like grading or feedback, and reveal data-driven insights that help educators intervene earlier and more effectively in student learning journeys. AI can provide a quick synopsis that helps students and teachers ingest content, translate content across languages, generate visualizations to support complex thinking, and serve as a tutor, coach, or creative collaborator. It can enable teachers and administrators to analyze student and school data and metadata to identify patterns, anomalies, and opportunities, make better decisions, and improve processes that enable student success.

But alongside these strengths are real risks. There are real concerns about authorship, academic integrity, privacy, and surveillance, especially when student data is collected without transparency or used to make high-impact decisions. The ease of generating text or code with AI raises philosophical and practical questions about what it means to learn, think critically, or create original work in an AI-augmented world. There's also the risk of over-reliance: students and educators may become dependent on AI to the point that foundational skills erode or motivation diminishes. It may also enable students, teachers, and administrators to disconnect from the content and make less informed or less human-centered decisions.

Striking the right balance means centering human agency and pedagogy in the design and deployment of AI tools. AI should serve as a support mechanism, not a substitute, for the relational, reflective, and exploratory aspects of education. This requires thoughtful policies, transparent use guidelines, educator training, and practical design that anticipates and avoids unintended consequences.

In what ways can AI be used to enhance/encourage learning rather than give students a way around it?

AI can be a powerful cognitive companion when integrated thoughtfully into the learning process. Rather than serving as a shortcut to answers, it can enhance learning by helping students form better questions, explore multiple perspectives, visualize abstract or complex ideas, and engage in iterative practice with immediate, personalized feedback. For example, intelligent tutoring systems can walk students through problem-solving steps, while AI writing tools can offer style and grammar feedback that encourages revision rather than doing the writing for them.

The real shift lies in moving away from a transactional learning mindset, where students are focused on getting the answer as efficiently or quickly as possible, toward a collaborative learning mindset, where AI acts as a coach, partner, or creative assistant in the learning process. In this context, students are not passive recipients of knowledge but active participants in the construction of their understanding. AI tools can model Socratic questioning, recommend readings based on prior gaps, or simulate real-world scenarios for application of skills.

When used this way, AI doesn’t replace learning, it scaffolds it. It gives learners room to explore, fail safely, reflect, and try again. That’s not just about keeping students honest, it’s about keeping them engaged, curious, and confident in their capacity to learn and grow.

How has AI impacted students’ attitudes toward education (from K–12 to higher ed)? Do they feel it’s less relevant? Or are they excited because it’s a tool they can harness?

Students’ reactions to AI in education are mixed. Some see it as a big benefit in reducing their effort, thereby diminishing the perceived value of their own effort (“Why write when AI can do it?”), while others see it as a superpower that enhances their creativity and efficiency. There is some skepticism around the use of AI by educators in the classroom. Much depends on how schools and educators frame AI: not as a crutch, but as a catalyst for inquiry, reflection, and application.

What areas outside of paper writing are changing and in what ways? 

AI is reshaping how students approach nearly every part of academic life. Lecture transcription and summarization tools (e.g., Otter.ai), AI-powered flashcard generators, and group collaboration platforms with embedded AI assistants are streamlining notetaking, study sessions, and project work. The learning ecosystem is becoming more modular, on-demand, and scaffolded by intelligent systems.

AI is changing how educators approach their roles as well. Some educators are requiring in-person, handwritten tests in the classroom to avoid the use of AI and to ensure they know what students are actually learning and understanding. The use of AI can limit critical thinking, which is a risk to society, academia, and science. Managing the use of AI to ensure real learning may be an ongoing challenge into the future.

How has AI changed how students attend lectures and take notes, study, do group work, etc.? 

AI is rapidly reshaping how students engage with learning, from the way they attend lectures to how they study and collaborate. Tools like Otter.ai and Notion AI allow students to focus on understanding rather than taking frantic notes, offering real-time transcription, summarization, and translation to support diverse learners. AI-enhanced note-taking apps can organize content, generate highlights, and even answer follow-up questions, turning notes into interactive study companions. When it comes to studying, platforms like Khanmigo and Quizlet deliver personalized learning experiences by creating adaptive quizzes, tutoring simulations, and targeted study plans based on students' evolving needs.

Group work has also become more efficient with the help of AI-powered tools that support brainstorming, project planning, and communication, especially in remote or multilingual settings. Perhaps the most significant shift is in mindset: with AI handling many of the routine academic tasks, students are free to focus on deeper learning, critical thinking, and strategic problem-solving. Ensuring consistent and broad availability of AI tools, training, and infrastructure is essential to enable these advancements to enhance learning for all students.

How can AI be implemented without widening the digital divide between well-resourced and under-resourced schools?

To implement AI in education without widening the digital divide, we need to treat broad availability as a design requirement, not an afterthought. AI tools need to be available to the broad community of schools and learners, whether they have ample or limited resources in terms of high-speed internet, infrastructure, and trained staff, or we risk creating uneven opportunities for growth across the student population. A set of suggested actions is included below.

Prioritize broad availability and low-bandwidth tools - Develop and adopt AI tools that work offline or with minimal internet connectivity. Many students still rely on shared devices or limited data plans, so tools must be optimized for use on mobile devices, with offline functionality, and in resource-constrained environments. Open-source platforms and lightweight AI models can play a critical role here.

Invest in educator training across all settings - Professional development opportunities must be extended to educators across the broad landscape of schools, both well-resourced and under-resourced, so they all have the opportunity to understand, evaluate, and effectively use AI. It’s not just about the tools, it’s about empowering educators to integrate them meaningfully and thoughtfully into their classrooms.

Embed broad AI enablement in policy and funding - Policymakers and funders could tie grants and procurement to technology goals across a wide array of schools and communities to incentivize AI use and adoption. For example, federal and state programs could subsidize AI deployments or provide incentives for companies to co-design tools in a range of communities.

Promote Public-Private Partnerships (PPPs) in and across communities - AI adoption should be accompanied by partnerships that bring together schools, community organizations, libraries, universities, and industry. These partnerships can support infrastructure upgrades, shared use of cloud resources, or mentorship programs that extend beyond the school walls.

Focus on student-centered AI - Instead of deploying AI only for administrative efficiency (e.g., grading automation or test prep), educational institutions and funders could invest in tools that support learner growth, curiosity, and agency, tools that work just as well for a student in a rural district as for one in a top-performing urban school.

In summary, if we approach AI as a tool for both effectiveness and efficiency, and ensure community voices are part of the process from the beginning, it can help close, not widen, the digital divide.

This Nature article discusses using a document’s version history to deter AI usage in writing. Have you heard of other techniques or ideas involving technology? 

The method proposed in the Nature article involves reviewing incremental edits over time, which can reveal whether a document was developed iteratively or arrived as a fully polished piece, a potential flag for large language model use. Version history is just one part of a growing set of tools. Other techniques span technological, pedagogical, and procedural approaches.

Technological approaches use software and systems to detect or deter AI-generated content by analyzing how text is created or submitted. Examples include Turnitin’s AI detection, which flags plagiarism and likely AI use, and watermarking approaches, such as OpenAI’s research into embedding subtle statistical signatures in generated text.

In the pedagogical approach, educators redesign assignments and assessments to emphasize critical thinking, originality, and personal connection, which are harder for AI to simulate. Students are taught how to use AI responsibly as a learning enhancer. Examples include Otter.ai for lecture summarization and study support, and custom AI Reflection Assignments such as comparing ChatGPT outputs with human-written drafts.

Procedural approaches include institutional or classroom policies that govern when and how AI can be used, often relying on transparency, documentation, and updated honor codes. Canvas LMS with audit trail and version control features is one example.
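
To make the version-history idea concrete, here is a minimal sketch in Python of the kind of heuristic such a tool might apply, flagging a document when a single revision contributes most of the final text. The snapshot data and threshold are hypothetical, and a signal like this would only ever be one input among many, not a verdict on its own.

# Hypothetical revision snapshots: (timestamp, total character count),
# as might be exported from a document platform's version history.
snapshots = [
    ("2025-03-01T09:00", 0),
    ("2025-03-01T09:40", 850),
    ("2025-03-02T10:15", 1600),
    ("2025-03-03T08:30", 5400),  # large single-revision jump
]

def flag_bulk_paste(snapshots, threshold=0.6):
    """Flag the document if any single revision adds more than
    `threshold` of the final length - a possible, but not conclusive,
    sign that polished text was pasted in rather than drafted."""
    final_len = snapshots[-1][1]
    if final_len == 0:
        return False
    prev = 0
    for _, length in snapshots:
        if (length - prev) / final_len > threshold:
            return True
        prev = length
    return False

print(flag_bulk_paste(snapshots))  # True for the sample above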

While advances are being made in detection tools, none are foolproof. False positives can create ethical dilemmas, especially when punishments are imposed without clear evidence. Institutions will need clear, transparent, and fair AI-use policies, combined with student education and faculty development.

As AI becomes widely adopted across industries, education will ultimately need to shift from a suspicious stance to one of guided integration. Maintaining integrity may involve detection and deterrence; however, approaches will also need to include trust-building and authentic assessment.

Can you share an example of a study or project where AI significantly improved learning outcomes?

The University of Michigan's “Maizey”, a customized generative AI tool, is trained on specific course materials to provide personalized assistance to students. Positive results in student performance and engagement have been reported, and the tool has increased efficiency for both instructors and students. For example, in an Operations Management course at the University of Michigan Ross School of Business, the tool saved instructors 5 to 12 hours of question-answering each week. For students, it provided the ability to ask questions beyond the classroom or a professor’s office hours. Self-reported surveys showed improvements in assignment and quiz scores. This is a small but significant step in scaling personalized support.

Should English and coding still be taught in the same format? For instance, how would one teach a CS student the value/quality of code when it’s generated by AI? Will the field/study be more about prompts than writing code?

While English and coding will continue to be foundational, how they are taught needs to evolve.

The shift for English instruction may involve multiple facets. For instance, we envision a move toward developing skills in discovering available information with AI tools and vetting it for accuracy. Beyond leveraging available information, greater focus on creating and producing new information, learning to make informed judgments as users of information, leveraging AI for writing with critical oversight, and writing ethically will be important. With AI bringing basic information to our fingertips, an increased focus on creative thinking and creative information development, analysis, data storytelling, and data visualization will be valuable.

For coding, the emphasis will need to shift from syntax mastery to problem-solving, critical thinking, and the ability to adapt and improve AI-generated code. The shift will be from writing code from scratch to evaluating, refining, and architecting systems with the assistance of AI. While prompts will be important, training will need to emphasize how to critically assess and improve code rather than simply generate it.
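
As a concrete illustration of that shift (a hypothetical classroom exercise, not an example from the interview), students might be handed AI-generated code and asked to evaluate and harden it rather than write it from scratch:

# Hypothetical exercise: the first function is presented as AI-generated;
# students must spot the edge case it misses and produce a corrected
# version - practicing evaluation and refinement over authorship.

def average(numbers):
    return sum(numbers) / len(numbers)  # fails on an empty list

def average_fixed(numbers):
    """Return the mean of `numbers`, or None for an empty input."""
    if not numbers:
        return None
    return sum(numbers) / len(numbers)

print(average_fixed([]))         # None
print(average_fixed([2, 4, 6]))  # 4.0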

With the tech industry leveraging AI to reduce the cost of employing developers, does this impact young people's interest in studying CS? 

Despite automation and computing advances, and perhaps fueled by them, Computer Science (CS) remains a dynamic field. From the early days when the focus in CS classes was writing a compiler, to the evolving focus on AI and now Quantum Computing, computer science grows with the evolution of innovative technologies and their applications.

Regarding software development, with the advent of automation leveraging AI, some students may gravitate toward areas where they feel human agency remains central, including AI ethics, security, data science, human-computer interaction, data storytelling, and data visualization. Some may shift from coding to prompt engineering, but the underlying logic, structure, and systems thinking are still core competencies.

Will Google and Stackover flow in their current forms become irrelevant? 

Google and Stack Overflow may not vanish, but they will evolve. AI systems trained on forums like Stack Overflow already offer contextualized responses. However, the social and pedagogical value of such platforms, seeing multiple solutions, peer validation, and community norms, remains important. The future may see integration rather than obsolescence.

How do you believe those on the technical side of AI can fight against the threats to education?

Technologists can choose to embrace rather than fight AI, as it is here and will be here for a while. They can choose to embed ethical guardrails into AI tools and their use, advocate for transparent systems, and co-design with educators. They can support open infrastructure and prioritize broad usability. Perhaps most importantly, they must acknowledge that technological literacy is also civic literacy in the AI age.

Perhaps the real opportunity is to see AI as a tool to help critical thinking and creativity grow, using AI tools to provide a baseline in thinking, with humans using that as a springboard for more creative and imaginative thinking and innovation.

How do you envision the educational and academic landscape in 5 years, 10 years, 20 years?

In 5 years: AI will likely be woven seamlessly into the day-to-day fabric of education. AI-powered tools will assist with personalized learning pathways, real-time feedback, and multilingual content delivery. Adaptive platforms will support differentiated instruction, helping students master concepts at their own pace while offering educators rich insights into individual progress. Faculty and students will routinely use AI for brainstorming, tutoring, lab simulations, and writing assistance. Microcredentials and skills-based learning, especially in areas like AI literacy, ethics, and data fluency, will grow rapidly, both inside and alongside traditional degree programs. Efforts to leverage real insights to improve and further the science of AI will grow, and the application of AI in science, engineering, and other disciplines will increase.

Trust in AI and its use in education will likely have to evolve in order to harness its value and mitigate the risks it introduces. As mentioned above, some educators are requiring in-person, handwritten tests in the classroom to confirm what students are actually learning and understanding rather than relying on AI to answer the tests. Managing the use of AI to ensure real learning may be an ongoing challenge into the future.

In 10 years: The classroom may be less bound by physical walls or static schedules. We could see AI agents co-teaching with human instructors, managing formative assessments, generating tailored lesson variations, and supporting students in multiple languages and learning styles. Augmented Reality (AR) and Virtual Reality (VR) will likely be commonplace in STEM labs, medical training, and arts education, offering fully immersive simulations and collaborative experiences. Interdisciplinary programs that combine computing, humanities, ethics, and policy will become the norm, responding to the needs of an AI-shaped world. Institutions may start awarding modular degrees that reflect personalized learning trajectories, not just traditional majors.

In 20 years: The boundary between formal and informal education may blur almost completely. AI-powered tutors will likely be embedded in the tools and environments students use daily, including their personal digital devices such as smartphones, wearables, home assistants, or AR glasses. Learning may happen anywhere, anytime, guided by intelligent agents that adapt not just to what learners know, but how they feel, what motivates them, and where they struggle. Credentials may shift from degrees tied to credit hours to skills portfolios based on demonstrated mastery, verified through performance in real-world simulations or digital apprenticeships. Lifelong learning will no longer be optional. It will be dynamically integrated into professional life through just-in-time learning pathways driven by AI.

Throughout this transformation, human educators will remain essential. Their roles may evolve from content deliverers to mentors, curators, and ethical stewards of technology, but their presence will be more critical than ever in guiding values, fostering community, and ensuring that learning remains deeply human, not just algorithmic.

How can research publishers help in AI and education?

Broadly, research publishers can help in AI and education by inspiring and publishing all sides of the AI story - the good, the bad, and the ugly - like Springer did with this invited blog. Allow everyone to learn from others through your publications.

Research publishers also have a role in ensuring that AI is used responsibly in scholarly communication, including setting norms around disclosure of AI usage, enabling reproducibility through shared datasets and code, and fostering interdisciplinary research that explores AI’s impact on pedagogy, as well as on the people, institutions, and systems that use AI or may be impacted by its use.

One question above was written by AI. Can you guess which one?

We are not sure, however, this seems like a fitting end. AI is now both the tool and the topic, the assistant and the questioner. And perhaps that’s the most important takeaway: we are all co-authors in this unfolding story.

A guess is the “Google and Stackover flow” question based on the fact that it should be Stack Overflow.

Florence Hudson is Executive Director of the Northeast Big Data Innovation Hub at Columbia University and Founder & CEO of FDHint, LLC, a global advanced technology consulting firm. A former IBM Vice President and Chief Technology Officer, Internet2 Senior Vice President & Chief Innovation Officer, Special Advisor for the NSF Cybersecurity Center of Excellence, and aerospace engineer at the NASA Jet Propulsion Lab and Grumman Aerospace Corporation, she is an Editor and Author for Springer, Elsevier, Wiley, IEEE, and other publications. She leads the development of global IEEE/UL standards to increase Trust, Identity, Privacy, Protection, Safety and Security (TIPPSS) for connected healthcare data and devices and other cyber-physical systems, and is Vice Chair of the IEEE Engineering in Medicine & Biology Society Standards Committee. She earned her Mechanical and Aerospace Engineering degree from Princeton University, and executive education certificates from Harvard Business School and Columbia University.

Forough Ghahramani is Vice President for Research and Innovation for New Jersey Edge (Edge). As chief advocate for research and discovery, Forough serves as an advisor and counsel to senior higher education leaders, helping translate a vision for supporting research collaborations and innovation into actionable Advanced CI strategy leveraging regional and national advanced technology resources. Forough was previously at Rutgers University, providing executive management for the Rutgers Discovery Informatics Institute (RDI2), working with Dr. Manish Parashar (Director). Forough's experience in higher education also includes previously serving as associate dean and department chair. Prior to joining academia, she held senior-level engineering and management positions at Digital Equipment Corporation and Hewlett Packard (HP), and consulted for Fortune 500 companies in high performance computing environments. Forough is a Senior Member of IEEE, has an appointment to the NSF Engineering Research Visioning Alliance (ERVA) Standing Council, is a Vice President for NJBDA's Research Collaboration committee, and serves on the Northeast Big Data Innovation Hub and the Ecosystem for Research Networking (ERN) Steering committees. Forough has a doctorate in Higher Education Management from the University of Pennsylvania, an MBA in Marketing from DePaul University, an MS in Computer Science from Villanova University, and a BS in Mathematics with a minor in Biology from Pennsylvania State University. Forough is consulted at the state, national, and international levels on STEM workforce development strategies. She is currently a Principal Investigator on two NSF-funded projects: “EAGER: Empowering the AI Research Community through Facilitation, Access, and Collaboration” and “CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE)”, and a co-PI on the NSF ADVANCE Partnership: “New Jersey Equity in Commercialization Collective.” She previously served as a co-PI on the NSF CC* OAC: Planning “Advanced Cyberinfrastructure for Teaching and Research at Rowan University and the Southern New Jersey Region” and the NSF CCRI: Planning “A Community Research Infrastructure for Integrated AI-Enabled Malware and Network Data Analysis”.

The original article can be found here »

The post Leveraging and Managing AI in Education Today and into the Future appeared first on Edge, the Nation's Nonprofit Technology Consortium.


FIDO Alliance

IDAC Podcast: Going Passkey Phishing with Nishant Kaushik, FIDO Alliance

In this episode of the Identity at the Center podcast, Jeff and Jim discuss various aspects of identity access management (IAM) policies and the importance of having a solid foundation. […]

In this episode of the Identity at the Center podcast, Jeff and Jim discuss various aspects of identity access management (IAM) policies and the importance of having a solid foundation. They emphasize the need for automation, controls, and how IAM policies should be created without technology limitations in mind. The discussion also covers the implementation challenges and the evolving concept of identity verification. Jeff, Jim, and their guest, Nishant Kaushik, the new CTO at the FIDO Alliance, also delve into the issues surrounding the adoption of passkeys, highlighted by Rusty Deaton’s IDPro article, and address some common concerns about their security. Nishant offers insights into ongoing work at FIDO Alliance, the potential of digital identity, and the importance of community in the identity sector. The episode concludes with mentions of upcoming conferences and an homage to the late identity expert, Andrew Nash.


EdgeSecure

Edge Announces Leadership Transition: Christopher R. Markham, Ph.D.(c) Named Interim President and CEO

Edge Announces Leadership Transition: Christopher R. Markham, Ph.D.(c) Named Interim President and CEO NEWARK, NEW JERSEY, October 1, 2025 – Edge, a leading member-owned nonprofit provider of high-performance optical fiber… The post Edge Announces Leadership Transition: Christopher R. Markham, Ph.D.(c) Named Interim President and CEO appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Edge Announces Leadership Transition: Christopher R. Markham, Ph.D.(c) Named Interim President and CEO

NEWARK, NEW JERSEY, October 1, 2025 – Edge, a leading member-owned nonprofit provider of high-performance optical fiber networking and advanced technology solutions, announced today the retirement of President and CEO Samuel Conn, Ph.D., effective September 26, 2025, and the appointment of Christopher R. Markham, Ph.D.(c) as Interim President and Chief Executive Officer.

Dr. Conn, who joined Edge in September 2016, has guided the organization through a transformative period of growth and innovation. During his tenure, he leveraged more than 35 years of combined military, professional, private sector, and academic experience to advance Edge's mission of empowering research, education, and economic development through cutting-edge technology infrastructure. Under his leadership, Edge solidified its position as a national model of excellence for purpose-built research and education networks, expanding its reach across the nation while maintaining its commitment to affordability, reliability, and thought leadership.

"Sam's visionary leadership has been instrumental in positioning Edge as a trusted partner for institutions seeking to harness the potential of digital networks to advance teaching, learning, and research. His dedication to innovation and member service has left an indelible mark on our organization and the communities we serve. We extend our deepest gratitude for his years of distinguished service and wish him well in his retirement."

– Dr. Steven Rose
Chairman of Edge's Board of Directors

"Sam's visionary leadership has been instrumental in positioning Edge as a trusted partner for institutions seeking to harness the potential of digital networks to advance teaching, learning, and research," said Dr. Steven Rose, Chairman of Edge's Board of Directors. "His dedication to innovation and member service has left an indelible mark on our organization and the communities we serve. We extend our deepest gratitude for his years of distinguished service and wish him well in his retirement."

Christopher R. Markham brings over 25 years of executive leadership experience spanning higher education, research networks, military, and the private sector to his new role. Since joining Edge in 2018 and most recently serving as Executive Vice President of Operations & Chief Economic Development Officer, Markham has played a central role in the organization's transformation into one of the largest and most respected research and education networks in the Northeast.

Working in close collaboration with Edge's leadership team, Markham has helped advance the expansion of multi-state GigaPOP connectivity anchored at Princeton University and Rutgers University, with critical hubs in Philadelphia and Manhattan. These initiatives have enabled high-performance research networking for R1 universities, medical centers, and federal research partners, while elevating Edge's national profile as a trusted leader in the digital infrastructure space.

Markham's extensive background includes two decades of service in the U.S. Army Active Duty and Reserve components from 2000 to 2021, where he progressed from an enlisted technology engineer to a commissioned officer, ultimately leading multi-state operations, fiscal planning, logistics, and organizational readiness. This military experience instilled a values-based, service- and mission-first ethic, paired with a collaborative leadership style that continues to guide his executive approach.

A dedicated scholar and educator, Markham has been actively engaged in academically rigorous, peer-reviewed research dissemination since 2013, with a focus on economic policy, technology, and institutional transformation. His doctoral dissertation, Artificial Intelligence, the Productivity J-Curve, and the Timing of Public Policy Responses, reflects his deep engagement in analyzing the intersection of general purpose technologies, labor markets, and public economic policy. He has also served as an adjunct professor at institutions ranging from community colleges to research universities.

"Edge's mission to empower research, education, and economic development through innovation and collaboration continues to inspire all that we do. Over the past several years, I have had the privilege of working closely with our remarkable team as we've expanded our reach and strengthened our services for members. As Interim President and CEO, I look forward to building on that shared success, fostering new opportunities, and continuing to serve with a focus on excellence and vision."

— Christopher R. Markham, Ph.D.(c)
Interim President and CEO, Edge

"Edge's mission to empower research, education, and economic development through innovation and collaboration continues to inspire all that we do," said Markham. "Over the past several years, I have had the privilege of working closely with our remarkable team as we've expanded our reach and strengthened our services for members. As Interim President and CEO, I look forward to building on that shared success, fostering new opportunities, and continuing to serve with a focus on excellence and vision."

As Interim President and CEO, Markham will provide comprehensive leadership across operational strategy, financial stewardship, digital transformation, and infrastructure modernization, ensuring continuity and sustained excellence in Edge's service to its membership.

About Edge

Edge is a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge's membership spans across the nation, serving colleges and universities, K-12 school districts, government entities, hospital networks, and nonprofit business entities. Edge's common good mission ensures success by empowering members for digital transformation with affordable, reliable, and thought-leading purpose-built advanced connectivity, technologies, and services.

For more information, visit www.njedge.net.

The post Edge Announces Leadership Transition: Christopher R. Markham, Ph.D.(c) Named Interim President and CEO appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Wednesday, 01. October 2025

FIDO Alliance

Ideem: Q/A with Andrew Shikiar, CEO of FIDO

We had the pleasure of sitting down with Andrew Shikiar, CEO of the FIDO Alliance, known for their creation and evangelism of the passkey, the authentication method we’ve all come to know […]

We had the pleasure of sitting down with Andrew Shikiar, CEO of the FIDO Alliance, known for their creation and evangelism of the passkey, the authentication method we’ve all come to know and love. The team here at Ideem is, of course, a huge fan of the passkey and what it has done to revolutionize how people authenticate themselves, and we were honored that Andrew took the time to answer all of our questions about passkeys and banking. That Q&A is below. Of course, if you’re interested in learning more about how Ideem is making passkeys bank-grade, you can learn more at our site.


EdgeSecure

Shaping the Future of Computational Science

Shaping the Future of Computational Science: A Conversation with Dan Stanzione on HPC, AI, and National Research Infrastructure In the rapidly evolving landscape of high performance computing, artificial intelligence, and… The post Shaping the Future of Computational Science appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Shaping the Future of Computational Science: A Conversation with Dan Stanzione on HPC, AI, and National Research Infrastructure

In the rapidly evolving landscape of high performance computing, artificial intelligence, and quantum technologies, few leaders have shaped the trajectory of open science infrastructure as profoundly as Dr. Dan Stanzione. As Executive Director of the Texas Advanced Computing Center (TACC) and Associate Vice President for Research at The University of Texas at Austin, Stanzione has built a career on a foundational principle that has guided supercomputing development for decades.

Stanzione's approach to building large-scale computing infrastructure stems from a hard-learned lesson about putting users first. "Ultimately, we're building large-scale computing to do science," he explains. "It's less about what we, as computing people, might think is cool as the latest and most interesting technology, and more about what is useful for delivering science." This philosophy was crystallized when TACC won a major system competition worth $120 million. After his presentation, another large center director approached him with what he called a backhanded compliment: "Man, I wish I had the guts to be as boring as you were on this design." Stanzione's response was characteristically pragmatic: "We didn't put in any of the newfangled, crazy stuff because it's all more expensive and it doesn't work as well."

This user-centric approach has driven TACC to hire computational scientists rather than traditional IT professionals to lead systems teams. The strategy ensures that infrastructure decisions are driven by what computational scientists actually need to accomplish, as opposed to technological novelty alone.

Building National Research Ecosystems
TACC's influence extends far beyond its physical systems to encompass a national research ecosystem that supports over 3,000 projects annually across 450+ institutions. From research universities to community colleges, TACC provides computational resources that enable both cutting-edge research and workforce development. The center operates on a hub-and-spoke model that recognizes the importance of regional networks and local expertise. "We can scale up big computers to run tens of thousands of users, but it's awfully hard for me to scale the person down the hall from you who you can go ask about stuff," Stanzione explains. This ecosystem approach ensures that computational resources are accessible not just technically, but through human networks of expertise and support.

Regional research and education networks play a crucial role in this ecosystem, providing both the physical infrastructure for data transfer and the human networks necessary for knowledge dissemination. As scientific workflows increasingly rely on remote resources and collaboration, these networks become "basically air"—essential but invisible infrastructure that enables modern research.

“Developing a skilled data science workforce starts with creating learning environments that are inclusive, interdisciplinary, and connected to real-world challenges. The best way to build data science skills is by using them in practice and we must empower people not only to work with data, but to use it ethically and effectively in service of their communities. By co-developing educational resources and tools with the communities, students, researchers, and practitioners don’t just learn from the system, they help shape it. When we co-develop training materials with individuals who represent the needs of their own environments, the solutions and the learning are directly relevant.”

Dan Stanzione, Ph.D., Executive Director of the Texas Advanced Computing Center (TACC) and Associate Vice President for Research at The University of Texas at Austin

Navigating the AI Revolution in Scientific Computing
The explosion of artificial intelligence has fundamentally transformed the computational landscape, creating both unprecedented opportunities and complex challenges for research infrastructure. Stanzione describes the shift from gradual adoption to an "overwhelming avalanche" following ChatGPT's release, forcing centers like TACC to rapidly adapt their systems and services.

"Five, six years ago maybe 40% of our users could use GPUs. Now maybe it's 65%," Stanzione notes. This dramatic shift informed the design of Vista, TACC's AI-centric system built on NVIDIA Grace Hopper architecture, which serves as a bridge to prepare users for the next generation of leadership-class computing. However, the AI revolution presents a deeper challenge for the entire scientific computing ecosystem. "How are we going to keep traditional scientific computing going in a world where all the chips are built for AI?" Stanzione ponders. The answer lies in adapting to commodity AI components, much like the transition from custom supercomputing silicon to commodity microprocessors that began in the 1990s with the Beowulf project.

The fundamental difference between AI and traditional scientific computing lies in precision requirements. While scientific simulations demand 64-bit floating-point accuracy, AI algorithms can operate effectively with 8-bit or even 4-bit precision. This creates both challenges and opportunities. For example, Stanzione explains, "If these things are optimized for 8-bit integers, how do we make it look like we're doing 64-bit floating point?" The solution requires clever algorithms and hardware adaptations that could ultimately deliver superior performance even for traditional scientific workloads.
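
One family of techniques for bridging that gap represents a high-precision value as a sum of lower-precision parts. The Python sketch below is illustrative only; production schemes for running scientific workloads on AI accelerators are far more involved, but the core splitting idea is the same.

import numpy as np

# Represent one float64 value as the sum of a float32 "high" part and a
# float32 "low" correction part, so most of the original accuracy is
# preserved using only lower-precision storage.
def split64(x):
    hi = np.float32(x)                   # leading bits
    lo = np.float32(x - np.float64(hi))  # residual rounding error
    return hi, lo

x = np.float64(1.0) / np.float64(3.0)
hi, lo = split64(x)
recovered = np.float64(hi) + np.float64(lo)
print(abs(recovered - x))  # far smaller than plain float32 rounding error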

The Horizon Supercomputer: Enabling Discovery at Scale
TACC's upcoming Horizon supercomputer, scheduled for deployment in 2026 as part of the National Science Foundation's (NSF) Leadership-Class Computing Facility, represents a quantum leap in computational capability. Expected to deliver 10× the performance of Frontera for traditional workloads and 100× for AI applications, Horizon will feature the largest public deployment of NVIDIA Blackwell processors available to researchers without cloud-like pricing.

Stanzione and his team have already identified 11 flagship projects that will launch with Horizon, spanning astronomy, materials science, molecular dynamics for drug discovery, seismology, and natural disaster prediction. But the most exciting aspect of these systems, he notes, is their unpredictability: "Discovery is by its nature somewhat unpredictable. We will be surprised, and something will happen." This element of surprise has characterized TACC's impact throughout its history. The center has supported work by four Nobel Prize winners, including David Baker from the University of Washington, who has been using TACC systems since 2005 for protein folding research that ultimately contributed to his 2024 Nobel Prize in Chemistry.

The Convergence Challenge: Power, Efficiency, and Innovation
The rapid growth of AI computing has created unprecedented challenges in power consumption and efficiency. Stanzione estimates that current AI development focuses primarily on being first and fastest, with little attention to efficiency—a luxury that scientific computing has never been able to afford. "We've never had the kind of money to throw around the hundreds of billions that they're throwing around in the AI space," he observes.

The solution, Stanzione argues, lies in software optimization rather than simply building more data centers. He points to DeepSeek's breakthrough in early 2025 as a prime example. By focusing intensively on software optimization rather than raw scaling, the company achieved 3-4× performance improvements while using significantly less power. "If your argument is, can I build $400 billion of data centers, or with $10 million in software, where can I make that $200 billion in data centers? It was a pretty obvious answer to me," reflects Stanzione.

The industry faces fundamental limits in both silicon scaling and precision reduction. As transistor features approach atomic scales and precision requirements bottom out, the field must turn to architectural innovations, custom silicon designs, and potentially quantum accelerators to maintain the pace of computational advancement.

Quantum Computing: The Long View
While quantum computing generates significant attention and frequent questions about deployment timelines, Stanzione maintains a characteristically practical perspective. "I always go back to when I have users that actually want to use it," he responds to questions about quantum system deployment. "Right now you deploy big quantum systems to learn a lot about how to run quantum systems, and there's nothing wrong with that, but it doesn't serve my end science users that ultimately pay the bills."

Looking forward, Stanzione sees quantum accelerators as more likely than general-purpose quantum machines in the medium term. This hybrid approach aligns with his user-first philosophy to deploy quantum technologies when they solve real scientific problems more effectively than classical alternatives.

The Workforce Imperative
Perhaps no challenge is more critical than preparing the next generation of researchers and technologists for a world where AI, HPC, and quantum technologies converge. For TACC, workforce development is integral to advancing scientific progress. Many of its 9,000 annual users are first-year graduate students, making continuous education and onboarding essential.

Stanzione frames the workforce challenge in terms of historical precedent, noting that most economic growth over the past century has been driven by technology. From agriculture's transformation through improved productivity to the creation of entirely new industries, technological advancement has consistently created more jobs than it has displaced.

The key insight from this historical perspective is that fundamental research—often with no apparent practical application—ultimately enables transformative innovations. "In 1905 when doing work in relativity, Einstein did not think one day, if I do this right, I'll be able to get a taxi from my phone," Stanzione notes, yet GPS requires relativistic corrections to function accurately.

This long-term view underscores the importance of sustained investment in research infrastructure and education. Stanzione warns that declining government investment since the 1970s threatens the innovation ecosystem that has driven American technological leadership and cautions, "We just look at the last bit of the product and not all the things that it took to get there."


Looking Ahead: Challenges and Opportunities
The coming years will test the scientific computing community's ability to navigate several converging challenges. The commercial value of AI threatens to overwhelm traditional scientific computing through competition for hardware, talent, and attention. Power consumption continues to grow at unsustainable rates. Fundamental limits in silicon scaling and precision reduction approach rapidly.

As Forough Ghahramani, Ed.D., Assistant Vice President for Research, Innovation, and Sponsored Programs at Edge, observes, "Dr. Dan Stanzione's leadership at TACC continues to shape the future of advanced computing in this country. His vision, spanning HPC, AI, and quantum, is driving open science forward at unprecedented scale. As a thought leader and builder of national research infrastructure, his work through systems like Frontera, Vista, and the upcoming Horizon supercomputer reflects an unwavering commitment to accessibility, excellence, and innovation."

Yet Stanzione remains optimistic about the field's ability to adapt and innovate. The same community that successfully transitioned from custom supercomputing hardware to commodity clusters in the 1990s now faces another architectural transition. Success will require the same combination of technological innovation, pragmatic decision-making, and unwavering focus on scientific utility that has characterized high-performance computing's evolution.

For higher education leaders, Stanzione's message is clear: investment in computational infrastructure and workforce development is not optional but essential for maintaining America's position as a global leader in scientific innovation. The discoveries enabled by tomorrow's computational tools may be unpredictable, but the need for those tools is certain.

As TACC prepares for the Horizon era and the broader scientific community grapples with the AI revolution, Dan Stanzione's user-centric philosophy offers a valuable guide, “Stay focused on what serves science, remain adaptable to technological change, and never lose sight of the human element—the researchers, students, and technologists who ultimately determine the impact of any computational infrastructure.”

The future of open science depends not just on building bigger, faster computers, but on building systems that serve the scientists who will use them to unlock the next century of discovery.

The post Shaping the Future of Computational Science appeared first on Edge, the Nation's Nonprofit Technology Consortium.


CMMC on Campus

The post CMMC on Campus appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Accessibility Priorities for 2025

The post Accessibility Priorities for 2025 appeared first on Edge, the Nation's Nonprofit Technology Consortium.

OpenID

OIDF Supports National Strategy on Fraud and Scam Prevention

Today, the Aspen Institute Financial Security Program launched a groundbreaking National Strategy on Fraud and Scam Prevention. The OpenID Foundation was delighted to participate as a task force member, contributing to the effort alongside more than 80 cross-sector partners. This represents the first time such a broad collection of leaders from government, law enforcement, private […] The post OIDF Supports National Strategy on Fraud and Scam Prevention first appeared on OpenID Foundation.

Today, the Aspen Institute Financial Security Program launched a groundbreaking National Strategy on Fraud and Scam Prevention.

The OpenID Foundation was delighted to participate as a task force member, contributing to the effort alongside more than 80 cross-sector partners. This represents the first time such a broad collection of leaders from government, law enforcement, private industry, and civil society have come together in the US to develop a strategy aimed at preventing fraud and scams.

The Foundation commends the rigorous analysis, comprehensive approach and actionable recommendations in the Aspen report. The recommendations include:

Establishing a whole-of-ecosystem response across government, industry, and civil society
Modernizing legal frameworks to enable faster detection, stronger enforcement, and appropriate liability protections
Improving industry coordination to share data, strengthen defences, and reduce scam exposure – including use of standards to accomplish this goal
Supporting victims more effectively, recognizing the real harm these crimes cause
Treating scams as a national security and economic threat, not just a regulatory issue.

As Gail Hodges, the OpenID Foundation’s Executive Director, said, “The OpenID Foundation was delighted to participate as a Task Force member, and is impressed with the breadth of experts Aspen was able to convene to develop the National Strategy on Fraud and Scam Prevention. As a global open standards body, we hope that this report will stimulate timely policies and due diligence on standards that can deliver policy at the protocol level, such as Shared Signals, OpenID for Verifiable Presentation, OpenID for Verifiable Credential Issuance, and OpenID for Identity Assurance. These standards can play a meaningful role in delivering the digital identity infrastructure required to mitigate fraud while serving the public, the government, and the private sector alike.”

Perfect timing 

This report is well timed, as several OpenID Foundation specifications have achieved final status, meaning they are mature and stable for adoption at scale, including by complex and interconnected ecosystems like those in the US. Standards relevant to the Aspen report include Shared Signals 1.0, OpenID for Verifiable Presentation 1.0, OpenID for Verifiable Credential Issuance 1.0, and OpenID for Identity Assurance 1.0. These standards are already playing a meaningful role in delivering digital identity infrastructure, and they are well placed to mitigate fraud while supporting wider ecosystem objectives.

Shared Signals 1.0 Final

Shared Signals enables real-time sharing of security intelligence across platforms. Major implementers, including Apple, Google, Cisco, SailPoint, and Okta, are already deploying these standards. The technology is highlighted on the Gartner Hype Cycle and recommended by CISA.

The momentum continues to build. Google recently announced a new enterprise platform feature using Shared Signals, and the OpenID Foundation is getting ready to conduct a fourth interoperability session during Authenticate in Carlsbad, California, on October 15th.

OpenID Foundation Board member and Shared Signals WG Co-chair, Atul Tulshibagwale, also participated in the Task Force. He added: “The rampant level of online scams has reached an unprecedented and significant scale, severely affecting a large number of people and businesses. The Aspen report’s findings and recommendations are extremely important to our collective defense against fraudsters. We hope that the Shared Signals Framework can serve the US community by offering an open standard that communicates near real-time updates about potentially fraudulent activity. This will enable all participants across industry sectors and between the public and private sectors to be smarter about their decisions.”  
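
For a sense of what sharing a signal looks like in practice, below is an illustrative Security Event Token (SET) payload for a CAEP “session revoked” event, assembled in Python. The issuer, audience, and subject values are hypothetical placeholders, and a real deployment would sign the payload as a JWT before transmitting it.

import json
import time

# Illustrative SET payload for a CAEP session-revoked event as used in
# the Shared Signals Framework; all identifiers are placeholders.
set_payload = {
    "iss": "https://transmitter.example.com",
    "jti": "756E69717565206964656E746966696572",
    "iat": int(time.time()),
    "aud": "https://receiver.example.net",
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "user@example.com"},
            "event_timestamp": int(time.time()),
        }
    },
}

print(json.dumps(set_payload, indent=2))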

OpenID for Verifiable Presentation 1.0 Final

This standard enables secure verification of digital credentials, like mobile driver’s licenses that use the mdoc credential type from ISO/IEC 18013-5 (SC17) and SD-JWTs from the IETF. The Foundation has partnered with NIST on the NCCoE Project on Mobile Driving Licenses for use of mDLs to “open a bank account,” co-hosting eight successful interoperability events on OpenID for Verifiable Presentation (OpenID4VP) and OpenID for Verifiable Credential Issuance (OpenID4VCI) this year before both specs were approved as final specifications.
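
As a rough sketch of how a verifier requests credential data under OpenID4VP 1.0, the snippet below builds a DCQL query asking a wallet for two fields from an ISO/IEC 18013-5 mdoc. The field paths follow the spec’s published examples; a real authorization request also carries a nonce, response parameters, and client metadata, and "mdl" is simply an arbitrary query identifier chosen for illustration.

import json

# Illustrative DCQL query for two mobile driver's license fields.
dcql_query = {
    "credentials": [
        {
            "id": "mdl",
            "format": "mso_mdoc",
            "meta": {"doctype_value": "org.iso.18013.5.1.mDL"},
            "claims": [
                {"path": ["org.iso.18013.5.1", "family_name"]},
                {"path": ["org.iso.18013.5.1", "age_over_21"]},
            ],
        }
    ]
}

print(json.dumps(dcql_query, indent=2))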

Now the Foundation is actively addressing gaps identified by NIST that prevent US financial institutions from adopting and integrating mobile driver’s licenses into their processes to comply with CIP and other US financial regulatory requirements. The OpenID Foundation seeks to support NIST and the US financial ecosystem (e.g., individual banks and the American Bankers Association) to ensure financial institutions understand how mobile driver’s licenses meet CIP/KYC regulations “as is,” and how issuing authorities, wallets, and financial institutions can jointly increase confidence in mDLs to enable more rapid adoption. The OpenID Foundation is working in partnership with NIST to generate targeted proposals on how to close the identified gaps, with the expectation that those proposals will help accelerate US stakeholder consensus; some gaps could be closed by leveraging NIST’s SP 800-63-4 Digital Identity Guidelines and potential additions to the OpenID Foundation’s eKYC and IDA Working Group specifications.

The OpenID4VP standard has already been selected by Google Wallet, Android, Amazon.com, Samsung Wallet, 1Password, NIST for the NCCoE project, the EU for the EU Digital Identity Wallet, Switzerland, the UK, six Western Balkan countries, and MOSIP for their marketplace of open-source code services, and it is live in deployment by the California DMV. By the end of 2027, the OpenID Foundation anticipates 37 countries will be live with OpenID4VC.

OpenID for Verifiable Credential Issuance 1.0 Final

This standard enables public or private sector organizations to securely issue digital credentials into a digital wallet. The OpenID4VCI specification has already been selected by major public and private sector platforms, including Google Wallet, Android, the EU for the EU Digital Wallet, Switzerland, the UK, six Western Balkan countries, and MOSIP for their open-source marketplace, and it is live in deployment with the California DMV.

Global impact and future deployment

The High Assurance Interoperability Profile (HAIP) will reach final status later this year. This profile will become the foundation for Europe’s digital identity infrastructure by the end of next year, and looks set to be adopted by Google, Amazon, 1Password, California, and many others in the USA.

The OpenID Foundation is also preparing a major announcement with an international governmental organization in November to support large scale deployment across the global south.

The Foundation is proud to have contributed to this groundbreaking strategy alongside more than 80 cross-sector partners. Now, with the global standards finalized and gaining global adoption, the OpenID Foundation is moving from strategy development to implementation. 

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net

 

The post OIDF Supports National Strategy on Fraud and Scam Prevention first appeared on OpenID Foundation.

Tuesday, 30. September 2025

Digital Identity NZ

Inspiring Trust across Aotearoa NZ Inc.

The energetic Digital Trust Hui Taumata at Te Papa lit a fire under NZ Inc’s digital identity movement. Industry leaders, policymakers, iwi kaitiaki, and innovators gathered to collectively declare that solving for trust is mission critical to New Zealand’s digital future. The post Inspiring Trust across Aotearoa NZ Inc. appeared first on Digital Identity New Zealand.

Kia ora,

The energetic Digital Trust Hui Taumata at Te Papa lit a fire under NZ Inc’s digital identity movement. Industry leaders, policymakers, iwi kaitiaki, and innovators gathered to collectively declare that solving for trust is mission critical to New Zealand’s digital future.

The Hui Taumata concluded with an unmistakable sense of urgency, shared purpose and appetite to collectively deliver on the vision of an open and decentralised marketplace for trusted, credential-based access to products and services.

Recognising that excitement alone is not a strategy, we have been actively working on our new strategic plan and member value proposition, as signaled at the Hui Taumata.

Our refreshed strategy aims to strengthen trust in the digital economy through a united, open and collaborative stakeholder ecosystem. With the increase in distrust in our institutions and misinformation related to NZ’s empowering approach to identity, we most definitely have our work cut out for us!

Assuming we can remain a trusted voice, Digital Identity New Zealand (DINZ) is ideally positioned to connect, promote and advance trust solutions.

Major initiatives for 2025-26

Investable value proposition for members and partners
Aotearoa NZ Inc communications strategy and go-to-market narrative aligned to Te Ao Māori data sovereignty principles (e.g. taonga, tikanga & kaitiakitanga)
Trusted Aotearoa NZ Inc ecosystem architecture and sovereign data infrastructure

There is increasing consensus that an effective, use case and benefits focused communication framework is crucial to market adoption. 

Working groups will build momentum around demand-side adoption and a trusted vendor ecosystem.

Key Takeaways for Members

The time is now for trusted decentralised identity:

Global drivers: Trading partners and visitors are adopting interoperable credentials, requiring NZ exporters and operators to adapt.
Domestic demand: Banks, insurers, corporate NZ, government agencies and the SME sector face unsustainable delivery, compliance and fraud costs, and increasing risks / chaos.
Technology maturity: Open-standard wallets, zero-knowledge proofs, and consent dashboards are ready for safe and privacy-enhancing adoption.

Key value drivers and critical success factors:

Security, machine readable traceability and privacy by design
National DISTF trust-mark & open standards for an interoperable framework
Government & tier-1 procurement mandates to drive adoption
Cross-sector collaboration to demonstrate ROI in various sectors (public services, banking, health, exports, and SMEs)

The plan outlines significant economic opportunities over the next five years from DISTF credential marketplace adoption.

See the complete sector breakdown.

New Working Group Highlights

We have established two new working groups to support market adoption:

Demand side (reference architecture): Focus on building adoption through use cases, reference architecture, and sector-specific strategies (such as education, health, social services; open banking and payments, and domestic commerce – SME focus).

DINZ has worked constructively with the DIA to establish and support core policy and technical working groups, plus accessibility and Te Ao Māori groups. 

Supply side (trusted vendor ecosystem): Focus on a common aligned approach for verifiable credentials (VCs) that delivers trust, simplicity, and safeguards for holders, while ensuring safe, interoperable, and trusted VC delivery.

At the Hui Taumata in August 2025, Minister Collins called for vendors to align on their role in building a trusted verifiable credentials ecosystem for New Zealand.

Following this, Craig Grimshaw of Sush Labs and Andrew Mabey of UNIFY Solutions are co-chairing a newly established Vendor Working Group on verifiable credentials for people of New Zealand.

With Digital Identity New Zealand (DINZ) hosting, Andrew and Craig will guide the group to ensure outcomes are collaborative, transparent and aligned with national goals.

Together, Andrew and Craig bring complementary expertise: Sush Labs in wallet design and the mobile user experiences critical for adoption, and UNIFY Solutions in consulting, architecture and government-scale identity implementations. They provide balanced leadership to convene vendors around a shared purpose: delivering a safe, interoperable and trusted ecosystem for all New Zealand.

Digital Public Infrastructure 

DINZ increasingly plays a thought leadership role in areas such as multi-modal biometric frameworks, authentication including identity binding and proofing, credential issuance and validation, access control, user-controlled data storage, and trusted data processing.

We intend to champion the importance of a Trust Aotearoa NZ Inc jurisdiction-level namespace (i.e. name service) to a truly user-centric decentralised ecosystem architecture.

And finally, we continue to make proactive submissions as part of NZ’s regulatory modernisation programme and guidance on emerging standards including the new assurance hierarchy in the updated DISTF.

Executive Council Nominations

Nominations will open on 13 October. The results of the election will be announced at the Annual Meeting on 4 December.

The following board positions are available for election this year:

3 Major Corporate Seats
2 Other Corporate Seats
2 SME & Start Up Seats

Start thinking now about whether you are interested.

Industry News

We continue to experience an ever-increasing buzz in the digital identity space, both making and breaking news domestically and around the world. Here’s a selection:

Government digital changes to bring big savings | Beehive.govt.nz
Proactive-release_Driving-down-the-cost-of-digital-in-government
Britain to introduce compulsory digital ID for workers | rnz.co.nz
BBC News article on digital ID
UK mobile operators launch age verification and anti-fraud APIs through GSMA Open Gateway Initiative | libertyglobal.com
My take on Digital Driver’s Licenses – Andy Higgs Newstalk interview
Full Interview with Leah Panapa on the Platform

Upcoming Interoperability Event: 16-17 November 2025

Chris Goh and Belinda Taylor’s NZTA team are hosting an interoperability event in Wellington on 16–17 November, leading into the ISO Working Group 10 meetings that week. These international mDL/mdoc community events regularly confirm implementation feasibility, gather feedback to enhance standards quality, and maintain market momentum for mDL and mdoc implementations.

Sponsorship opportunities available:

Sunday 16 Nov: Lunch (approx. 120 attendees) and/or coffee cart
Monday 17 Nov: Lunch (approx. 120 attendees) and/or early evening canapés and non-alcoholic drinks (approx. 120 attendees)

Please contact Gabrielle.George@dia.govt.nz if you are interested.

Next Actions for Members

Engage: Nominate representatives to the reference architecture, vendor and special interest working groups.
Contribute: Share sector use cases for Circles of Trust white papers.
Communicate: Adopt the new messaging framework and amplify the everyday benefits of trusted digital identity.

Together, we are shaping the trusted credential ecosystem that will empower New Zealanders, protect privacy, and unlock economic growth through a unified Aotearoa NZ Inc approach.

Tihei mauri ora!

Andy Higgs
Executive Director,
Digital Identity NZ

Read full news here: Inspiring Trust across Aotearoa NZ Inc.

Subscribe for more

The post Inspiring Trust across Aotearoa NZ Inc. appeared first on Digital Identity New Zealand.


FIDO Alliance

First Credit Union: Transforming Digital Banking with Passkeys

Corporate Overview Founded in 1955, First Credit Union is a member-owned financial institution in New Zealand with over 60,000 members. The organization delivers secure and innovative digital banking experiences through […]
Corporate Overview

Founded in 1955, First Credit Union is a member-owned financial institution in New Zealand with over 60,000 members. The organization delivers secure and innovative digital banking experiences through its comprehensive online banking platform. Members access their accounts via mobile app and browser options to manage finances anytime, anywhere. The credit union has embraced cutting-edge authentication technology to enhance both security and user experience for its diverse membership base.

Executive Perspective

“Implementing FIDO authentication through Authsignal has been a game-changer for our members’ digital experience. It’s secure, seamless and sets a new standard for trust in online banking.” – Herb Wulff, Treasury and Agency Banking Manager, First Credit Union

The Business Challenge

As a progressive modern financial institution, First Credit Union has embraced a path toward digital transformation. As part of its journey, it identified several critical challenges impacting both security and user experience.

Those challenges include:

Cybersecurity Risks. The organization wanted to reduce reliance on passwords, which are one of the most common attack vectors. First Credit Union sought phishing-resistant authentication methods to mitigate growing security threats.

User Experience Friction. Traditional multi-factor authentication methods often create friction in the login process. The credit union aimed to make secure access feel seamless and intuitive for members with varying technical comfort levels.

Cross-Platform Compatibility. Members access the platform across diverse devices and operating systems. First Credit Union needed a solution that worked consistently across mobile apps and web browsers.

Integration Complexity. The new authentication solution had to integrate smoothly with existing infrastructure. This approach would minimize disruption to internal teams and members during deployment.

Why First Credit Union Chose Passkeys

First Credit Union conducted a thorough evaluation of several traditional and emerging authentication methods. The goal was to find the right balance between security, usability and accessibility for its diverse membership base.

Traditional Options Fell Short

The team explored multiple multi-factor authentication (MFA) methods but found significant drawbacks with each approach. Authenticator apps can enhance security but have vulnerabilities that can be exploited due to their reliance upon one-time codes. They also require members to install and manage a separate app, which added complexity and friction. Email magic links provided convenience but created usability challenges and vulnerability to phishing and email interception risks.

Device credentials delivered a more seamless experience but lacked the standards-based interoperability needed across platforms. The credit union also considered standalone biometric authentication, but these solutions lacked the robust security guarantees and cross-platform compatibility that FIDO standards provide.

A critical insight emerged: offering too many authentication options risked confusing members, especially given the wide range of technical comfort levels across their demographic. A fragmented experience could lead to frustration, support overhead and reduced adoption.

FIDO Delivered What Others Couldn’t

FIDO authentication stood apart from alternatives that still presented significant vulnerabilities to phishing and lacked seamless, standards-based interoperability. The technology offered compelling advantages:

Phishing resistance eliminates shared secrets like passwords or OTPs that attackers can intercept or steal.
The passwordless experience reduces friction for members while making access to online banking quicker and more secure.
The FIDO2 specification ensures seamless authentication across a wide range of devices and platforms, supporting both their app and browser-based services (see the sketch below).
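
To make the FIDO2 point concrete, the following sketch shows a browser-side passkey registration using the standard WebAuthn API. The relying-party identifiers, member details, and challenge handling are illustrative placeholders, not First Credit Union's actual implementation.

```typescript
// Illustrative browser-side passkey registration via the WebAuthn API.
// In production the challenge is generated server-side per ceremony and the
// attestation response is returned to the server for verification.

async function registerPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-supplied in practice

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Credit Union", id: "bank.example" }, // placeholder RP
      user: {
        id: new TextEncoder().encode("member-12345"), // placeholder member ID
        name: "member@bank.example",
        displayName: "Example Member",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // device biometric or PIN
      },
    },
  })) as PublicKeyCredential;

  // Send credential.response (the attestation) to the server to finish enrollment.
  console.log("Created passkey with credential ID:", credential.id);
}
```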

The solution improved member trust and satisfaction through enhanced security and streamlined login processes. It also reduced support overhead from password resets and login issues, allowing the team to allocate resources more efficiently and improve overall service quality.

Implementation Overview

First Credit Union partnered with Authsignal to implement a FIDO Certified passkey infrastructure. The team followed a structured rollout approach:

Phase 1: Internal Testing and Validation

The organization conducted rigorous internal testing to validate passkey integration across mobile and browser platforms. This phase ensured technical stability and compatibility.

Phase 2: Member Education and Communication

First Credit Union launched a targeted communication campaign that included:

Clear messaging about passkey benefits
Step-by-step setup and usage guides
Comprehensive support resources for onboarding

Phase 3: Gradual Branch Network Rollout

The team introduced passkeys in phases across the branch network. This approach allowed for performance monitoring, feedback collection and iterative improvements.

Phase 4: Monitoring and Optimization

Post-launch activities included tracking adoption metrics and authentication usage patterns. Member feedback drove user experience refinements.

Results and Impact

First Credit Union has achieved impressive adoption and security outcomes since launching passkeys:

Adoption Metrics

58.4% of members adopted the new authentication experience
54.5% of all authentications now use passkeys
Over 23,500 members enrolled in multi-factor authentication

Member Experience

Most members provided positive feedback, citing ease of use and improved trust. Passkeys enabled simplified login through device-native biometrics like facial and fingerprint recognition. Members enjoy a seamless experience across mobile and web platforms.

Operational Benefits

The organization reduced support overhead from password-related issues. First Credit Union enhanced its security posture with phishing-resistant authentication. The infrastructure now aligns with global standards for future readiness.

Future Vision

FIDO authentication serves as the cornerstone of First Credit Union’s long-term digital security strategy. The organization plans these expansions:

Secure Transaction Authentication: Extending passkeys to high-risk actions like transaction approvals
Internal Systems Access: Implementing FIDO-based authentication for staff systems
Third-Party Integrations: Leveraging FIDO’s interoperability for future service integrations

Key Recommendations

First Credit Union offers these insights for organizations considering FIDO implementation:

1. Understand Your User Base: Assess members’ devices, digital habits and comfort levels to tailor the experience appropriately

2. Simplify the Experience: Avoid overwhelming users with too many authentication options

3. Choose the Right Partner: Work with trusted providers who offer expertise in passkey infrastructure

4. Communicate Clearly: Educate users early with clear messaging about benefits and simple setup guides

5. Test Thoroughly: Conduct comprehensive internal testing across platforms before member-facing deployment

Read the Case Study

Kantara Initiative

In conversation with ….. Amit Sharma, IDEMIA Public Security

Amit Sharma on his passion for all things ‘identity’ and the challenges he sees for the market in general The post In conversation with ….. Amit Sharma, IDEMIA Public Security appeared first on Kantara Initiative.


Monday, 29. September 2025

Digital Identity NZ

Digital Identity NZ – Major initiatives for 2025-26:

There is increasing consensus that an effective, use case and benefits focused communication framework is crucial to market adoption.  Working groups will build momentum around demand-side adoption and a trusted … Continue reading "Digital Identity NZ – Major initiatives for 2025-26:" The post Digital Identity NZ – Major initiatives for 2025-26: appeared first on Digital Identity New Zealand.
Investable Value Proposition for members and partners
Aotearoa NZ Inc Communication Strategy and Go-To-Market Narrative aligned to Te Ao Māori data sovereignty principles (e.g. taonga, tikanga & kaitiakitanga)
Trusted Aotearoa NZ Inc ecosystem architecture and sovereign data infrastructure

There is increasing consensus that an effective, use case and benefits focused communication framework is crucial to market adoption. 

Working groups will build momentum around demand-side adoption and a trusted vendor ecosystem.

Key Takeaways for Members

The Time is Now for Trusted Decentralized Identity:

Global Drivers: Trading partners and visitors are adopting interoperable credentials, requiring NZ exporters and operators to adapt.
Domestic Demand: Banks, insurers, corporate NZ, government agencies and the SME sector face unsustainable delivery, compliance and fraud costs and increasing risks / chaos.
Technology Maturity: Open-standard wallets, zero-knowledge proofs, and consent dashboards are ready for safe and privacy enhancing adoption.

Key Value Drivers and Critical Success Factors:

Security, machine readable traceability and privacy by design
National DISTF Trust-Mark & Open Standards for an interoperable framework
Government & Tier-1 Procurement Mandates to drive adoption
Cross-Sector collaboration to demonstrate ROI in various sectors (public services, banking, health, exports, SMEs)

The plan outlines significant economic opportunities over the next five years from DISTF credential marketplace adoption.

Category | Details / Metrics
National 5-Year Upside | NZD 8–16 B mid case: combined cost savings, fraud reduction, export premiums and new-revenue uplift; higher if combined with NZD stablecoin enabled trade
Typical Project Payback | 18–36 months; fastest ROI: financial services, government services, health, agriculture/food exports, SMEs (eInvoicing + KYB reuse)
Public-Sector Efficiency | 20–30% reduction in manual verification tasks (cross-agency VC issuance & consent dashboards)
Fraud / Leakage Reduction | 30–50% decrease in high-risk processes (banking, benefits, e-commerce, health)

Complete sector breakdown:

Sector | Estimated 5-Year Value (NZD) | Primary Value Levers
Financial Services | $1.2–2.0 B | Reusable KYC/KYB, instant onboarding, fraud loss reduction, account credentials
Government & Social Services | $1.0–1.8 B | Government app / wallet upgrade with reusable entitlement credentials, e-signatures, e-voting pilots
Health & Aged Care | $1.3–2.2 B | Patient/provider credentials, e-prescriptions, research data sharing
Education & Skills | $0.4–0.8 B | Skills passports, micro-credential wallets
Agriculture & Food | $1.1–1.9 B | Export provenance, license to operate, biosecurity, monitoring, product passports
Transport & Logistics | $0.7–1.4 B | Chain-of-custody, border clearance, verified telematics, traceability
Energy & Utilities | $0.5–1.0 B | Smart-meter attestations, carbon/REC tracking
Construction & Property | $0.6–1.1 B | Digital building consents, product passports
Tourism & Visitor Economy | $0.5–0.9 B | Verified traveller profiles, seamless border flows, personalised concierge
Retail & Consumer | $0.6–1.2 B | Age assurance, product authenticity, loyalty portability
Media & Creative | $0.3–0.7 B | Content provenance credentials, creator rights & royalties
SMEs & Professional Services | $0.9–1.6 B | eInvoicing, verified suppliers, payroll/workforce credentials, automation

The post Digital Identity NZ – Major initiatives for 2025-26: appeared first on Digital Identity New Zealand.


Internet Safety Labs (Me2B)

Reusable ISL Graphics

The post Reusable ISL Graphics appeared first on Internet Safety Labs.

Resources on this page are made available under Creative Commons Attribution Non-Commercial ShareAlike 4.0 International Public License as found at: https://creativecommons.org/licenses/by/4.0/legalcode

2022 K12 Edtech Benchmark Infographics

2022 K12 Edtech Benchmark Infographics: Findings Report 1

2022 K12 Edtech Benchmark Infographics: Findings Report 2

2022 K12 Edtech Benchmark Infographics: Findings Report 3

Consumer Sentiment Report

Did You Knows

Do You Know Where Your Data Is

Principles of Safe Software

Miscellaneous

The post Reusable ISL Graphics appeared first on Internet Safety Labs.


We Are Open co-op

Conscious Discourse for Activists and Educators

Reflections on helping Amnesty International UK using Open Source technologies.
Based on an original by Visual Thinkery for WAO

As part of their wider digital transformation plan, we’re currently helping Amnesty International UK (AIUK) with a new community platform for activists and educators. As we’ve done for almost a decade now, and as befits the name of our cooperative, we’re working openly on this project. In fact, this post is informed by one we wrote for the AIUK Community Platform Project blog.

Our last post about the AIUK project on this blog talked about the importance of community calls. In this one we want to talk about community building for the kinds of conversations that activists and educators need to have. We will be using Discourse to power the community platform after our research earlier this year led to a longlist of 29 platforms and a shortlist of 4 platforms. You can check out our comparison spreadsheet here.

A note about Discourse

This is not a post about Discourse as a platform per se, but it is important to note that, by default, it provides many useful features and settings. The stated aim of its co-founders is to “democratize online community and teamwork by raising the standard of civilized discourse on the Internet.” As such, it has features such as user trust levels, content warnings, and role/status badges that have been provided thoughtfully in a way that other systems haven’t yet managed.

It’s also Open Source, allowing AIUK to host it wherever they choose—something which is increasingly important given the global rise of authoritarianism. Human rights organisations sadly need to be prepared for threats such as hacks, bots and fake accounts that could compromise discussions.

Conscious configuration

A series of personas, created using Open Peeps

The only way to know how users will interact with a system you have created and/or configured is to put them in front of it. They will surprise you in both positive and negative ways, which you can then make a note of and reconfigure the system accordingly. 

The platform or software you choose constrains what is or is not possible for users. It provides a set of “affordances” creating an environment which offers the individual different options. For example, we have decided against the affordances provided by real-time chat apps such as Slack or Rocket.Chat in favour of a more ‘discussion area’ vibe with Discourse.  

Over and above the intentional decisions around platform choice, we also have to be mindful about the way we initially configure it for our target audience. Unlike some other platforms, almost everything is configurable in Discourse. This means we need to do the work to make the platform as easy to use and “intuitive” to community members as possible. 

AIUK has a widespread and diverse community, so we need to think about how that community currently exists while preparing for a move to a new platform. We now have a very long configuration document capturing these conversations and reminding us why we made certain decisions around setup and defaults. This includes everything from the theme components we have installed, to the way we’re dealing with user permissions, through to the names of buttons we’ve changed.
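
As a hedged illustration of what this kind of deliberate configuration can look like in practice, the sketch below changes a single Discourse site setting over the admin API. The forum URL, API key handling, and chosen setting are placeholders rather than the project's actual configuration; the endpoint shape follows Discourse's documented admin API.

```typescript
// Illustrative sketch only: updating a Discourse site setting via its admin API.
// The URL, API key, and chosen setting are placeholders, not AIUK's real setup.

const FORUM_URL = "https://community.example.org"; // placeholder forum URL
const API_KEY = process.env.DISCOURSE_API_KEY ?? ""; // admin API key, kept out of source

async function setSiteSetting(name: string, value: string): Promise<void> {
  const res = await fetch(`${FORUM_URL}/admin/site_settings/${name}`, {
    method: "PUT",
    headers: {
      "Api-Key": API_KEY,
      "Api-Username": "system",
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ [name]: value }),
  });
  if (!res.ok) throw new Error(`Failed to update ${name}: ${res.status}`);
}

// Example: keep brand-new accounts at trust level 0 so they earn
// capabilities gradually, one of Discourse's "civilized discourse" defaults.
await setSiteSetting("default_trust_level", "0");
```

Keeping such changes in scripts, rather than as one-off admin-panel clicks, also makes it easier to document and replay the decisions recorded in a configuration document like ours.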

It is very unlikely that we will get everything right the first time around. We will receive useful feedback during the training and piloting phases, and we will also change things based on what we observe users actually doing (rather than just what they say they will do!).

The wider ecosystem

Image CC BY Visual Thinkery for WAO

The new AIUK community platform does not sit alone in a vacuum. Nor is it the answer to every situation in which activists and educators may want or need to interact. For example, end-to-end encrypted communications are much better dealt with in a secure app such as Signal. As a result, we are advising AIUK community members to see the new platform as one part of a wider ecosystem.

This ecosystem also includes a new website and knowledge hub which is being developed by Torchbox. Therefore, in addition to thinking through how activists and educators interact within the community platform, we need to consider how different types of users might move through the entire ecosystem. How do they become aware of what is available? How do we meet their needs? How might we enable them to meet their activism and education goals?

The demographics of AIUK skew both young and old. That is to say, there is a majority in the 50+ age group, but there are also many young people and university students who are actively involved with Amnesty campaigns. Our research showed that these two groups tend to use very different communication tools. The older demographic tend to use email as their primary means of communication, whereas the younger demographic tend to use social media.

Our aim is for the community platform to meet the needs of as many different AIUK groups as possible. For example, an important requirement was the ability to receive updates from the platform via email, and also for community members to be able to reply to discussions from the comfort of their inbox. 

We cannot solve everything with the community platform, but we are configuring a solution that can respond to users’ needs over time.

Get involved!

Whether or not you currently consider yourself part of the AIUK community, you are very welcome to lend your thoughts and expertise to this project. Follow the project blog over on the Amnesty UK website, share positive examples you have of activists and educators engaging in constructive discussion, and help us build a space which helps the AIUK community protect people wherever justice, freedom, truth and dignity are denied.

Friday, 26. September 2025

DIF Blog

DIF at the UN: Bringing Decentralized Identity and AI to the Global Stage

Today, two Decentralized Identity Foundation (DIF) members brought their expertise to the "Trusted Digital Identity for People & AI” panel at the "Digital@UNGA", part of the 80th U.N. General Assembly.


Today, two Decentralized Identity Foundation (DIF) members brought their expertise to an influential global audience at Digital@UNGA, a high-level event convened by the ITU, UNDP and WDTA during the 80th UN General Assembly. 

In a critical session titled “Trusted Digital Identity for People & AI,” the discussion moved beyond theory to address the real-world challenges of deploying Digital Public Infrastructure (DPI) that is secure, equitable, and future-proof. In partnership with Gambian Ambassador Muhammadou Kah, Chairman of the UN Commission on Science and Technology for Development, the session focused on turning the UN’s digital identity strategy into a deployable reality, grounded in the core principles of interoperability, privacy by design, and inclusion for all.

The core challenge addressed by the panel was the persistent gap between policy and production. While global goals like SDG 16.9 are clear, the goal of providing legal identity for all by 2030 is often stalled by protocol fragmentation and the lack of a robust architectural model for a world where both people and AI agents are first-class citizens. The session explored how to bridge this gap by encoding principles as measurable engineering requirements, ensuring that concepts like privacy and interoperability can be verified through rigorous, evidence-based testing before procurement and large-scale deployment.

Representing DIF, Matt McKinney, CEO of AIGNE, an ArcBlock company, and Co-Chair of the DID Method Spec Working Group, presented a framework for building this next generation of DPI. His talk focused on the necessity of a symmetrical architecture that serves both people and AI agents with the same high standards of security and control. He argued that for this mixed-initiative future to be safe, AI agents must use controller-bound credentials, operate with least-privilege, time-boxed permissions, and be subject to fast, verifiable revocation. He outlined a phased, low-risk path for policymakers to move from architectural requirements to a multi-vendor sandbox, then to a pilot, and finally to a scalable rollout, all based on open standards and anchored to objective conformance proofs.
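
The pattern he described can be made concrete with a small sketch. This is not a DIF specification, only an illustration of a controller-bound, time-boxed, least-privilege grant that a verifier could evaluate; all type and field names here are invented for the example.

```typescript
// Minimal illustration (not a DIF specification) of the agent-credential
// pattern described above: a controller-bound, time-boxed, least-privilege
// grant that a verifier checks before honoring an agent's request.

interface AgentGrant {
  controllerDid: string;   // the human or organization the agent acts for
  agentDid: string;        // the agent's own identifier
  scopes: string[];        // least-privilege permissions, e.g. "invoices:read"
  notBefore: Date;
  expires: Date;           // time-boxed: short-lived by design
  revoked: boolean;        // checked against a revocation/status list in practice
}

function isActionAllowed(grant: AgentGrant, scope: string, now = new Date()): boolean {
  return (
    !grant.revoked &&
    now >= grant.notBefore &&
    now <= grant.expires &&
    grant.scopes.includes(scope)
  );
}
```

Fast revocation then reduces to flipping the status that `revoked` is derived from, which verifiers pick up on their next check.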

Nicola Gallo, Co-Founder of Nitro Agility and Co-Chair of DIF's new Trusted AI Agents Working Group, also presented at length. He emphasized the importance of building a trust stack for AI agents: one that clarifies the types of trust required, considers the impact of AI on social and market structures, and defines protocols that can effectively address these concerns. In his view, a sustainable path is to anchor trust in the identities of the executors themselves, enabling distributed chains of attested actions. Without this granular and auditable accountability, we risk relying too heavily on impersonation models, where the role of the actual executor may be unclear and trust becomes harder to govern or verify. Ultimately, the key lies in giving workloads their own flexible and verifiable identities, making it possible to trace and govern responsibilities across distributed systems, thus envisioning a new Internet of Trust.

As a next step, the members will publish a 2-page outcomes brief and an implementer checklist for policy makers and practitioners within the next two weeks. Stay tuned for links to these practical documents to be open-sourced by DIF.


FIDO Alliance

TechGenyz: Password-Free Future: How Biometrics & Passkeys Unlock True Security 

While biometrics offer convenience, passkeys provide the backbone for the next stage in authentication. Developed as a part of a global effort by Apple, Google, Microsoft, and the FIDO Alliance, […]

While biometrics offer convenience, passkeys provide the backbone for the next stage in authentication. Developed as part of a global effort by Apple, Google, Microsoft, and the FIDO Alliance, passkeys replace traditional passwords with cryptographic keys stored securely on a user’s device. Instead of typing in a word or a phrase, users can confirm their identity through a fingerprint, a face scan, or a prompt on a trusted device.


Forbes: The iPhone’s New Camera? Whatever. The iPhone’s New Wallet? Cool. 

Apple’s approach to identity in wallets is built on open standards, including the W3C’s Digital Credentials API and FIDO Alliance protocols. This is important to identity nerds like me because […]

Apple’s approach to identity in wallets is built on open standards, including the W3C’s Digital Credentials API and FIDO Alliance protocols. This is important to identity nerds like me because they are standards that enable privacy-enhancing exchanges of digital credentials, allowing consumers to (crucially) prove what they are (over 18, entitled to drive, holding a valid ticket) without having to divulge who they are.  


Biometric Update: Bitwarden among first to implement FIDO credential exchange standards on iOS 26

Apple iOS 26 has landed, and it includes support for FIDO Alliance Credential Exchange standards to enable secure, end-to-end encrypted transfers of passkeys, passwords and other credentials across platforms and […]

Apple iOS 26 has landed, and it includes support for FIDO Alliance Credential Exchange standards to enable secure, end-to-end encrypted transfers of passkeys, passwords and other credentials across platforms and apps. A release from large open-source login management service Bitwarden says it is “among the first credential managers on iOS 26 to implement the Credential Exchange standards, helping lead passkey and password portability with a secure, standardized way for users to move credentials between Apple Passwords, Bitwarden and other compatible services.” 


OpenID

FAPI 2.0 Message Signing Final Specification Approved

The OpenID Foundation membership has approved the following as an OpenID Final Specification:   FAPI 2.0 Message Signing: https://openid.net/specs/fapi-message-signing-2_0-final.html    A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID FAPI Working Group.
The OpenID Foundation membership has approved the following as an OpenID Final Specification:

FAPI 2.0 Message Signing: https://openid.net/specs/fapi-message-signing-2_0-final.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID FAPI Working Group.

The voting results were:

Approve – 87 votes
Object – 0 votes
Abstain – 20 votes
Total votes: 107 (out of 448 members = 23.9% > 20% quorum requirement)

Marie Jordan – OpenID Foundation Secretary
About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net



The post FAPI 2.0 Message Signing Final Specification Approved first appeared on OpenID Foundation.


Human Colossus Foundation

EHDS Promises & Pitfalls: The Case of Genomic Data Integration in Personalised Medicines

Human Colossus Foundation co-organised the NextGen Pre-event at MyData 2025: Genomic Data and the Future of the European Health Data Space Helsinki, 24 September 2025 — The European Health Data Space (EHDS) is set to redefine digital health across Europe. With the potential to benefit more than 250 million citizens, it promises to transform clinical research, innovation, and patient care. But can it truly deliver?

Human Colossus Foundation co-organised the NextGen Pre-event at MyData 2025: Genomic Data and the Future of the European Health Data Space

Helsinki, 24 September 2025 — The European Health Data Space (EHDS) is set to redefine digital health across Europe. With the potential to benefit more than 250 million citizens, it promises to transform clinical research, innovation, and patient care. But can it truly deliver?

A successful rollout of EHDS would restore trust—an essential prerequisite for unlocking the value of health data not only for medical purposes but for the entire health economy. Driven by the AI revolution, new revenue opportunities for European innovators could reach billions of euros. Done right, EHDS could become a flagship success story, showcasing the competitive edge of well-organised data ecosystems.

If it fails, however, the consequences would be so severe that the outcome could only be described as a doomsday scenario.

Meeting this challenge requires a clear understanding of the barriers to building a health data ecosystem at a continental scale.

The NextGen EU Horizon project tackles some of the most complex data challenges in cardiovascular personalised medicine. Serving as a kind of “mini-EHDS,” NextGen acts as a testing ground for digital tools that enable the creation of interoperable data spaces.

Against this backdrop, the high-level pre-conference event at MyData 2025—titled “EHDS Promises & Pitfalls: The Case of Genomic Data Integration in Personalised Medicines”—took place on Tuesday, September 23. Over three and a half hours, the session addressed topics for researchers, clinicians, regulators, policymakers, and all those who recognise that maintaining the status quo is the simpler—but ultimately false—choice.

The central question was clear: How can genomic data—among the most sensitive and valuable forms of health data—be safely and effectively integrated into EHDS for the benefit of all?

A distinguished panel of experts shared their insights:

The fundamentals of EHDS — Mikael Rinnetmäki, Finnish Innovation Fund Sitra, introduced EHDS and explored the challenges posed by both the primary and secondary use of health data in Europe.

The legal dimension and the Finnish approach — Sofia Kuitunen, Senior Legal Counsel, FinnGen, examined the secondary use of health and genomic data in Finland.

Overcoming implementation barriers in the Netherlands — Johan Bokslag, Programme Manager, Health Data Space Utrecht, addressed the practical challenges of building an EHDS-compliant infrastructure in the Netherlands and how these can be turned into opportunities for transformation.

End-user expectations — Dr. Petra Ijäs, Head Physician at Helsinki University Hospital, presented a clinical case highlighting how EHDS could help overcome barriers in cardiovascular risk prediction for carotid artery stenosis.

Moderated by Philippe Page (NextGen & Human Colossus Foundation), the session illuminated both the opportunities and the serious pitfalls of Europe’s flagship health data initiative.

The key conclusions were:

EHDS implementation offers a historic opportunity to bring Europe’s healthcare system into the digital era.

Major pitfalls exist, with the restoration of trust and confidence standing as the most critical.

If EHDS fails to deliver a secure and efficient data space, global competition risks driving Europe’s health innovation elsewhere.

A detailed summary of the discussions and participant takeaways is in preparation and will provide further insights.

From the Human Colossus Foundation’s perspective

The EHDS vision responds to the urgent need to unlock the value of health data. It aims to build a human-centred data ecosystem, where “human” encompasses patients, healthcare professionals, public health authorities, private actors, regulators, and policymakers. Creating such an ecosystem requires three conditions:

Distributed governance — Governance must be shared across legitimate authorities representing regions, communities, professionals, and individuals. This requires a federated digital design that builds upon existing frameworks while advancing them into the digital era.

Respect for Europe’s diversity — Diversity is a source of creativity; complexity is merely an implementation challenge to be overcome. EHDS should prioritise harmonisation, not standardisation, especially in data models. The semantics—the meaning of data—should remain as close as possible to the collection point. Mechanisms must ensure data is structured and its integrity preserved before it reaches AI tools, training datasets, or other uses.

True digital identity — Both individuals and organisations need digital identities that uphold privacy and fundamental rights as protected by EU and Member State ethical, regulatory, and governance frameworks. Achieving this requires a truly decentralised authentication architecture capable of accommodating Europe’s diverse sovereignties.

Together, these requirements form the foundation for introducing sovereignty in the digital era. Regaining control over our data demands governance, integrity, and authenticity in every data exchange.


Thursday, 25. September 2025

FIDO Alliance

Driving Automotive Innovation with FIDO Standards and Certification

Attendees joined this webcast to hear how FIDO Alliance standards and certification can support the automotive industry as it transitions toward software-defined vehicles, autonomous technologies, and connected services. This transition […]

Attendees joined this webcast to hear how FIDO Alliance standards and certification can support the automotive industry as it transitions toward software-defined vehicles, autonomous technologies, and connected services. This transition brings an unprecedented opportunity to innovate and capitalize on new business models (such as in-vehicle commerce and subscription services) but also introduces significant cybersecurity threats and user experience challenges. 


This session built upon the FIDO Alliance’s recently published white paper, Addressing Cybersecurity Challenges in the Automotive Industry, exploring how the FIDO Alliance is uniquely positioned to address these challenges using passkeys, FIDO Device Onboard (FDO), and existing and future certification programs.

Watch the presentation:


Hyperledger Foundation

Compact smart contract language is now Minokawa, newest LF Decentralized Trust project

The contribution by Shielded Technologies marks a big milestone in open sourcing the development of tools to advance the vision of a privacy-preserving internet.


Wednesday, 24. September 2025

EdgeSecure

MarylandOnline Partners with Edge to Expand Access to Educational Technology and Services for Maryland Higher Education Institutions

MarylandOnline Partners with Edge to Expand Access to Educational Technology and Services for Maryland Higher Education Institutions NEWARK, NEW JERSEY, September 24, 2025 – Edge, the nation's leading member-owned nonprofit technology… The post MarylandOnline Partners with Edge to Expand Access to Educational Technology and Services for Maryland Higher Education Institutions appeared first on Edge, the Nation's Nonprofit Technology Consortium.
MarylandOnline Partners with Edge to Expand Access to Educational Technology and Services for Maryland Higher Education Institutions

NEWARK, NEW JERSEY, September 24, 2025 – Edge, the nation's leading member-owned nonprofit technology consortium, today announced the addition of MarylandOnline to its EdgeMarket Affiliate Partner Program. The affiliate agreement enables MarylandOnline member institutions to access Edge's comprehensive marketplace of educational technology vendors and contracts, while also providing access to Edge's internal nonprofit services in cybersecurity, organizational transformation, digital learning, and accessibility. Importantly, a portion of revenue from services utilized by Maryland institutions will flow back to MarylandOnline to support additional programming and initiatives for its members.

Via the EdgeMarket Affiliate Partner Program, the partnership will provide MarylandOnline's 19 member institutions expanded access to educational technology contracts, services, and collaborative opportunities through Edge's procurement platform and professional services.

"This partnership represents exactly the kind of forward-thinking collaboration that higher education needs today. Partnerships between institutions is really the way we're going to see higher education evolve, and crossing state lines while still managing regulations opens up tremendous opportunities for our members to better serve their students."

— Stephen T. Kabrhel
Dean of Online Learning, Community College of Baltimore County
President, MarylandOnline

"This partnership represents exactly the kind of forward-thinking collaboration that higher education needs today," said Stephen T. Kabrhel, Dean of Online Learning at the Community College of Baltimore County and current President of MarylandOnline. "Partnerships between institutions is really the way we're going to see higher education evolve, and crossing state lines while still managing regulations opens up tremendous opportunities for our members to better serve their students."

Wendy Gilbert, Executive Director of MarylandOnline, emphasized the strategic importance of cross-state collaboration. "Interacting with people who do what I do in other states is invaluable. We're already seeing state consortiums expand beyond their borders, and this partnership with Edge allows us to share different perspectives and solutions with a broader network."

The partnership comes at a critical time for higher education institutions facing enrollment challenges and resource constraints. Both organizations have built their missions around democratizing access to educational technologies and strategies, making this alliance particularly well-suited to address current market dynamics.

"Interacting with people who do what I do in other states is invaluable. We're already seeing state consortiums expand beyond their borders, and this partnership with Edge allows us to share different perspectives and solutions with a broader network."

— Wendy Gilbert
Executive Director
MarylandOnline

"We're really excited for what this partnership represents," said Adam Scarzafava, Vice President of Marketing, Business Development, and EdgeEvents at Edge. "MarylandOnline is an organization that we know historically has done so much good work for the institutions of Maryland and their online learning capacity. For them to see value in Edge really says a lot about the consortium approach to addressing higher education challenges."

Both organizations will be represented at EdgeCon Autumn 2025, Edge's upcoming conference on October 9, 2025 at Rider University, where they plan to showcase collaborative opportunities and share best practices in consortium-based educational support.

 

About Edge: Edge serves as a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks, and nonprofit business entities as part of a membership-based consortium spanning across the nation. For more information, visit njedge.net.

About MarylandOnline: Founded in 1999, MarylandOnline is a consortium of 19 Maryland higher education institutions dedicated to excellence in online learning. The organization developed the Quality Matters Program and continues to provide course sharing platforms, professional development, and collaborative resources to support digital learning initiatives across Maryland. MarylandOnline's members include all 16 Maryland community colleges plus Morgan State University, Stevenson University, and the University of Maryland Global Campus. The organization has trained nearly 4,000 online instructors and continues to operate course sharing platforms that have served institutions since 2001.

About EdgeMarket: The EdgeMarket Cooperative Pricing System is purpose-built to provide Edge members with an easy way to procure the solutions and services they need at advantageous pricing and terms.

 

The post MarylandOnline Partners with Edge to Expand Access to Educational Technology and Services for Maryland Higher Education Institutions appeared first on Edge, the Nation's Nonprofit Technology Consortium.


MyData

Your health, your data: A personal health account for healthcare

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. What if managing your […]

Human Colossus Foundation

Human Colossus Joins “SNV on Tour” at EPFL — Deepfake Trust & Verification

On September 23, Human Colossus participated in SNV on Tour at EPFL, an event organized by the Swiss Association for Standardization (SNV). This year's event focused on artificial intelligence and deepfakes, two of the most pressing challenges in today's digital landscape. The event brought together leaders from academia, industry, government, and civil society to discuss the future of trust,

On September 23, Human Colossus participated in SNV on Tour at EPFL, an event organized by the Swiss Association for Standardization (SNV). This year's event focused on artificial intelligence and deepfakes, two of the most pressing challenges in today's digital landscape.

The event brought together leaders from academia, industry, government, and civil society to discuss the future of trust, authenticity, and digital integrity in an era increasingly shaped by artificial intelligence and synthetic media.

Our participation highlighted Human Colossus’s commitment to strengthening the foundations of digital trust through technology, standards, collaboration, and international leadership.

Deepfakes and the Three-Fold Strategy of Authenticity

During the event, Professor Touradj Ebrahimi of EPFL presented a clear framework for addressing the growing challenge of deepfakes and manipulated media. He identified three complementary strategies:

Reactive: Detecting manipulation by developing forensic methods for spotting tampering, anomalies, and adversarial examples in audio, video, and images.

Proactive: Authenticity and integrity. This involves embedding cryptographic seals, provenance metadata, or integrity markers directly into content to certify its authenticity at the source.

Collaborative: Verification as a vector of trust. This involves building mechanisms for evidence collection, community-based verification, and shared governance to enable stronger ecosystem responses.

Of these three strategies, the proactive and collaborative approaches align closely with what Human Colossus is building.

Proactive Authenticity: Our work on verifiable provenance, integrity seals, and trustworthy digital infrastructure supports Prof. Ebrahimi’s call to certify authenticity at the source.

Collaborative verification: Human Colossus is pioneering frameworks for evidence sharing, community-driven verification, and governance models that establish verification as a source of trust.

Together, these approaches create a foundation for digital ecosystems that are resilient, transparent, and accountable, going beyond detection alone.
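
To illustrate the proactive strategy, the sketch below seals content by signing its hash at the source so that any later modification is detectable. It uses Node.js's built-in crypto module; the metadata shape and all names are illustrative and do not represent a Human Colossus product API.

```typescript
// Minimal sketch of "proactive authenticity": signing a content hash at the
// source so downstream verifiers can check integrity and provenance.
// Uses Node.js built-in crypto; the seal's shape is illustrative only.

import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function sealContent(content: Buffer) {
  const digest = createHash("sha256").update(content).digest();
  // Ed25519 signs the data directly; the algorithm argument is null.
  const signature = sign(null, digest, privateKey);
  return { digest, signature }; // embedded alongside content as provenance metadata
}

function verifySeal(content: Buffer, seal: { digest: Buffer; signature: Buffer }): boolean {
  const digest = createHash("sha256").update(content).digest();
  return digest.equals(seal.digest) && verify(null, digest, publicKey, seal.signature);
}

// Usage: seal a media file's bytes at capture time, verify them later.
const media = Buffer.from("example frame bytes");
const seal = sealContent(media);
console.log("authentic:", verifySeal(media, seal)); // true; false after any edit
```

The collaborative strategy then builds on such seals: shared registries and community verification decide which signing keys, and therefore which sources, deserve trust.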

Strengthening Switzerland’s Role in Global Standardization

Another key message was the importance of standardization. Human Colossus is deeply committed to aligning its cutting-edge solutions with international frameworks to ensure interoperability and long-term sustainability.

At SNV on Tour, we reaffirmed our dedication to:

supporting Switzerland’s role as a hub for neutrality, transparency, and innovation in standardization;

actively contributing to European and global discussions on content authenticity and trustworthy AI;

shaping common protocols so that authenticity systems can scale across borders and industries.

This work strengthens Switzerland’s leadership in EU and global standardization and ensures that trust technologies evolve with fairness, accountability, and technical rigor.

Looking Ahead

Participating in SNV on Tour reinforced the importance of collaboration among technology providers, policymakers, researchers, and communities. Tackling deepfakes and synthetic media requires a multidisciplinary approach. It is not just an engineering problem, but a societal challenge that requires reactive detection, proactive authenticity, and collaborative verification.

Human Colossus is dedicated to building infrastructures of trust and contributing to standardization processes that ensure reliability, interoperability, and future-proofing.


Next Level Supply Chain Podcast with GS1

How a Family Recipe Turned Into a National Supply Chain Story

From Sunday suppers to 2,000 stores… and almost losing them all. Andrew Arbogast turned his dad's cheese dip recipe into a fast-growing CPG brand, only to face the harsh realities of shelf life, co-packing, and retailer expectations. In this episode, he joins Liz Sertl to share how he scaled Arbo's Cheese Dip, what nearly sank the business, and the turning point that gave him a second chance.

From Sunday suppers to 2,000 stores… and almost losing them all. Andrew Arbogast turned his dad's cheese dip recipe into a fast-growing CPG brand, only to face the harsh realities of shelf life, co-packing, and retailer expectations. In this episode, he joins Liz Sertl to share how he scaled Arbo's Cheese Dip, what nearly sank the business, and the turning point that gave him a second chance.

Listeners will hear the unfiltered story behind bringing a homemade recipe to the national stage, and the resilience, partnerships, and supply chain decisions that made survival possible.

In this episode, you'll learn:

How shelf life testing and packaging decisions directly impact scalability

Why rapid national expansion without brand awareness creates costly supply and demand mismatches

What strategies helped Arbo stabilize its operations

Jump into the conversation: (00:00) Introducing Next Level Supply Chain

(02:33) Family traditions that inspired Arbo's recipe

(04:25) Why barcodes matter for brand credibility

(06:03) Shelf life challenges with early co-packers

(08:11) Reformulating products for large-scale production

(09:24) Rapid retail expansion that backfired

(11:31) Winning Walmart Open Call and scaling mistakes

(13:28) Debt crisis and asking for help

(17:33) Grants and innovation with Real California Milk

(19:31) Favorite flavors and unexpected uses for cheese dip

Connect with GS1 US:
Our website – www.gs1us.org
GS1 US on LinkedIn

Connect with the guests:
Andrew Arbogast on LinkedIn
Check out Arbo's Cheese Dip

Tuesday, 23. September 2025

FIDO Alliance

The Passkey Playbook | HID Global

Explore how to deploy passwordless authentication at scale — securely, strategically and with minimal disruption. Download the playbook to learn: Phishing-resistant authentication isn’t a one-size-fits-all. It’s a journey. Complete the […]

Explore how to deploy passwordless authentication at scale — securely, strategically and with minimal disruption.

Download the playbook to learn:

The different types of passkeys and how to choose the right option for your organization
A phased deployment strategy that helps you start small, scale confidently and standardize passkey adoption across your organization
The ROI of passkeys backed by data from the FIDO Alliance

Phishing-resistant authentication isn’t a one-size-fits-all. It’s a journey. Complete the form to get our Passkey Playbook to start yours.


Podcast: The Passwordless Shift: Rethinking Identity for the Modern Enterprise

In the inaugural episode of Imprivata’s new podcast, Access Point, hosts Joel Burleson-Davis and Chip Hughes sit down with Andrew Shikiar, CEO of the FIDO Alliance, to explore the global […]

In the inaugural episode of Imprivata’s new podcast, Access Point, hosts Joel Burleson-Davis and Chip Hughes sit down with Andrew Shikiar, CEO of the FIDO Alliance, to explore the global movement toward passwordless authentication.

Listen to Episode 1 here: https://ow.ly/Y2Zl50X0vEJ

Monday, 22. September 2025

FIDO Alliance

The Indian Express: ‘Password resets cost businesses more than they realise’: Zoho exec on ROI of going passwordless


The world is rapidly moving away from traditional security methods. With FIDO standards in place, more companies are shifting toward passwordless authentication. Many industry players are already phasing out passwords from their authenticator apps.

In India, the passwordless market is estimated at $411 million in 2024 and projected to reach more than $1.5 billion by 2030. This reflects how businesses are opting for faster, smarter, and safer login experiences. To understand what’s driving this trend and how companies are adapting, indianexpress.com spoke with Chandramouli Dorai, chief evangelist, cyber solutions and digital signatures at Zoho Corp.


Biometric Update: To build trust in biometrics, Vietnam banks should adopt FIDO passkeys: report


VinCSS has released an industry first report on the authentication experience in apps for Vietnamese banks, and it shows a “strong shift from traditional to modern authentication methods” in the country’s banking ecosystem.

Biometrics rank as the most commonly used authentication method for high-risk transactions. It's also rated as the most convenient, with 58.3% of respondents listing it as such.

As usual, there are corresponding concerns about data privacy. One in three people worry their biometric data or digital credentials could be stolen or faked, leading to identity fraud, and authentication data theft is a top fear. “Many users feel that biometric authentication, though widely implemented, still is not secure enough for them or their digital assets.”


Back End News: HID offers passwordless authentication to support BSP compliance


HID, a company that provides secure identity solutions, announced the availability of its updated FIDO-certified authentication solutions in the Philippines, to help financial institutions and enterprises comply with the Bangko Sentral ng Pilipinas’ (BSP) new rules on IT risk management under the Anti-Financial Account Scamming Act (AFASA).

BSP requires organizations under its supervision to strengthen fraud management and identity verification by June 25, 2026. The directive calls for the adoption of secure, phishing-resistant methods, such as passwordless authentication through FIDO standards.

The measure comes amid rising online scams and fraud cases in the country. 


Security Boulevard: Beyond Passwords: A Guide to Choosing the Right Passkey


For many market analysts, cybersecurity agencies, and authentication experts, passkeys, based on the FIDO2 standard protocol, appear to be the future-proof authentication technology that will become mainstream within the next few years.

“By 2027, more than 90% of MFA transactions using a token will be based on FIDO protocols natively supported in IAM tools.”


White Paper: Passkeys and Verifiable Digital Credentials: A Harmonized Path to Secure Digital Identity

Editors

Christine Owen, 1Kosmos
Teresa Wu, IDEMIA Public Security

Abstract

Around the world, government entities are currently working to create and implement their digital identity strategies, which includes issuing verifiable digital credentials (VDCs) to their citizens. As a result of these initiatives, organizations are also beginning to discuss using VDCs as a primary form of authentication. VDCs are an important part of verifying a user’s identity that can be used alongside FIDO’s passkeys, which provide a primary authentication mechanism that is fast, safe, and reliable. Passkeys should be issued after a citizen’s VDC is presented for identity verification. This paper will discuss how VDCs and passkeys should coexist when implementing authentication for citizens.

As solutions that follow digital ID standards for online use cases (such as ISO/IEC 18013-7, W3C Verifiable Credentials, or IETF SD-JWT) are deployed, we anticipate growing confusion between the use of digital ID to ascertain identity attributes and the use of digital ID to let users authenticate themselves online.

This paper aims to clarify misperceptions and avoid confusion by discussing the coexistence of passkeys and digital ID/VDCs, including best practices for using these technologies.

Audience

Government entities, policy makers, relying parties

Download the White Paper

1. Verifiable digital credentials (VDCs) and passkeys

As new technologies continue to emerge, experts are finding new ways to use these solutions together. This paper discusses how verifiable digital credentials (VDCs) and passkeys should coexist when implementing authentication for citizens. VDCs and passkeys were both developed to secure identities in the digital world. However, they have different yet intersecting roles for end users. The following sections introduce VDCs and passkeys.

1.1 VDCs

A VDC can contain an electronic version of identity attributes or can be a digital representation of physical credentials (for example, driver's licenses, passports, or other identifiable information) that can be cryptographically verified. Typically, a VDC follows the W3C Verifiable Credentials Data Model, Internet Engineering Task Force (IETF) Selective Disclosure for JWTs (SD-JWT), or ISO/IEC 18013-5/7 (mobile driver's license) standards, which require cryptographic signatures to prove authenticity. Because these standard models use digital wallets, they create a secure, portable, and instant verification mechanism that can preserve privacy by requiring user approval prior to disclosure of sensitive information.
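To make the data model concrete, the following is a minimal sketch of a credential in the shape of the W3C Verifiable Credentials Data Model v1.1; the issuer URL, credential type, claim names, and proof value are illustrative placeholders rather than values from any real deployment:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "DriverLicenseCredential"],
  "issuer": "https://dmv.example.gov",
  "issuanceDate": "2025-01-15T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:holder123",
    "birthDate": "1990-01-01",
    "licenseClass": "C"
  },
  "proof": {
    "type": "DataIntegrityProof",
    "proofValue": "z3FXQ...(issuer signature over the claims)"
  }
}
```

The signature in the proof block is what lets any verifier check authenticity without contacting the issuer, while the wallet's consent prompt controls when these fields are disclosed.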

Deployment of VDCs is currently prolific in multiple regions around the world. For example, Asia-Pacific countries are embracing the idea of a VDC. Countries such as Japan, South Korea, and Australia are working together to ensure interoperability amongst their VDCs. Australia’s 2024 Digital ID Act created accreditation requirements for digital IDs and enhanced the trust framework between different providers. In the United States, the mobile driver’s license (mDL) movement is gaining momentum, and several states have already implemented or are piloting mDL programs. For a more detailed look at how government agencies are deploying VDCs, refer to the Appendix.

VDCs can be either a Person ID (PID) that represents a physical person, or Attested Attributes, which are documents that present properties of a person such as a driver's license, an age range, or educational degrees. The validity of VDCs can be expressed by Identity Assurance Levels (IAL) or Levels of Assurance (LOA), depending on a country's digital identity standards. In cases where a PID is used, a government entity may determine the types of verified information that are necessary to establish the identity of a person. In most cases, information about a person's name, address, date of birth, place of birth, official government document number, phone number, and other attributes such as the names of parents provides a strong basis for properly identifying a person through the controls available to government or private entities that perform verification on behalf of organizations. Modern technology may add records of biometrics such as fingerprints or face capture to further strengthen identity assurance. The European Digital Identity Wallet (EUDI Wallet) requires that the PID be issued at a high level of assurance (LoA), in accordance with the principles used by member states for their civil registration processes. Refer to section 5.3 EU VDC Efforts for more information on EUDI Wallets. Similarly, the United States Digital Identity Guidelines[1] require remote identity vetting to be performed at IAL2 for government use.

VDCs hold a digital representation of a document by defining the syntax of the issuer's URL, the category of the document, and other standardized elements such as a trust list (in which government entities issue a certificate guaranteeing that the URL is what it claims to be), and then creating a cryptographic seal that guarantees the authenticity of the document when it is presented to a relying party (also known as a verifier). Under the W3C Verifiable Credentials Data Model, the URL serves as the trust model.
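In code, the trust-list check just described might look like the following rough TypeScript sketch; fetchTrustList and verifySignature are hypothetical placeholders standing in for a government trust-list service and a signature library, not a real API:

```typescript
// Rough sketch of the verifier-side flow described above. Hypothetical
// helpers stand in for a trust-list service and a crypto library.
declare function fetchTrustList(): Promise<Map<string, CryptoKey>>; // issuer URL -> public key
declare function verifySignature(
  payload: Uint8Array,
  signature: Uint8Array,
  key: CryptoKey,
): Promise<boolean>;

async function verifyCredential(cred: {
  issuer: string;        // the issuer's URL, as embedded in the VDC
  payload: Uint8Array;   // the signed document data
  signature: Uint8Array; // the cryptographic seal
}): Promise<boolean> {
  const trustList = await fetchTrustList();     // published by a government entity
  const issuerKey = trustList.get(cred.issuer);
  if (!issuerKey) return false;                 // issuer URL not on the trust list
  return verifySignature(cred.payload, cred.signature, issuerKey);
}
```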

Because government entities are also building digital identity wallet schemes based on clear identity standards and mutual recognition mechanisms across different jurisdictions, the international adoption of digital identity wallets will make it easier, for example, for traffic police to validate a driver's license from another country, or for a bank to perform the proper checks and balances (for example, know your customer) during transactions.

[1]https://www.nist.gov/identity-access-management/projects/nist-special-publication-800-63-digital-identity-guidelines

1.2 Passkeys

Passkeys, a passwordless and phishing-resistant authentication mechanism, represent a significant advancement in privacy-preserving authentication and are designed to replace traditional passwords with a more secure and user-friendly alternative.

There are two types of passkeys: synced passkeys and device-bound passkeys. Synced passkeys are stored in the cloud and can be accessed across multiple devices, offering convenience and easy recovery. Device-bound passkeys, on the other hand, are stored locally on a specific device or security key, which provides enhanced security. For a more in-depth look at passkeys and how to implement them, refer to Passkey Central.

Passkeys exhibit a robust resistance to phishing attacks due to their foundational design principles. Each passkey is intrinsically tied to the specific origin of the service, identified by the Relying Party ID, thereby ensuring that authentication can only occur with the legitimate and intended service provider. This origin-specific challenge-response mechanism is inherently resistant to replication by phishing sites, rendering such attacks ineffective.
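As a concrete illustration of that origin binding, here is a minimal browser-side sketch using the standard WebAuthn API (navigator.credentials.create); the relying party ID, user details, and challenge handling are simplified assumptions for illustration:

```typescript
// Minimal sketch of passkey registration in the browser (WebAuthn).
// The rp.id below ties the new credential to this origin, which is why a
// phishing site on another domain can never exercise it. In practice the
// challenge and user handle come from the relying party's server.
async function registerPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice
  const userHandle = new TextEncoder().encode("user-1234");     // opaque ID, not PII

  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.gov", name: "Example Agency" },
      user: { id: userHandle, name: "resident@example.gov", displayName: "Resident" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],      // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or PIN check stays on-device
      },
    },
  });
}
```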

Equally significant is the privacy-preserving architecture of passkeys, which is designed to uphold user confidentiality and prevent tracking. During authentication, no personal or biometric data is transmitted or shared externally. Biometric verification processes, such as fingerprint or facial recognition checks, are conducted locally on the user’s device, ensuring that sensitive data remains under the user’s control.

Because passkeys generate unique cryptographic keys for each service and cannot be reused across platforms, cross-platform tracking is precluded and the privacy concerns associated with social logins that enable providers to monitor user activity across multiple services are avoided. Unlike traditional authentication methods (such as passwords or two-factor authentication), the use of unique cryptographic keys effectively mitigates the risk of cascading breaches that can result from a single compromised account. By replacing shared secrets with device-bound cryptographic keys, passkeys fundamentally neutralize phishing as a viable attack vector. When passkeys are synchronized across devices via cloud-based mechanisms, they are protected through end-to-end encryption.
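The authentication side can be sketched just as briefly; note that the ceremony exchanges only a signed challenge, so no password, biometric sample, or identity attribute leaves the device (the rpId and challenge source are again illustrative):

```typescript
// Minimal sketch of passkey authentication (WebAuthn). The relying party
// verifies the returned signature against the public key it stored at
// registration; nothing user-identifying travels over the wire beyond the
// credential ID.
async function signInWithPasskey(serverChallenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge, // fresh random bytes from the relying party
      rpId: "example.gov",        // must match the origin the passkey was created for
      userVerification: "required",
    },
  });
}
```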

This privacy-centric design fosters a sense of security and trust among users and reassures them that their personal information is not being tracked or misused by government entities or service providers. By combining phishing-resistant authentication with privacy-preserving principles, passkeys represent a significant advancement to secure and user-centric digital identity management.

1.3 The intersection of VDCs and passkeys

Both VDCs and passkeys enhance security and reduce friction during digital interactions. VDCs focus on securely representing qualifications and attributes (association with the real user), while passkeys specifically target phishing-resistant authentication (what a user has). When used together, these two technologies complement each other to enhance security within the digital world.

2. Core concepts of digital identities

This section covers the differences between digital identities, authentication, and authorization. It also examines how to enhance the use of verifiable digital credentials.

2.1 Verified identities vs authentication vs authorization

Digital identities play a crucial role in facilitating online transactions. Commonly, VDCs contain identity attributes that can be presented as evidence to verify the identity of the VDC holder. For example, an individual might use their VDC to assert attributes such as name, date of birth, and address in order to verify their identity and open a financial account with a bank. The bank can use these attributes to comply with regulations such as Know Your Customer (KYC) or Anti-Money Laundering (AML).

Identity verification is the process of confirming that a person is who they claim to be, often during onboarding, using trusted documents as proof of identity. Once a verified identity is established, authentication verifies the individual on return visits. While passwords or two-factor authentication have traditionally been used for authentication, passkeys are not only more secure than these traditional methods but also give users the convenience of quickly using a biometric to unlock a cryptographic key, which is then used for authentication. Unlike VDCs, which may assert information about the user each time they are presented, passkeys are privacy preserving and do not provide user attributes during authentication.

VDCs can be used to transmit requested identity attributes to relying parties during authorization requests.[1] This method can decrease the relying party’s exposure to risk, as it provides a more holistic view of an end user through their verified set of attributes. The relying party can then make an informed decision regarding that user’s access based on their rules for access and the attributes presented.

[1] Authorization is the process of granting the correct level of access to a user after their identity is authenticated. As a specific function within Identity and Access Management (IAM) systems, authorization helps system managers control who has access to system resources and set client privileges. Access controls are used to assign a set of predetermined access rights to a user identity and use of attribute exchanges to help determine authorization requests is gaining traction in the cybersecurity industry.

2.2 Enhancing verifiable digital credentials use

VDCs are designed to enable individuals to make verifiable claims about identity attributes or entitlements without serving as direct authentication mechanisms. Unlike authentication methods that authenticate users to specific services, VDCs focus on sharing attested claims (for example, date of birth, and address) through a decentralized triangle of trust that involves issuers, holders, and verifiers. VDCs are designed with privacy in mind, as standards such as SD-JWT and ISO mdoc require that when users share specific claims from a credential (for example, age from a driver’s license), the integrity of the original document must be proven cryptographically. Users retain ownership of VDCs, enabling them to present credentials across platforms without relying on centralized authorities. This flexibility makes VDCs ideal for scenarios that require proof of identity.
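The salted-hash mechanism behind SD-JWT selective disclosure can be sketched in a few lines of TypeScript (Node.js 16+); this is a rough illustration of the idea, not a conformant implementation:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Rough sketch of an SD-JWT-style disclosure. The issuer signs a credential
// containing only the digests (in its "_sd" array); the holder later hands
// the verifier the disclosure strings for exactly the claims being revealed,
// and the verifier re-hashes them to match the signed digests.
function makeDisclosure(claimName: string, claimValue: unknown) {
  const salt = randomBytes(16).toString("base64url");
  const disclosure = Buffer.from(
    JSON.stringify([salt, claimName, claimValue]),
  ).toString("base64url");
  const digest = createHash("sha256").update(disclosure).digest("base64url");
  return { disclosure, digest };
}

// e.g. reveal only a birth date from a driver's license credential:
const { disclosure, digest } = makeDisclosure("birthdate", "1990-01-01");
console.log(digest); // this value, not the birth date, sits in the signed credential
```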

Passkeys provide a phishing-resistant authentication method that binds authenticators to specific domains. Passkeys do not pass Personally Identifiable Information (PII) or similar information at the time of authentication, thus preserving the privacy of the user. VDCs, however, can pass along unnecessary PII when requested by a relying party and agreed upon by the Holder. For instance, a malicious actor could impersonate a bank's identity verification portal, capture a user's PII from their VDC, and exploit it. While cryptographic signatures ensure credential integrity, they do not address contextual misuse, highlighting a gap in current standards. It is therefore better to use VDCs for user credentials and attributes, and passkeys for authentication.

Despite these risks, VDCs are increasingly adopted for high-assurance processes:

Universities can use VDCs to streamline enrollment by verifying academic records and extracurricular participation.
Banks can employ VDCs for eKYC (electronic Know Your Customer), combining document verification (for example, passports), biometric liveness checks, and AML and Politically Exposed Person (PEP) screening to onboard customers remotely.
VDCs mitigate fraud in transactions requiring stringent identity assurance, such as cross-border financial transfers or healthcare licensing. For example, selfie verification and document-centric checks ensure the physical presence of users during high-value agreements.
In cross-border education, VDCs enable instant verification of international student credentials, reducing administrative delays and fraud risks.

However, these applications often require supplementary safeguards (for example, multi-factor authentication, liveness detection) to compensate for the PII phishing susceptibility of VDCs.

VDCs offer transformative potential for decentralized identity management, particularly for enrollment and high-assurance transactions. By integrating VDCs with phishing-resistant authentication mechanisms and advancing interoperability standards, government entities and organizations can harness their benefits while mitigating risks. As the ecosystem evolves, collaboration among government entities, standards bodies, and industry stakeholders will be essential to balance innovation with security.

3. Key considerations

VDCs and passkeys are built according to widely accepted standards, which promotes interoperability and offers seamless integration across various platforms and services. This standardization is crucial for widespread adoption and the creation of a truly interconnected digital identity ecosystem.

3.1 Privacy consideration

Privacy preservation is a key feature that must be present for both VDCs and passkeys. Combining VDCs and passkeys benefits both end users and relying parties. In this ecosystem, users maintain control over their credentials and can selectively share attributes as needed, for example when enrolling as a user with a relying party. Relying parties can be confident that the person behind the VDC is, more likely than not, who they say they are.

Moving forward, however, the identity industry as a whole should address growing concerns about data privacy and control in the digital age. While VDCs are privacy-preserving mechanisms that hold and share verified credentials, relying parties should only request the minimum attributes required from the end user to enroll them in the application. Depending on the application type and on legal and regulatory requirements, the attributes could be as few as an email address and name, or may also include a verified home address and national identity number.

3.2 eIDAS 2.0 regulation

The European Digital Identity Regulation (eIDAS 2.0) describes where passkeys can be implicitly or explicitly used within the EUDI Wallet. The PID for an EUDI Wallet should be used with a high eIDAS LoA for initial authentication to a relying party. A FIDO passkey can be enrolled to the user's EUDI Wallet to meet this requirement. The passkey can then be used for repeated authentication with pseudonyms to the relying party. In this way, passkeys can coexist with or complement the VDC in the EUDI Wallet ecosystem.

Section 5f.3[1] of the eIDAS 2.0 regulation states that while very large online platforms must accept and facilitate the use of EUDI Wallets for authentication, they must do so “in respect of the minimum data necessary for the specific online service for which authentication is requested”. Therefore, eIDAS 2.0 allows online platforms to support pseudonymous PID authentication, rather than requiring those platforms to also accept VDCs that are issued by and tied to a government-issued credential.

Consequently, passkey providers will need to issue and restore passkeys. The passkey provider services can be operated by different entities in the EUDI Wallet ecosystem: by the cloud-based EUDI Wallet backend, by the PID provider, or by the Qualified Trust Services Provider (QTSP) that issues the (Qualified) Electronic Attestation of Attributes ((Q)EAAs).

[1] https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401183

Cloud-based EUDI wallets and passkey providers have an interesting synergy. PIDs and (Q)EAAs are hosted at a cloud-based EUDI Wallet backend that is accessed from the user’s devices using FIDO passkeys. If the user’s device is lost or replaced, the user would only need to restore the FIDO passkey, which will then give the user access to the PIDs and (Q)EAAs. This allows for rapid recovery of an EUDI Wallet since the user can first download the FIDO passkey to a device and then use the FIDO passkey to get instant access to the cloud-based EUDI Wallet.

3.3 Use cases for an integrated approach for VDCs and passkeys

Passkeys and digital identity wallets are not competing technologies, but rather complementary solutions that together create a robust, portable identity ecosystem. Passkeys serve as a secure gateway to digital identity wallets, providing strong defense against unauthorized access, and wallets serve to provide valuable data during high-risk transactions. This combination enhances security and streamlines the user experience by unifying verified identity (EUDI Wallet) with easy authentication (passkey).

Perhaps most importantly, this combination allows for the creation of reusable identities. Users can prove their identity once and then reuse the same VDC across multiple services, significantly reducing friction for digital interactions while maintaining high security standards. The potential applications for this integrated approach to digital identity and authentication are vast and span multiple sectors:

Online verification: In e-commerce (for example, state-backed liquor stores), age verification for purchasing restricted products can be streamlined using verified credentials stored in a digital wallet and accessed securely with a passkey.
Government services: Secure access to tax filing systems and other government benefits can be facilitated through this combined approach, enhancing security while improving user experience.
Healthcare: Verifying a prescribing doctor's credentials across multiple hospitals becomes more efficient, and the secure transfer of patient records between healthcare providers can be streamlined.
Education: Higher education systems can more effectively prevent account takeovers, and students can create reusable identities that carry their records throughout their academic careers and beyond.
Financial services: Know Your Customer (KYC) processes can be significantly streamlined, and enhanced security for high-risk transactions can be implemented more effectively.

In most use cases, relying parties should use VDCs to enroll their constituents at the beginning of their interactions and accept passkeys for further interactions with the constituent. If a constituent is conducting a high-risk transaction, then the relying party should ask for additional attributes from the VDC at the time of authentication. Additionally, a VDC should be used for passkey recovery, while a passkey should be used to access the VDC.

4. Recommendation

Government entities and organizations who adopt passkeys as the primary form of authentication for constituents can leverage their enhanced security and ease of use. To ensure trust and usability, passkeys should be backed by verified digital credentials (VDCs), especially in scenarios where a verified identity is required for passkey issuance or account recovery. VDCs provide a robust mechanism to securely recover passkeys while maintaining a high level of assurance.

To implement this effectively, a tiered approach based on Identity Assurance Level (IAL) requirements and the risk tolerance of the data being protected within the service is recommended. For services with moderate IAL requirements, relying parties (RPs) should take the following steps (sketched in code below):

1. Read a government-issued ID to verify the user's identity.
2. Create a passkey tied to the verified identity.
3. Use the passkey for re-authentication in all subsequent interactions.
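A server-side sketch of those three steps follows; verifyVdcPresentation, createAccount, and storePasskey are hypothetical placeholder names, not a real library API:

```typescript
// Sketch of the moderate-IAL flow above: verify the VDC once at onboarding,
// bind a passkey to the verified account, then rely on the passkey for every
// return visit. All helper functions are hypothetical placeholders.
type VerifiedIdentity = { subject: string; assuranceLevel: string };

async function onboardConstituent(vdcPresentation: unknown): Promise<string> {
  // Step 1: verify the government-issued credential (signature, trust list).
  const identity: VerifiedIdentity = await verifyVdcPresentation(vdcPresentation);

  // Step 2: create an account and bind a freshly registered passkey to it.
  const accountId = await createAccount(identity);
  await storePasskey(accountId /*, public key from the WebAuthn ceremony */);

  // Step 3: all later sign-ins use the passkey alone; the VDC is only
  // requested again for high-risk transactions or passkey recovery.
  return accountId;
}

// Hypothetical stubs so the sketch type-checks; real systems would call a
// wallet/verifier service and a WebAuthn server library here.
declare function verifyVdcPresentation(p: unknown): Promise<VerifiedIdentity>;
declare function createAccount(id: VerifiedIdentity): Promise<string>;
declare function storePasskey(accountId: string): Promise<void>;
```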


For services with higher IAL requirements, collaboration between government entities and organizations like the FIDO Alliance is essential. Together they can develop solutions to ensure that passkeys meet stringent assurance levels while maintaining user privacy and convenience.

Additionally, there is a pressing need for standards and mutual recognition arrangements for IAL across jurisdictions. Government entities should work to establish clear guidance that states which IAL levels are required for specific services for legal compliance and consumer protection. These standards should aim to be valid across as many countries as possible to facilitate interoperability and trust for cross-border digital interactions.

By adopting passkeys as the foundation of authentication and aligning them with verified credentials and standardized IAL frameworks, government entities can enhance security, improve user experience, and foster global cooperation in digital identity management.

5. Appendix

5.1 Verifiable digital credentials for government deployments

The digital identity landscape is undergoing significant transformations worldwide. This appendix explores how government entities around the world are implementing VDCs.

5.2 APAC VDC efforts

Asia-Pacific countries are embracing the idea of a VDC. Countries such as Japan, South Korea, and Australia are working together to ensure interoperability amongst their VDCs. Australia’s Digital ID Act 2024 created accreditation requirements for digital IDs and enhanced the trust framework between different providers.

In Asia, government-issued digital credentials are advancing rapidly but unevenly, a reflection of diverse economic, technological, and regulatory landscapes. Recent deployments highlight both the progress and the critical role of standards bodies (such as ISO/IEC and W3C) in shaping secure, interoperable systems. Countries such as India, Singapore, and South Korea are leading with robust digital ID systems, while Australia is harmonizing mDLs with international standards for secure, interoperable credentials.

Singapore’s SingPass is a benchmark for seamless public and private service access. South Korea leverages digital IDs for e-governance, incorporating FIDO and ISO/IEC 29115 standards. Japan’s Digital Agency[1] drives Individual Number (My Number) card enhancements through initiatives such as the Asia Pacific Digital Identity Consortium[2] launched in December 2024.

The government of Japan started issuing digital National IDs (the Individual Number card, or My Number card, on smartphones) in mdoc format (standardized under ISO/IEC 18013-5) for iPhone users on June 24, 2025, and plans to issue them for Android users in 2026. The digital National ID includes identity information such as name, birthdate, address, gender, and individual number (called My Number in Japan). The aim is for digital National IDs to be used in various identity proofing scenarios, both in-person and remote.

In Southeast Asia, Thailand's 2022-24 Digital ID Framework targets 10 million digital IDs and national digital ID platforms; when discussing biometrics across Mobile ID and D.DOPA, the framework's creators referenced NIST 800-63 and ISO/IEC 19794 standards. Malaysia's MyDigital ID, which adheres to ISO/IEC 27001, and Sarawak's planned Sarawakpass aim to emulate SingPass for cashless transactions and service access. The Philippines' PhilSys has enrolled 68 million people, focusing on digital issuance to bypass physical card delays. In March 2025, Taiwan's Digital Ministry introduced a prototype Taiwan Digital Identity Wallet (TW DIW), a non-mandatory mobile app for storing IDs and licenses that uses biometric authentication and selective disclosure. A sandbox trial began in March, with broader testing planned for December, but it is not a full digital ID replacement and excludes medical data sharing.

In Australia and New Zealand, harmonization efforts for mDLs center on adopting ISO/IEC 18013-5, which ensures secure, interoperable digital credentials verifiable domestically and internationally. In Australia, Austroads' Digital Trust Service (DTS) leads the charge, with a pre-production version tested successfully for national scalability. New South Wales, with 4.5 million users since 2019, is transitioning its Service NSW app, which offers app-based mDLs, to full compliance, ensuring legal equivalence to physical cards. South Australia's mySAGOV app incorporates the standard's verification features. The DTS, targeting a 2025-26 rollout, enables cross-jurisdictional and global verification; it was demonstrated at the 2024 Identity and Verifiable Credentials Summit for uses like U.S. airport access and includes New Zealand in its interoperability framework. New Zealand is aligning its NZTA app-based digital licenses with ISO/IEC 18013-5, building on mutual recognition agreements with Australia.

5.3 EU VDC efforts

In Europe, the European Digital Identity Regulation (eIDAS 2.0), which came into force in May 2024, marks a pivotal shift in how digital identities are managed across the European Union. This updated framework introduces the European Digital Identity Wallet (EUDI Wallet), which aims to provide EU citizens with a secure, interoperable digital identity solution for accessing public and private services across member states.

The EUDI Wallet is a cornerstone of the eIDAS 2.0 regulation[3] and will be offered free of charge to all EU citizens. The purpose of the EUDI Wallet is to enable EU citizens to prove their identity when accessing both online and offline resources, or to present specific personal attributes without revealing their full identity. EUDI Wallets will support use cases such as the mobile driving license, payments, access to public services, and opening a bank account.

The EUDI Wallet architecture is outlined in the European Digital Identity Wallet Architecture and Reference Framework (the ARF), which specifies the formats and protocols to be used by the EUDI Wallets. Each EUDI Wallet will be bootstrapped with a Personal Identity Document (PID), which will be enrolled at the high eIDAS Level of Assurance (LoA). In addition to the PID, users will have the option to add additional (Q)EAAs, which can prove the user’s identity and claims to relying parties.

The ARF has specified that the following formats are suitable for the PID and (Q)EAAs:

ISO/IEC 18013-5 mobile driving license (mDL)
W3C Verifiable Credentials Data Model v1.1
IETF SD-JWT-based Verifiable Credentials (SD-JWT VC)

Furthermore, International Civil Aviation Organization (ICAO) Digital Travel Credentials (DTC) can also be used as a (Q)EAA with the EUDI Wallet.

5.4 US VDC Efforts

In the United States, the mobile driver’s license (mDL) movement is gaining momentum, and several states have already implemented or are piloting mDL programs. Unlike the centralized approach of eIDAS 2.0, the U.S. initiatives are being developed more organically, driven by individual federal agencies and state efforts, alongside industry collaborations. These developments reflect a growing recognition of the need for robust, user-centric digital identity solutions in an increasingly digital world, although they approach this goal through different regulatory and technological paths.

As US states provide their constituents with ID cards and driver’s licenses, the responsibility of creating mID and mDLs lies with each of the states. As such, the development and implementation of mDLs in the US has been a gradual and varied process across different states.[4] While only about a third of US states currently offer mDLs, many states are pushing forward, as they recognize the potential benefits of mDLs in improving remote transactions, reducing identity fraud, and enhancing digital identity verification for both government services and private sector services.

The Transportation Security Administration (TSA) is evaluating the potential impact of VDCs (such as mobile driver's licenses) on aviation security and operations. The TSA has integrated digital identity capabilities, including the acceptance of state-issued mobile driver's licenses, at TSA checkpoints using the Credential Authentication Technology 2 (CAT-2) system to provide a secure and seamless method of verifying an individual's identity. Currently, the TSA is accepting mobile driver's licenses and mobile IDs from 15 participating states. In October 2024, the TSA published a final rule in the Federal Register that allows passengers to continue using mobile driver's licenses (mDLs) for identity verification at TSA airport security checkpoints now that REAL ID enforcement has begun (on May 7, 2025).

In addition to publishing the Digital Identity Guidelines SP 800-63, the NIST National Cybersecurity Center of Excellence (NCCoE) launched an mDL adoption acceleration project to bring together stakeholders from across the mDL ecosystem to build out a reference implementation, promote standards and best practices for mDL deployments, and address mDL adoption challenges. The first NCCoE use case will focus on helping consumers create financial accounts and helping financial institutions meet Customer Identification Program/Know Your Customer (CIP/KYC) requirements using mDLs.

For the US Federal government's digital interactions with users, agencies are embracing the idea of a reusable identity stored in a digital identity wallet. Generally speaking, these VDCs are cloud-based and would be used to verify a user's identity prior to interacting with a federal agency for actions such as enrolling in public benefits or filing taxes. These VDCs are also tied to an authenticator that the constituent would use to sign in to the agency's application. Within the federal space, the entity that issues a VDC tied to an authenticator is called a credential service provider (CSP).

5.5 UK VDC Efforts

The UK Government released the Digital Identity and Attributes Trust Framework (DIATF) gamma version (0.4)[5] in November 2024, which outlines the standards and roles for digital identity services relating to digital wallets. The UK plans to introduce digital driving licences in 2025, which will be available through a new GOV.UK digital wallet app on smartphones.

[1] https://www.digital.go.jp/en

[2] https://www.apdiconsortium.org/

[3] The “Regulation (EU) 2024/1183 as regards establishing the European Digital Identity Framework” (eIDAS 2.0) was adopted by the EU parliament in April 2024. The eIDAS 2.0 regulation will be extended with Commission Implementing Regulations (CIRs), also known as “implementing acts”, which will elaborate certain legal aspects of the eIDAS 2.0 regulation. The eIDAS 2.0 CIRs continue to be specified.

[4] As of March 2025, the states that offer mDLs include Alaska, Arkansas, Arizona, California, Colorado, Delaware, Georgia, Hawaii, Iowa, Louisiana, Maryland, Mississippi, New York, Ohio, Puerto Rico, Virginia, Utah, and West Virginia.

[5] https://www.gov.uk/government/publications/uk-digital-identity-and-attributes-trust-framework-04

6. Contributors

Jerome Becquart, Axiad
John Bradley, Yubico
Tim Cappalli, Okta
Sebastian Elfors, IDnow
Hideaki Furukawa, Nomura Research Institute, Ltd.
William Fisher, NIST
Henna Kapur, Visa
Sue Kooman, American Express
Matthew Miller, Cisco
Jeff Nigriny, CertiPath, Inc.
Joe Scalone, Yubico
Alastair Treharne, Ingenium Biometric Laboratories

7. Document History

Change: Initial publication
Description: White paper first published.
Date: September 2025

8. References

The Asia-Pacific Digital Identity (APDI) consortium. APDI consortium. https://www.apdiconsortium.org/

Digital Agency. Home. Digital Agency. https://www.digital.go.jp/en

The FIDO Alliance. Home. Passkey Central. https://www.passkeycentral.org/home

NIST. Digital Identities – Mobile Driver’s License (mDL). NIST National Cybersecurity Center of Excellence. https://www.nccoe.nist.gov/projects/digital-identities-mdl

NIST. (2025, July). NIST SP 800-63-4 Digital Identity Guidelines. NIST. https://www.nist.gov/identity-access-management/projects/nist-special-publication-800-63-digital-identity-guidelines

Office for Digital Identities and Attributes and Department for Science, Innovation and Technology. (2025, June 26). UK digital identity and attributes trust framework (0.4). GOV.UK. https://www.gov.uk/government/publications/uk-digital-identity-and-attributes-trust-framework-04

The European Parliament and the Council of The European Union. (2024, April 11). Regulation (EU) 2024/1183 of the European Parliament and of the Council. EUR-Lex.europa.eu. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401183

Transportation Security Administration. (2024, October 7). TSA announces final rule that enables the continued acceptance of mobile driver’s licenses at airport security checkpoints and federal buildings. TSA. https://www.tsa.gov/news/press/releases/2024/10/24/tsa-announces-final-rule-enables-continued-acceptance-mobile-drivers


Oasis Open

Discover the new XLIFF 2.2: Join Us for an Interactive Webinar on October 7!


By Lucía Morado, OASIS XLIFF TC Co-Chair

XLIFF 2.2 is the new version of the OASIS XLIFF (XML Localisation Interchange File Format). Developed by the OASIS XLIFF Technical Committee, XLIFF is the main bilingual bitext format in the localisation industry. This interoperability standard defines a normative method for storing and exchanging localisable data across the various stages of the localisation process. This new version (XLIFF 2.2) introduces valuable enhancements while remaining backward compatible with the previous ones (XLIFF 2.1 and 2.0). 

XLIFF 2.2: What’s New?

One of the main changes of the new version is the new presentation structure of its specification. XLIFF 2.2 is now available in two formats:

XLIFF 2.2 Core: Contains only the essential information needed to create a valid XLIFF file.
XLIFF 2.2 Extended: Includes the XLIFF Core as well as all the additional modules.

For those unfamiliar with our terminology:

XLIFF Core is the minimal set of XML elements and attributes that allows one to define a set of translation units organised by source and target language. If a tool developer wishes to claim support for XLIFF 2.2, they must implement XLIFF Core.
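For readers new to the format, a core-only document is very small; the sketch below uses invented content and assumes (consistent with XLIFF 2.2's backward compatibility) that the core namespace introduced in XLIFF 2.0 is retained:

```xml
<!-- A minimal sketch of an XLIFF Core document with illustrative content.
     The namespace shown is the one used since XLIFF 2.0; we assume 2.2
     retains it, given its backward compatibility with 2.0 and 2.1. -->
<xliff version="2.2" xmlns="urn:oasis:names:tc:xliff:document:2.0"
       srcLang="en" trgLang="fr">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Hello world</source>
        <target>Bonjour le monde</target>
      </segment>
    </unit>
  </file>
</xliff>
```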

XLIFF Modules offer additional sets of XML elements and attributes that allow the inclusion of (potentially useful) information about specific processes. For example, the Size and Length Restriction Module provides a mechanism to annotate possible constraints on content size and length. Tool developers may choose to support the modules that are most relevant for their specific use cases. XLIFF 2.2 includes 9 modules: Translation Candidates, Glossary, Format Style, Metadata, Resource Data, Size and Length Restriction, Validation, ITS, and Plural, Gender and Select.

By introducing a simplified version of the specification (XLIFF Core), which contains only the essential information needed to implement XLIFF, we aim to facilitate adoption among developers who are primarily interested in supporting the core functionality of the standard.

The other major change in XLIFF 2.2 is the release of the new Plural, Gender and Select Module. This module, which was proposed by Mihai Nita (Google), provides a method to store information required to represent and process messages with variants, such as plural forms or gender distinctions.

Upcoming Webinar

On October 7, members of the OASIS XLIFF TC will host a free webinar to present XLIFF 2.2. This event will cover the aforementioned key changes introduced in this new version, along with other minor enhancements. The main presentation will be followed by a live Q&A session, offering attendees the chance to engage directly with the experts behind the standard.

This is a unique opportunity for anyone interested in this influential localisation standard to learn about its latest developments, from the people maintaining and developing it. Do not miss it!

For those unable to attend live, a recording of the webinar will be made available on the official OASIS XLIFF TC website after the event.

We also encourage everyone who wishes to share comments or suggestions about the standard with the OASIS XLIFF TC to use the official public mailing list, which is open for community feedback.

The post Discover the new XLIFF 2.2: Join Us for an Interactive Webinar on October 7! appeared first on OASIS Open.

Thursday, 18. September 2025

Hyperledger Foundation

Strengthening Blockchain Interoperability for Enterprises: Harmonia Lab Merges into LFDT Smoot


The blockchain industry has matured significantly over the past decade, but one challenge remains: interoperability. Indeed, with thousands of blockchain networks currently operating in isolation, the ability to securely and efficiently exchange data and assets between blockchains remains critical.


MyData

Public Statement: Protecting, Empowering and Inspiring Children as they navigate and construct their digital worlds

Executive Summary: This statement reflects the MyData community’s commitment to advancing children’s digital rights in alignment with the EU’s regulatory initiatives. While the EU leads globally in protecting children online, […]

Tuesday, 16. September 2025

Oasis Open

Google Donates Secure AI Framework (SAIF) Data to Coalition for Secure AI, Advancing Industry-Wide AI Security Standards


OASIS Open Project Accelerates Collaborative Development of Open Source AI Security Tools and Best Practices

Boston, MA – 16 September 2025 – OASIS Open, the international open source and standards consortium, announced that Google has donated data from its Secure AI Framework (SAIF) to the Coalition for Secure AI (CoSAI), an OASIS Open Project. The contribution includes the Coalition for Secure AI Risk Map (CoSAI-RM), now available as part of CoSAI Tooling. The CoSAI-RM is a framework for identifying, analyzing, and mitigating security risks in AI systems, providing a structured map of the AI security landscape and a common language to address vulnerabilities that traditional software security practices often miss. CoSAI will continuously update, develop, and expand the Risk Map to address emerging threats and evolving security challenges in AI systems.

This contribution strengthens CoSAI’s mission to enhance trust and security in AI development and deployment, directly supporting its four Workstreams: Software Supply Chain Security, Preparing Defenders for a Changing Cybersecurity Landscape, AI Security Risk Governance, and Secure Design Patterns for Agentic Systems. 

Heather Adkins, Google, VP Security Engineering, said, “Google developed SAIF to address the unique security challenges that emerge as AI systems become more sophisticated and widely deployed. By contributing this framework to CoSAI, Google is ensuring that organizations of all sizes can access the same security principles and practices that we use to protect our own AI systems.”

SAIF provides a comprehensive approach to AI security that spans the entire AI development lifecycle, including practical tools such as the SAIF Risk Assessment, which helps organizations identify and mitigate AI-specific vulnerabilities, including data poisoning, prompt injection, and model source tampering.

“Google’s SAIF contribution represents the kind of industry leadership that makes CoSAI successful by bringing proven security frameworks developed at scale directly into the hands of the global AI community,” said J.R. Rao, IBM, Co-Chair of the CoSAI Technical Steering Committee (TSC). “This donation will significantly accelerate our workstreams, especially on AI Security Risk Governance, and provide immediate, practical value to organizations working to secure their AI deployments. It’s a perfect example of how open collaboration can transform innovative research into accessible tools that benefit everyone.” 

Get Involved

CoSAI now includes more than 40 industry partners working collaboratively to address AI security challenges. Its Premier Sponsors, including EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Snyk, Trend Micro, and Zscaler, are leading the way in advancing secure AI practices. CoSAI’s work is also grounded in the support of its Founding Sponsors: Amazon, Anthropic, Cisco, Cohere, GenLab, Google, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. 

Technical contributors, researchers, and organizations are welcome to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide.

CoSAI operates under OASIS Open, an international standards and open-source consortium. www.coalitionforsecureai.org

Media Inquiries: communications@oasis-open.org

The post Google Donates Secure AI Framework (SAIF) Data to Coalition for Secure AI, Advancing Industry-Wide AI Security Standards appeared first on OASIS Open.


Hyperledger Foundation

LF Decentralized Trust at One: Accelerating Innovation, Collaboration, and Opportunity


Today marks one year since the launch of Linux Foundation Decentralized Trust (LFDT). Over the past 12 months, our community has grown, new projects have taken root, and existing projects have advanced. It’s a good moment to reflect on what we’ve built together and, more importantly, to look ahead at the opportunities still to come.


Technical Advisory Council Report: Year One of LF Decentralized Trust

Introduction

As we close out the first full year of the Linux Foundation Decentralized Trust (LFDT), it’s clear that our community has made meaningful strides in strengthening both our projects and our processes. Over the past twelve months, the Technical Advisory Council (TAC) has focused on refining our project lifecycle, expanding task forces to tackle critical areas like security and contributor engagement, and ensuring that our governance structures meet the needs of a growing and diverse ecosystem.


Hiero Turns One! Celebrating a Community-Driven Journey, Milestones, and What’s Ahead


September marks an exciting milestone for Linux Foundation Decentralized Trust’s newest ledger project: Hiero! Over the past year, the Hiero project has grown from an ambitious idea into a developed, community-driven initiative along with other projects under the LF Decentralized Trust (LFDT) umbrella.


We Are Open co-op

The algorithms are not working for us


If you’ve been following us for a while, you might know that We Are Open Co-op is a collective that believes in bringing our whole selves to work. We get together and talk about our feelings and about our worker-owned cooperative. Over the years, we’ve had extremely productive co-op days that have resulted in things like:

A site to collect our AI focused think pieces, frameworks and strategies: https://ailiteracy.fyi
A portfolio page to round up some of our work in Digital Credentialing: https://digitalcredentials.fyi
A place to make our favourite community building tactics easily accessible: https://communitybuilding.fyi

We’re makers and while we’ve managed to share much of what we make, we spend less time promoting ourselves than we probably should. We aren’t natural marketers, and platforms such as LinkedIn have soul-sucking algorithms that prioritise shininess over depth.

So, without further ado, this is a promotional post to say that we have upcoming capacity for interesting work. Hire us! 

At the moment, we’re helping Amnesty International UK build an online community for their activists and we’re collaborating with Skills Development Scotland on a potential national system for Verifiable Credentials. Both of these projects are complex, interesting and belong to organisations that are actively trying to help people. 

Other projects we’ve worked on this year have involved helping MIT’s Digital Credentials Consortium host a summit for their network in The Netherlands, carrying out research for the BBC on how they can approach AI Literacy education, and publishing a report on the environmental impact of AI for Friends of the Earth.  

While we were thinking about what to write in this post, we reflected on why these projects feel right to us. Beyond the specific themes involved - we usually work at the intersection of learning, technology, and community - we also talked about these cross-cutting attributes:

Strategy 

WAO loves helping organisations figure out the strategy stuff. We are great at helping organisations develop and refine big visions and then figuring out the practical and tactical ways people can achieve those visions. Through our theory of change workshops, strategy sessions and participatory methodologies, we’ve helped all kinds of organisations dream big, achievable dreams.

We believe in proactive planning, even in reactive situations! For example, when we helped Greenpeace International develop their crisis comms training programme, we were helping to establish a community of practice that could react to potential future critical incidents. When we worked with Friends of the Earth, we researched and wrote a nuanced article that helped them thread the needle between AI and environmental activism. With the Digital Credentials Consortium, we helped them establish a community-focused engagement plan that reflected a complex network of organisations. 

We are great at seeing the big picture and developing practical strategies that help bring those visions to life.

Advocacy

It shouldn’t be surprising to anyone who has read any of our other posts that we have opinions. We really do. We are internet people who have been working at the intersection of technology and society for our entire careers. We have a lot to say about privacy, decentralisation, open source, environmentalism, education and so much more. At this moment in history, we are still working to create technology that respects our rights. 

It’s not always easy, but we firmly believe in modelling the behaviours that you want to see in the world. That means being open and honest and helping others understand what’s at stake in regards to our data, the environment and our futures. We believe in solidarity, not charity and are organised as a cooperative aligned with the International Cooperative Alliance’s identity, values & principles. The spirit of WAO page on our wiki further describes our philosophical bent ;)

Work that has real impact

We have been lucky to work with clients and on projects that are looking to positively impact people’s lives. This is the kind of work we love. Real impact for us means that at the end of the day, the work we’ve done has helped empower, educate or encourage people to live their best lives. As we said at the beginning of this post, we believe in bringing our full selves, and we believe in helping others do the same. Real impact comes from communities and connections with real people.  

Projects where we have a lot of agency

As our website states, we don’t just think outside the box; we shred it (and then recycle it, obviously). Our services include consultancy, workshops and training, project and product management, research and development, community building, and everything in between. That means we have yet to meet a pre-established set of KPIs that can handle our general awesomeness. We prefer projects that allow us to stretch our wings past meaningless metrics and foregone conclusions. We work in partnership.

What we do at WAO:

Collaborate with organisations on sensemaking, digital transformation, strategy, product and generally making their work awesome

Work openly because that’s how innovation happens and the world becomes a better place

Adapt our work to the realities of your organisation, because humans are messy

Share our networks, ideas and brains with your organisation. When we go, we like to go big.

Collaborate with empathy, understanding, and humanity because life is hard and no-one likes jerks.

Find other people if you want to…

Hand over a spreadsheet of unnegotiated deliverables and expect us to get them done like you’re a teacher and this is our homework.

Keep us in the dark about what’s going on in your organisation as though you’re our parents.

Require sign-off for every small decision by someone outside the project team as if we’re not grown up enough to decide what needs doing.

Argue about paying us what you agreed (or pay us late every time). We’re arguably inexpensive. This isn’t a flea market, and we’re not haggling.

Treat us as taskrabbits, data entry clerks, or otherwise insist that we’re merely “consultants”. This ain’t Deloitte.

Responsive clients

Finally, we’ve been quite fortunate to have found clients that “get” us. We love working with people who are also bringing their full selves to work and who aren’t afraid to change course when new information comes in. We know authenticity when we see it, and we appreciate honest, reflective responsiveness. We don’t know everything, and it’s ok if you don’t either. Together we can figure it out.

So, do you have an upcoming project that you’d like to talk to us about? Schedule a free 30 minute call and let’s see if we can work together. 

All images in this post are licensed cc-by-nd Bryan Mathers for WAO

Monday, 15. September 2025

Hyperledger Foundation

ToIP and DIF Announce Three New Working Groups for Trust in the Age of AI

Trust Over IP (ToIP), an LF Decentralized Trust (LFDT) project, and the Decentralized Identity Foundation (DIF) have launched three new Working Groups focused on digital trust for agentic AI:


DIDAS

The E-ID and You: DIDAS Launches the Youth Challenge

DIDAS Position on the E-ID

As an association, DIDAS advocates for the state-issued E-ID as well as the underlying trust infrastructure. Both are key building blocks for a secure, efficient, and trustworthy digital society in Switzerland.

E-ID

The E-ID enables reliable identification in the digital space – especially where secure identity verification is required:

Public services, e.g., ordering official documents such as register extracts

Opening a bank account

Online purchases, e.g., buying alcohol or concluding a mobile phone contract

Use cases can either be digital versions of existing processes (such as opening a bank account) or entirely new scenarios that exist only online.

Trust Infrastructure

The E-ID is based on a cryptographically secured trust infrastructure. This provides the foundation for a wide range of digital proofs (verifiable credentials), which – independently of the E-ID – can be managed in a personal wallet.

Put simply: everything that today exists as a plastic card – and much more – can be securely digitized and stored, for example:

Learner’s permit and driver’s license

Insurance card

Proof of residence (e.g., for subsidized family transport services)

Membership cards (loyalty programs, fitness clubs, libraries)

Educational certificates such as diplomas

Digital medical prescriptions

Personal health data (see award-winning prototype from the GovTech Hackathon 2024)

With the trust infrastructure, access to digital services becomes simpler, and legal certainty increases – thanks to the use of original data rather than insecure copies. This boosts efficiency, trust, and reliability in the digital space. Applications range from familiar processes to new digital services that do not exist in the analog world.

DIDAS launches youth competition “The E-ID and You”

Switzerland is getting an E-ID! 

The E-ID is groundbreaking for the future of Switzerland’s digital identity infrastructure. Discussions prior to the vote have shown the importance of building digital literacy among the population.

DIDAS is therefore launching an innovative competition, «The E-ID and You» (the «E-ID Challenge»), for secondary school, high school, and vocational school classes. This competition offers young people the opportunity to engage creatively and critically with one of Switzerland’s most important digital policy issues.

The competition challenges young people to explain in a short video what the E-ID and the associated trust infrastructure are and what possible applications there are for them. The focus is not only on creative presentation, but also on thoughtful contributions that foster the understanding of the paradigm shift and the democratic discourse. The three best entries will receive prizes totaling CHF 3,500.  

Further information and participation details can be found in the media release and will soon be available on this page.

E-ID Challenge

Press Release

Thursday, 11. September 2025

The Engine Room

Community Call: Website Security for CSOs and Nonprofits

Join our next community call for a discussion about website security and how civil society organizations, human rights defenders and nonprofits can prepare, respond to and recover from website threats. 

The post Community Call: Website Security for CSOs and Nonprofits appeared first on The Engine Room.


DIF Blog

Nominations open for Steering Committee Election 2025

DIF was established to represent our fast-changing community and create a safe space for designing and prototyping ambitious new identity architectures. From the original handful of members 9 years ago, DIF grew into an organization with over 400 member companies contributing thousands of lines of code and documentation and changing the conversation in the tech industry. 

Since its inception, DIF has been governed by a Steering Committee, like most Linux Foundation projects. The Steering Committee is comprised of volunteers from core member organizations; its primary functions are to set strategy, deliberate on finer points of DIF’s direction and identity, maintain the processes of the organization, provide guidance to the executive director, and tend to the health of the community and its conversations. 

Periodically, DIF holds elections for roughly half of the Steering Committee’s seats to keep representation and community needs aligned. The process is specified here, but the following overview may be a faster read:

Dates:

11 Sept: Announcement of election + nomination period opens

9 Oct: Last day to propose questions to the candidates

10 Oct: DIF staff posts aggregated questions to all candidates to the DIF blog

16 Oct: Nomination period closes & platform statements due

23 Oct: SC ballot opens

30 Oct: SC ballot closes

6 Nov: First meeting of newly-reconstituted SC

Who votes: Associate Members (one ballot per organization)

How private are votes: Seen and tallied only by DIF staff, stored in case of complications

Who can stand on the steering committee: Any DIF member can nominate and any DIF member can be nominated. Reminder: DIF members include Associate members, Contributors, individuals who signed a Feedback Agreement, and DIF liaison organizations. Nominations should be sent to nominations@identity.foundation.

To be on the ballot, nominees must, via email by Thurs 16 Oct:

accept the nomination before the nomination period closes

provide a short biography and statement describing their interest in and qualifications for serving (600 words max)

provide answers to the “platform”/philosophy questions sent out 10 Oct (600 words max); these will be published together on the DIF website and linked from the ballot

Early nominations are encouraged!

Questions to candidates: All members are encouraged to submit questions for all the SC candidates on issues regarding the management and direction of the Foundation. Email questions to nominations@identity.foundation. DIF staff will compile and synthesize the questions and provide an anonymized, representative sample to all SC candidates to prompt their statements.

Election logistics: Each DIF Associate member submits one organizational ballot with votes for up to the number of seats then contested for the Steering Committee (6). The election lasts for one week, from 23 Oct to 30 Oct. Associate members will be contacted during the nomination period to confirm each organization’s point of contact.

Wednesday, 10. September 2025

Next Level Supply Chain Podcast with GS1

What's Really on Your Plate? Insights on Food Safety Risks

When food safety is on the line, every hour counts. The FDA's new FSMA 204 rule is raising the standard for traceability, with stronger requirements designed to track products faster and manage recalls more effectively. At stake is not just compliance, but the ability to protect both consumers and businesses when outbreaks occur.

In this episode, Angela Fields from the FDA joins hosts Reid Jackson and Liz Sertl to explain what FSMA 204 means for supply chains. They explore why proactive traceability is replacing outdated reactive models and how better data is improving the speed and accuracy of investigations.

You'll also hear real stories from outbreak response, how electronic records can cut weeks off investigations, and why collaboration across the food industry makes a difference for everyone.

In this episode, you'll learn:

How FSMA 204 creates new opportunities for supply chain transparency

Why recalls work best when industry and regulators communicate clearly

What steps companies can take now to prepare for the 2028 compliance deadline

Jump into the conversation: (00:00) Introducing Next Level Supply Chain

(01:58) Angela Fields' background

(02:59) Food safety from a regulatory perspective

(04:13) How the environment affects supply chain risks

(06:59) What FSMA 204 means for industry

(08:40) Spinach outbreaks and the cost of recalls

(09:53) Why regulations also protect food companies

(14:23) How electronic records speed outbreak investigations

(17:17) Who triggers recalls and how they happen

(19:33) Best practices companies use to prevent recalls

(22:15) Where consumers can track recalls and outbreaks

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

Connect with the guests:

Angela Fields on LinkedIn

Check out the FDA

Tuesday, 09. September 2025

Center for Internet and Society

Call for Final Specification Commitments for Verifiable Credential Rendering Methods v0.9

On 8 September 2025 the Credentials Community Group published the following specification:

Verifiable Credential Rendering Methods v0.9

This is a Call for Final Specification Commitments. To provide greater patent protection for this specification, participants in the Credentials Community Group are now invited to make commitments under the W3C Community Final Specification Agreement by completing the commitment form. Current commitments are listed on the Web. There is no deadline for making commitments.

If you represent a W3C Member, please contact your Advisory Committee Representative, who is the person from your organization authorized to complete the commitment form.

If you have any questions, please contact the group on their public list: public-credentials@w3.org. Learn more about the Credentials Community Group.


Oasis Open

OASIS Approves TOSCA V2.0 Standard for Cloud Orchestration

Cisco, Hewlett Packard Enterprise, Huawei, IBM, Red Hat, SAP SE, and US NIST Advance New Version of the Standard for Flexible, Cross-Domain Cloud Services

BOSTON, MA, 9 September 2025 — Members of OASIS Open, the global open source and standards organization, have approved the Topology and Orchestration Specification for Cloud Applications (TOSCA) Version 2.0 as an OASIS Standard, a status that signifies the highest level of ratification. TOSCA v2.0 marks a significant evolution of the standard, expanding beyond its computing-centric roots to become a universal orchestration language applicable across virtually any domain, from traditional IT infrastructure to IoT deployments, edge computing, and industry-specific automation requirements.

Developed by the OASIS TOSCA Technical Committee (TC), TOSCA v2.0 greatly increases the fields of applicability of the standard and marks a fundamental shift in cloud orchestration accessibility. By eliminating the restrictive “Simple Profile” terminology that previously limited the standard to computing environments, TOSCA v2.0 empowers organizations across industries to create and contribute user-defined profiles using terminology specific to their domain and use cases. 

“TOSCA v2.0 was the result of close collaboration between experts in the fields of cloud computing, telecommunications, cloud-native software applications, and open process automation,” said Chris Lauwers, Chair of the TOSCA TC. “It comes at a time when reports show that users are increasingly adopting multi-cloud solutions, and it establishes TOSCA as the only orchestration standard that can be used across multiple application domains and across all layers of the technology stack. As a result, it will greatly simplify the integration challenges associated with today’s complex technology infrastructure.”

The new version expands TOSCA’s technical capabilities with user-defined functions for specific operational needs and formal graph traversal syntax that enables more sophisticated relationship modeling. These enhancements support complex orchestration scenarios while maintaining TOSCA’s signature clarity and structured approach. TOSCA v2.0 also introduces a comprehensive operational model for Day 2 service management. 

TOSCA v2.0 preserves the core architectural principles that have made the standard successful. It continues to treat both nodes and their relationships as first-class entities with full inheritance capabilities, ensuring backward compatibility and maintaining the robust foundation that existing TOSCA users depend on.

The TOSCA TC actively encourages global collaboration and input from stakeholders to support the standard’s ongoing evolution and adoption. To learn more about how to get involved, contact join@oasis-open.org.

The post OASIS Approves TOSCA V2.0 Standard for Cloud Orchestration appeared first on OASIS Open.

Saturday, 06. September 2025

Project VRM

Protocols for MyTerms

MyTerms (IEEE P7012 Draft Standard for Machine Readable Personal Privacy Terms, unpacked here) has a simple conceptual structure that is open to many different protocols and roles for them. Note the arrows in this graphic:

Protocols are required for those.

Here is an alphabetized list of some protocols that I know so far, and what I think they might do (given my incomplete knowledge across all of them). Note that the standard never says “user,” which has subordinate and dependent implications. It calls the first party a “person” or an “individual,” and the second party an “entity.”

A2A Protocol — “An open protocol enabling communication and interoperability between AI agents, giving them a common language – irrespective of the framework or vendor they are built on.” More here.

ActivityPub — Can publish or reference a MyTerms URI in actor metadata or message extensions so follows/interactions can happen under the person’s terms.

AT Protocol — Can include a MyTerms pointer in profile schemas or event metadata so interactions can be logged under the proffered terms.

Beckn Protocol — Can carry a MyTerms URI (or the terms JSON) in discovery/order messages and bind acceptance in the async ACK/NACK flow.

DIDComm v2 — Can attach MyTerms as a claim/document in DID-to-DID messages; the counterparty signs/acks to bind the contract.

GNAP — Can pass a MyTerms URI/hash in the grant/interaction; record acceptance alongside the grant.

HCP (Human/Hyper-Capability Protocol) — Called (at that link) “a user-owned, secure, and interoperable preference layer that grants individuals granular, revocable control over how their data steers AI systems,” it can store a MyTerms reference in the person’s preference set, gate releases on acceptance, and optionally include the URI/hash in OAuth flows to enable audit.

HTTP Message Signatures (RFC 9421) — Can bind MyTerms to specific HTTP exchanges by signing requests/responses that include a terms reference.

HTTPS — This is generic transport. It can attach or link MyTerms in headers/body and have the counterparty echo/ack to the transaction log.

JLINC — Designed for MyTerms-like ceremonies, it can carry a MyTerms ID/hash for “data shared under an agreement.”

Matrix — Can include a MyTerms pointer in profile state or event content so rooms/interactions are conducted under the person’s terms.

Model Context Protocol (MCP) — Can send a MyTerms URI/hash in a tool/agent handshake or call metadata, so tools operate under those terms and log acceptance.

NANDA (Internet of AI Agents) — Can expose MyTerms during agent discovery/handshake and in registry metadata so agents negotiate under the person’s terms.

Nostr — Can include a MyTerms reference in profile/event tags so relays and clients can honor and log acceptance.

OAuth 2.0 — Can carry MyTerms as a parameter or in a request object, recording consent/acceptance with the access transaction.

OpenID Connect — Can include a MyTerms URI/hash as a claim (e.g., in the ID token) or request object, with RP/OP logging acceptance.

Solid — Can host the person’s MyTerms in their wallet (formerly called a pod) and require apps or services to transact under those terms for resource access.

UMA 2.0 — Can treat MyTerms as a policy at the resource server and share only with parties that have accepted the person’s terms.

Web Linking (RFC 8288) — Can advertise a MyTerms URI via Link: headers or a /.well-known/ location for discovery and binding.
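
To make the flavor of this concrete, here is a minimal sketch of the Web Linking case in Python. To be clear about assumptions: as far as I know, P7012 does not define a “myterms” link relation, a digest parameter, or this URI; all three are invented here purely for illustration.

```python
# Sketch only: the "myterms" relation type, the digest parameter, and the
# URI below are illustrative assumptions, not anything specified by P7012.
import hashlib

TERMS_URI = "https://example.org/alice/terms/v1.json"  # hypothetical

def terms_link_header(terms_body: bytes) -> str:
    """Build an RFC 8288 Link header pointing at the person's terms,
    pinning their exact content with a SHA-256 digest so both parties
    can log precisely what was proffered."""
    digest = hashlib.sha256(terms_body).hexdigest()
    return f'<{TERMS_URI}>; rel="myterms"; digest="sha-256={digest}"'

def discover_terms(link_header: str) -> str | None:
    """Pull the first MyTerms pointer out of a Link header, if present."""
    for part in link_header.split(","):
        if 'rel="myterms"' in part:
            return part.split(";")[0].strip().strip("<>")
    return None

terms = b'{"purpose": "service provision only, no third-party sharing"}'
header = terms_link_header(terms)
print(header)                  # what the person's agent would proffer
print(discover_terms(header))  # what the counterparty would extract
```

The same pointer-plus-hash pattern would carry over to most of the protocols above: the URI says where the terms live, and the hash binds both parties to the exact version accepted.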

Please give me additions, corrections, and improvements.  And forgive the need for all of those changes. I think it’s important at this stage to get a list of possible protocols out there, and to get the discussion rolling. Thanks!


OwnYourData

OwnYourData @ Semantics 2025

OwnYourData @ Semantics 2025: Privacy-preserving Data Sharing for Renewable Energy Communities

On September 3, 2025, we took part in Semantics 2025 in Vienna – specifically at the 2nd NeXt-Generation Data Governance (NXDG) Workshop. Together with our partners, we presented results from a current research project:

“A Configurable Anonymisation Service for Semantically Annotated Data: A Case Study on REC Data”.

Why anonymisation matters

With the energy transition and the growing adoption of Renewable Energy Communities (RECs), there is an increasing demand for data exchange – both within energy communities and with external stakeholders. However, energy data is highly sensitive: smart meter data can reveal detailed behavioral patterns.

The challenge: How can energy data be shared without compromising privacy?

Our solution: A configurable anonymisation service

As part of the USEFLEDS project, we developed an open, online anonymisation service that integrates seamlessly with our Semantic Overlay Architecture (SOyA).

Key features:

Semantic annotation of data to make privacy rules explicit and machine-readable.

Rule-based anonymisation pipelines that automatically apply generalisation and randomisation.

Configurable via YAML files, without requiring advanced programming knowledge.

Available as SaaS or On-Premises – with open-source code and Docker images for maximum transparency and reproducibility.

Try it now: anonymiser.ownyourdata.eu

Evaluation: Privacy vs. data value

For the evaluation of the service, we worked with synthetic datasets representing energy communities in Burgenland, Austria. The main question was how to achieve a sufficient level of protection without destroying the analytical utility of the data. The evaluation was based on k-anonymity, complemented by a similarity measure to also assess the effectiveness of randomisation. The results show that sufficient anonymisation was achieved in all tested scenarios: no dataset remained uniquely attributable to a single person, while the data could still be used meaningfully for analysis. This demonstrates that our approach provides a solid balance between privacy protection and data value.
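
As a toy illustration of those two ingredients (rule-based generalisation and a k-anonymity check), here is a small, self-contained Python sketch. The field names, rules, and records are invented for this example; the actual service works on semantically annotated data and is configured via YAML rather than code.

```python
# Toy sketch, not the USEFLEDS service itself: generalise quasi-identifiers
# with simple rules, then measure k-anonymity on the result. All fields and
# values are invented for illustration.
from collections import Counter

def generalise(record: dict, rules: dict) -> dict:
    """Apply generalisation rules, e.g. truncating a postcode or
    bucketing a numeric smart-meter reading."""
    out = dict(record)
    for field, rule in rules.items():
        if rule["op"] == "truncate":
            out[field] = str(out[field])[: rule["keep"]] + "*"
        elif rule["op"] == "bucket":
            out[field] = (out[field] // rule["width"]) * rule["width"]
    return out

def k_anonymity(records: list[dict], quasi_ids: list[str]) -> int:
    """k is the size of the smallest group of records sharing the same
    quasi-identifier values; higher k means stronger protection."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

rules = {"postcode": {"op": "truncate", "keep": 2},
         "daily_kwh": {"op": "bucket", "width": 5}}
raw = [{"postcode": "7000", "daily_kwh": 12},
       {"postcode": "7001", "daily_kwh": 14},
       {"postcode": "7013", "daily_kwh": 9}]
anon = [generalise(r, rules) for r in raw]
print(anon, "k =", k_anonymity(anon, ["postcode", "daily_kwh"]))
```

In the evaluation described above, k-anonymity is complemented by a similarity measure to assess the effectiveness of randomisation, which this sketch does not model.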

Embedding in the European regulatory framework and outlook

The developed solution is closely aligned with European regulations. While the GDPR (2018) emphasizes the protection of personal data and the rights of individuals, the Data Governance Act (2022) focuses on trustworthy ecosystems and the role of neutral intermediaries. Anonymisation serves here as a key instrument to connect both levels: it protects individual privacy while at the same time providing the foundation for secure, compliant data sharing within data intermediaries and energy communities. This dual effect makes the approach highly relevant in practice.

In the next phases of the project, we plan to further expand the anonymisation service. This includes the integration of additional techniques to more flexibly handle different data types. In addition, the system will provide key performance indicators and risk assessments directly with the results, enabling users to better understand the effectiveness of the applied anonymisation. Finally, we will investigate how the service performs in large-scale productive environments to ensure its suitability as a building block in real-world data intermediary infrastructures.

Workshop Paper:

A Configurable Anonymisation Service for Semantically Annotated Data: A Case Study on REC Data (PDF)

The post OwnYourData @ Semantics 2025 appeared first on www.ownyourdata.eu.

Friday, 05. September 2025

DIF Blog

DIF Newsletter #53

September 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News

2. Working Group Updates

3. Special Interest Group Updates

4. User Group Updates

5. Announcements

6. Community Events

7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Launches New Trusted AI Agents Working Group

DIF is launching the Trusted AI Agents Working Group, chaired by Nicola Gallo, Andor Kesselman, and Dmitri Zagidulin, to define an interoperable stack for trustworthy, privacy-preserving, and secure AI agents. As autonomous agents gain real-world responsibility in composing tools, making decisions, and exchanging verifiable data, this working group will build specifications, reference implementations, and governance patterns for enabling high-trust agent ecosystems using robust mechanisms for identity, authority, and governance.

The initial focus of the group will be exploratory and experimental, starting from use cases and a taxonomy of delegation mechanisms and patterns. The first formal work item will be a report evaluating use cases for agents (that make explicit dynamics of authorization), primarily focusing on the object capability school of distributed authorization. Future work may include standards for agentic identification mechanisms (applying DID and Verifiable Credential prior art), interoperability libraries for popular LLM frameworks, runtime trust enforcement workflows, and prototyping of human-to-agent delegation patterns. The working group particularly welcomes active participation from specification authors, security researchers, LLM infrastructure maintainers, and identity experts as it addresses the unique requirements of agentic workflows.

Stay tuned for meeting information!

DIF Labs Beta Cohort 2 Show & Tell

As we enter September, DIF Labs Beta Cohort 2 projects are in their final development phase, preparing for their showcase event later this month. The three selected projects have made significant progress on cutting-edge privacy-preserving technologies. Each project team has been receiving dedicated mentorship from DIF's expert community and is committed to delivering standards, protocols, implementations, or reports that will benefit the entire decentralized identity ecosystem.

👉 Register for DIF Labs Show & Tell, open to the public!

Join DIF Leadership: Executive Director & Steering Committee

This is your opportunity to shape the future of DIF! We have an upcoming Steering Committee election AND we're accepting applications for the next Executive Director term!

Steering Committee: The Steering Committee plays a crucial role in defining DIF policies and strategy. We will have 6 seats open for election. Stay tuned for election details, including the call for nominations (all members), the nominees’ “platform” statements, and the voting process (Associate Members, one ballot per organization). Don’t miss your chance to participate in this important process that will guide DIF’s future.

Executive Director: As we move towards the end of the 2025 Executive Director term, and current ED Kim falls sway to the siren call of building, we have an exciting opportunity to set the direction of DIF! We are accepting applications now. See application details here, and send questions to jobs@identity.foundation.

Introducing “Scaling AI DIFferently”: Misha Deville’s New Blog Series on AI and Decentralized Identity

DIF Ambassador and Vidos Co-Founder Misha Deville has launched her blog series "Scaling AI DIFferently", exploring the critical intersection of artificial intelligence and decentralized identity technologies.

Misha examines how decentralized identity infrastructure serves as the foundation for building trust and accountability in autonomous AI systems. As AI agents become increasingly sophisticated, she investigates the essential role that verifiable digital identities play in ensuring responsible AI deployment at scale.

In this thought-provoking series, Misha addresses fundamental questions about identity verification, trust frameworks, and accountability mechanisms in AI-driven environments—tackling challenges at the heart of next-generation digital infrastructure.

Discover Misha's insights on how decentralized identity standards can help shape a more trustworthy AI future. Start reading here.

🛠️ Working Group Updates

Browse our active working groups here

Hospitality & Travel Working Group

The Hospitality & Travel Working Group has emerged as one of DIF's most active groups, conducting extensive work on the HATPro specification for traveler preferences and identity management. The group is currently developing comprehensive schemas for food preferences, dietary restrictions, accessibility requirements, and pet-related travel information, and engaging subject matter experts across various travel sectors. Subscribe to the H&T Working Group blog for updates.

👉 Learn more and get involved

Creator Assertions Working Group

The Creator Assertions Working Group made significant strides in media industry identifier standards and identity assertion frameworks. Major developments include:

Media Identifier Guidance: Developed comprehensive guidance for using metadata assertions with media industry identifiers, focusing on Dublin Core metadata fields and external authoritative data sources

Identity Assertion Evolution: Continued refinement of identity assertion requirements to support manifest updates and improve data-model flexibility

The group's work on attribution assertions and metadata standards continues to advance the state of content authenticity and provenance tracking.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ work item achieved significant milestones in privacy-preserving cryptographic protocols:

Pseudonym Systems: Advanced work on cryptographic pseudonyms and blind signatures, with collaboration on general articles about cryptographic privacy techniques

Performance Optimization: Completed performance testing and optimization work, with Rust implementations showing significant speed improvements over JavaScript

Post-Quantum Research: Discussed post-quantum cryptography implications and integration with existing BBS+ protocols

Test Vector Development: Continued development of comprehensive test vectors for the 0.9 draft release

The team's focus on practical implementation and standardization continues to drive adoption of privacy-preserving credential technologies.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused on standardization processes and method evaluation refinement:

W3C Charter Development: Continued work on the proposed W3C DID Methods Working Group charter, addressing concerns about scope and blockchain inclusion

Method Evaluation Process: Refined the DIF recommendation process for DID methods, with did:webvh currently in its 60-day comment period

Method Champion Coordination: Established clearer processes for method champions and active development requirements

Future Method Pipeline: Identified upcoming methods for evaluation, including did:webplus, did:cheqd, and did:scid

The group's DID method evaluation process ensures high-quality standards while maintaining transparency in the assessment process.

👉 Learn more and get involved

Identifiers and Discovery Working Group

The DIF Identifiers and Discovery Working Group advanced work on did:webvh (did:web + Verifiable History) and a new DID attested resources specification. Key progress includes deploying a sandbox server for did:webvh testing and proposing changes to DID resolution specifications to improve URL handling. The team discussed simplifying the webvh specification while preserving core functionalities and explored implementation details for blockchain DIDs generally, as well as key rotation patterns (and verification patterns for rotated keys) in verifiable credentials, with plans to continue collecting feedback through September 28th.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced protocol development and explored new application areas:

Binary Encoding Support: Progressed work on CBOR implementation for more efficient message encoding

AI Agent Communications: Explored applications of DIDComm for AI agent-to-agent communications and autonomous system interactions

Protocol Comparisons: Conducted detailed analysis comparing DIDComm with other protocols like OpenID for VC in various use cases

Supply Chain Applications: Discussed new protocols for supply chain data management and product-recall systems

👉 Learn more and get involved

Claims & Credentials Working Group

The Credential Schemas team made significant progress on standardization and community engagement:

Community Schemas Initiative: Launched framework for organizations to contribute verifiable credential schemas to a shared repository

Schema Standardization: Advanced work on aligning basic “person” schemata with schema.org standards while maintaining compatibility with existing frameworks

👉 Learn more and get involved

DIF Labs Working Group

DIF Labs Beta Cohort 2 projects entered their final development phase with impressive progress:

QES Project: Advanced development of Qualified Verifiable Credentials combining legal enforceability with privacy preservation

Revocation Analysis: Completed comprehensive analysis of privacy-preserving revocation mechanisms with practical implementation guidance

Anonymous Multi-Sig: Progressed work on anonymous multi-signature verifiable credentials for group-authentication scenarios

All projects are preparing for their final showcase event in September, with open-source implementations ready for community adoption. Register for DIF Labs Show & Tell

👉 Learn more and get involved

🌎 Special Interest Group Updates

Browse our special interest groups here

DIF Africa SIG

The Africa SIG continued its focus on identity challenges and solutions in Africa and beyond. The recent meeting featured the "Has Needs" project, a resource sharing protocol designed for humanitarian spaces, building on work since the Haiti earthquake of 2010.

The SIG's focus on humanitarian applications showcases the potential for decentralized identity to address critical social challenges.

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted comprehensive discussions on regional digital identity initiatives, including:

Korean Foreign Visitor System: Detailed presentation on digital identity solutions for foreign visitors to South Korea, including visa digitization and medical tourism applications

Regional Interoperability: Explored challenges and opportunities for cross-border identity verification and credential recognition

Privacy Frameworks: Discussed regulatory approaches across different jurisdictions and their implications for digital identity adoption

Medical Tourism Applications: Examined specific use cases for digital credentials in healthcare and travel scenarios

The group's focus on practical, cross-border applications demonstrates the global relevance of decentralized identity solutions.

👉 Learn more and get involved

DIF Hospitality & Travel SIG

The Hospitality & Travel SIG showcased cutting-edge applications of decentralized identity in travel:

Agentic Commerce Platform: Featured comprehensive demonstration of AI-powered travel booking systems using decentralized identity for secure agent-to-agent communications

Corporate Travel Innovation: Explored applications for corporate travel management with AI agents handling complex booking scenarios

Industry Integration: Demonstrated practical implementations with major travel industry partners and payment systems

Future Vision: Outlined roadmap for AI-driven, personalized travel experiences built on decentralized identity foundations

The SIG's presentations highlighted the transformative potential of combining AI agents with decentralized identity for seamless travel experiences.

👉 Learn more and get involved

📖 User Group Updates

DIDComm User Group

The DIDComm User Group explored practical implementations and emerging applications:

AI Integration: Extensive discussions on integrating DIDComm with AI agents and home automation systems

Technical Compatibility: Clarified DIDComm’s compatibility with various cryptographic methods including BBS+ signatures

Bluetooth Applications: Explored local mesh networking applications and proximity-based communications

Implementation Challenges: Addressed practical questions about protocol implementation and deployment

The user group continues to drive practical adoption of DIDComm technologies.

👉 Learn more and get involved

📢 Announcements

Join Steven McCown at Identity Week America

Steven McCown, Chief Architect at Anonyome Labs and DIF leadership member, will be speaking at Identity Week America on Day 2 at 11:00 with “Digital identity: It may be secure, but will it protect your privacy?”

Steven will examine how "modern identity systems authenticate users with advanced identity, encryption, and communication technologies" yet still impact user privacy. His presentation will illustrate the privacy risks of "phone home" architectures and demonstrate how decentralized identity "helps platform providers overcome these problems through peer-to-peer identifiers and verification methods that don't require issuer participation."

The session will also explore how recent Utah legislation is creating new models for privacy-oriented digital identity in government systems.

Don't miss this essential discussion on building truly privacy-preserving identity solutions.

Join Nick Price and Doug Rice at HTNG Connect Europe

Join DIF H&T community leaders Nick Price and Doug Rice at HTNG Connect Europe in The Hague for two sessions on decentralized identity in travel and hospitality! Nick Price, CEO of NetSys Technology Ltd., will present "Redefining Trust: The Role of Decentralized Identity in the Future of Hospitality" (12:30-13:00, Mesdag Ballroom), exploring how decentralized digital identity offers a new paradigm for secure, guest-centric experiences and seamless travel solutions.

Doug Rice, chair of the DIF Hospitality & Travel Working Group, will join Nick for the interactive breakout session "Decentralized Identity in Action: Use Cases, Standards, and Industry Collaboration" (14:00-14:45, Red Room). This will be an open discussion on real-world pilots, emerging use cases, and the development of standard traveler profile schemas designed to support interoperability and privacy across the travel ecosystem.

Blog Series Exploring AI and Decentralized ID

DIF has kicked off the "Scaling AI DIFferently" blog series featuring guest articles from DIF Ambassador and Vidos Co-Founder Misha Deville on decentralized identity's critical role in building trust in agentic systems. Start here.

DIF Labs Beta Cohort 2 Final Showcase - September 23rd

Mark your calendars for the DIF Labs Beta Cohort 2 final showcase on September 23rd. Each project team will have 7 minutes to present their work followed by 8 minutes of Q&A. The event will feature:

Live demonstrations of privacy-preserving credential technologies

Open-source implementations ready for community adoption

Learning opportunities with DIF’s expert community

Register here: https://luma.com/849iikfj

📅 Community Events

Upcoming Industry Presentations

DIF community members will be presenting at several major industry events in September and October:

HTNG Connect in The Hague: Nick Price and Douglas Rice are presenting at HTNG Connect Europe.

Identity Week America: Steve McCown is speaking at Identity Week America.

IIW October 2025: Many in the DIF community will be attending the Internet Identity Workshop in October.

Community Contributors

We continue to appreciate the dedicated contributions of our community members who drive DIF's technical work forward. Special recognition goes to the chairs and active participants of our most active working groups, whose consistent engagement and technical expertise make our progress possible.

🆔 Get involved! Join DIF

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

🐦 Follow us on Twitter/X

💻 Join us on GitHub

📺 Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation if you need more information.

Thursday, 04. September 2025

FIDO Alliance

PC Mag: Lose Your Device, Lose Your Accounts? Not If You Back Up Your Passkeys

Passkeys are more secure than passwords since they’re tied to a device, but what if you lose your phone? The trick lies in how you generate passkeys in the first place.

Imagine you’ve created a host of passkeys on your iPhone or Android phone. If you lose your phone or it no longer works, what happens to the passkeys on your device? Are they gone? Can you get them back? Fear not. There’s a way to set up your passkeys so that they’re tied to your account and can follow you wherever you go.

The trick lies in how you generate passkeys in the first place. By using a password manager that supports passkeys, Google Password Manager, or Apple’s iCloud Keychain, you can save passkeys to your account and sync them across all your devices. If you lose your phone or upgrade to a new one, your passkeys will be available once you sign in. Here’s how this works.


Velocity Network

Enabling the Golden Age of Workforce Data

The post Enabling the Golden Age of Workforce Data appeared first on Velocity.

Wednesday, 03. September 2025

FIDO Alliance

Wired: How Passkeys Work—and How to Use Them

Passkeys want to create a password-free future. Here’s what they are and how you can start using them.

Passwords suck. They’re hard to remember, but worse is playing the ever-evolving game of cybersecurity whack-a-mole with your most important accounts. That’s where passkeys come into play. The so-called “war on passwords” has taken off over the past two years, with titans like Google, Microsoft, and Apple pushing for a password-less future that the FIDO Alliance (a consortium made to “help reduce the world’s over-reliance on passwords”) has been trying to realize for over a decade.

Like it or not, you’ll be prompted to create a passkey at some point, and you likely already have. That’s a good thing, as passkeys aren’t only much easier to use than a traditional password, they’re also a lot safer. Here’s everything you need to know about using them.


MyData

Why a decentralised web index puts people first?

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. A decentralised web index […]

Tuesday, 02. September 2025

FIDO Alliance

Passkeys Are Not Broken. The Conversation About Them Often Is

Nishant Kaushik, Chief Technology Officer, FIDO Alliance

Every few months, like clockwork, a talk or article appears claiming that new research has uncovered a “vulnerability” with passkeys.  This can understandably raise concern for executives and product leaders looking to uplift their authentication frameworks. But these reports have a pattern: they highlight opportunities for exploitation in the environment where passkeys are used, not any vulnerability in passkeys themselves.

Passkeys are FIDO authentication credentials that leverage public key cryptography. The authentication protocol relies on the user having control of their private key, which is generated on the user’s device (their smartphone, their FIDO Security Key, etc.) and is never shared with the service they are authenticating to (all the service receives and saves is the corresponding public key). That design makes passkeys inherently resistant to phishing, credential stuffing, and large-scale data breaches. Breaking the security model of passkeys would require stealing the private key itself, something cryptographically and practically infeasible without compromising the device in some manner.
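
As a deliberately simplified illustration of that challenge-response design, here is a sketch using the pyca/cryptography package. This shows the bare cryptographic principle only; real passkeys use the WebAuthn and CTAP protocols, which add origin binding, attestation, and user verification that are not modelled here.

```python
# Bare-bones challenge-response sketch of the passkey principle.
# Not WebAuthn/CTAP: no origin binding, attestation, or user verification.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrolment: the private key is generated on the user's device and never
# leaves it; the service stores only the corresponding public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
service_public_key = device_private_key.public_key()

# Login: the service sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it with the private key...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature with the stored public key.
try:
    service_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Note what a breach of the service would yield under this model: only public keys, which cannot be replayed as credentials anywhere else.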

Where the “Breaks” Actually Happen

When researchers announce they’ve “broken passkeys,” what they usually mean is that they’ve compromised something else in the operational environment:

Browser vulnerabilities that let malicious extensions hijack sessions or impact user behavior.

Device compromises where malware takes control of the endpoint.

Application weaknesses in how the authentication flow is integrated.

To be clear, these are real risks, but these are risks for any authentication solution (in addition to other secure tools such as encrypted messaging apps and VPNs). They are not flaws in passkeys themselves. Rather, they are examples of broader environmental compromise which can be mitigated with well-known security controls and policies that IT teams have been deploying for years.

Do Not Confuse Headlines with Reality: Passkeys Work as Intended

No reports have found vulnerabilities in the cryptography or the technical standards underpinning passkeys. What’s being demonstrated by researchers are scenarios where, if the user’s environment is already compromised, attackers may be able to misuse otherwise secure credentials or circumvent the secure authentication process. That’s a meaningful security discussion, and a good reminder that while passkeys are the gold standard for secure authentication, they don’t eliminate the need to have a comprehensive security program. 

Our Commitment to Security and Research

The FIDO Alliance is deeply committed to advancing security through ongoing research, rigorous testing, and collaboration with our members and the broader security community. Our members are actively exploring the impact of emerging technologies like post quantum cryptography, and emerging threats like deepfakes. We also welcome engagement with security researchers who approach their work responsibly, as constructive collaboration helps us strengthen our specifications, certification programs, and implementations. Sensationalist headlines may help a few to market their products or services, but the real win for strong, phishing-resistant authentication is when we combine forward-looking research with open, responsible dialogue. That’s at the heart of the Alliance’s ethos.

The Bottom Line

For anyone responsible for product, security, or compliance, here’s what this means when it comes to adopting passkeys:

Stay focused on fundamentals: Passkeys eliminate entire classes of attacks (phishing, credential theft, reuse) that drive the majority of breaches today.

Adopt thoughtfully: Pay attention to the integration and rollout plans, following guidance and best practices with special attention to fallback models.

Pair with environmental protections: Continuing to strengthen your security program remains essential, especially focusing on strong endpoint security, browser governance, and app hardening.

Lean on certification: Certified implementations ensure consistency and reduce integration risk across platforms and devices.

Passkeys represent one of the most significant advances in digital identity security in decades, and they work as intended. Headlines suggesting otherwise often sensationalize research that demonstrates something we’ve known forever: no system is immune if the environment it runs in is compromised. Passkeys remain the best path forward to reducing fraud, lowering breach risk, and building customer trust in a digital-first world. 

Thursday, 28. August 2025

Project VRM

On Being Agentic

This appears atop a DuckDuckGo search. A few years ago, numbers 1 and 2 would have been down next to number 6.

I wrote a chapter on Agency in The Intention Economy because back then (2012) the word mostly meant an insurance or advertising business. The earlier meaning, derived from the Latin agere, meaning “to do,” had mostly been forgotten.

Now agency is everywhere, and is given fresh meaning with the adjective agentic.

We can thank AI for that. The big craze now is to have AI agents for everything, and to make all kinds of stuff “agentic,” using AI.

Including each of us. We should all maximize our agency with our own personal AI.

With that in mind, and thinking toward upcoming conferences on AI (and our own VRM Day, this coming October 19th), I just added this section to the VRM Development Work page in our wiki:

Personal AI

Balnce.ai † “Your personal AI, your loyal agents and a network that makes your data work for you.”

Base.org “Base is built to empower builders, creators, and people everywhere to build apps, grow businesses, create what they love, and earn onchain.”

Decentralized AI Agent Alliance “…offers a compelling alternative, giving individuals sovereignty, including ownership of their identity and data.”

GPTbuddy “Human in the loop AI” ([1] @GPTbuddy) is in development by FractalNetworks.

Kwaai “a volunteer-based AI research and development lab focused on democratizing artificial intelligence by building open source Personal AI.” Also, KwaaiNet “AI running distributed on a P2P fabric,” now (July 2025) with Verida “Create and deploy personalized AI agents with secure data connectors, custom knowledge bases, and configurable inference endpoints.”

NANDA: The Internet of AI Agents “Pioneering the Future of Agentic Web.”

The AI Alliance “building and advancing open source AI agents, data, models, evaluation, safety, applications and advocacy to ensure everyone can benefit.”

Please add more, or make corrections on what’s there. If you don’t have editing privileges, just write to me and I’ll make the changes. Thanks!


Oasis Open

Invitation to comment on UBL v2.5 CSD01

Public Review ends - September 28th

OASIS and the UBL TC are pleased to announce that UBL v2.5 CSD01 is now available for public review and comment. 

The UBL TC facilitates interoperability in business data exchange by defining a semantic library and syntax bindings of business documents.

The documents and all related files are available here:

Universal Business Language Version 2.5

Committee Specification Draft 01

20 August 2025

XML: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.xml (Authoritative)

HTML: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.html

PDF: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.pdf

Additional Artifacts

Code lists for constraint validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/cl/

Context/value Association files for constraint validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/cva/

Document models of information bundles: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/mod/

Default validation test environment: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/val/

XML examples: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xml/

Annotated XSD schemas: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xsd/

Runtime XSD schemas: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/xsdrt/

Endorsed XSD schemas for forward validation: https://docs.oasis-open.org/ubl/csd01-UBL-2.5/endorsed/xsd/

The ZIP containing the complete files of this release is found in the directory:

https://docs.oasis-open.org/ubl/csd01-UBL-2.5/UBL-2.5.zip

How to Provide Feedback

OASIS and the UBL TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

The public review is now open and ends September 28, 2025 at 23:59 UTC.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility located here.

Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy applicable especially to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. 

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the UBL TC can be found at the TC’s public home page located here.

The post Invitation to comment on UBL v2.5 CSD01 appeared first on OASIS Open.

Wednesday, 27. August 2025

DIDAS

The Big Comparison: From Flop to Personal Digital Fortress

The E-ID proposal of 2019 was a classic own goal: privatized, centralized, and with privacy on the back burner. The result in March 2021: a resounding 64% No. The Swiss people didn’t want a “private flavour” national digital identity. Rightly so.

Now, with the new BGEID (2025), Switzerland has made a serious course correction. Instead of half-baked privatization, there is now full state authority, privacy-by-design, and open source. The E-ID resides only on the smartphone in the user’s wallet – not in some cloud run by whoever. Age verification? No longer “name + date of birth correlated with a profile,” but simply “over 18.”

Even more exciting: this is no longer just about an E-ID, but about a trust infrastructure. Driver’s licenses, diplomas, tickets – all digital, tamper-proof, in your own wallet. A federal network-of-networks in which municipalities, cantons, universities, companies, or associations can issue their own credentials.

In short: Switzerland is finally translating its political DNA motto “diversity through federalism” into the digital realm. And this was not cooked up in a backroom, but through a participatory process with NGOs, business, academia, and civil society. Democratically legitimized, technically modern, internationally interoperable.

But here’s the real point: the E-ID is only the key – the lock and the doors are built by the ecosystem. A digital identity alone is of little use if it cannot be applied anywhere. Everyday value emerges only when government, business, and society actively use this infrastructure – with credentials we still carry around on paper today: from residence certificates to debt enforcement extracts, from bank guarantees to e-prescriptions and medical reports. Only then does SWIYU become a universal tool for secure, privacy-preserving, and efficient processes.

The ecosystem does more than create convenience – it strengthens trust: less fraud, less bureaucracy, more automation, genuine freedom of choice, and strict data minimization. That’s the difference compared to the old proposal – and compared to global login solutions offered by tech giants. Without an ecosystem, the E-ID remains a key without doors. With a broadly established ecosystem, it becomes a digital public service, open to innovation and anchored in Swiss values.

For those who want to dig deeper: we’ve compared the 2019 proposal and the new 2024 BGEID in detail (to the best of our knowledge). The document shows in black and white why the new E-ID is not a reheated version, but a true paradigm shift – from a private product to a state trust infrastructure.

The bottom line?

For citizens: more sovereignty, stronger privacy, less “Big Brother” feeling.

For the economy: legal certainty, less dependency, more innovation.

For Switzerland: digital infrastructure as a public service – as essential as roads, bridges, water, and electricity.

Resources:

Read/Download our Comparison: BGEID 2021 vs. 2025

Next Level Supply Chain Podcast with GS1

What Retailers Can Learn from Carter's RFID Journey

Carter's just pulled off what many retailers thought was impossible. In only three months, the iconic children's apparel brand rolled out RFID technology across 700 stores, improving accuracy on every item and making life easier for both store teams and customers. In this episode, Gina Maddaloni and Anna Marie Blackburn from Carter's join hosts Reid Jackson and Liz Sertl to dis…

Carter's just pulled off what many retailers thought was impossible. 

In only three months, the iconic children's apparel brand rolled out RFID technology across 700 stores, improving accuracy on every item and making life easier for both store teams and customers. In this episode, Gina Maddaloni and Anna Marie Blackburn from Carter's join hosts Reid Jackson and Liz Sertl to discuss how RFID became central to Carter's retail operations, what it took to win buy-in across the company, and how it is improving both inventory management and customer experience.

You'll also hear how Carter's uses RFID to cut payroll costs for year-end inventory by 50 percent, why the rollout became a recruiting tool for store teams, and where the company sees new opportunities to extend RFID into supply chain operations.

In this episode, you'll learn:

How Carter's achieved one of the fastest RFID deployments in retail

Why RFID is no longer "too complex" or "too expensive"

What's next as Carter's expands RFID use into its supply chain operations

Jump into the conversation: (00:00) Introducing Next Level Supply Chain (01:29) Anna Marie and Gina's backgrounds (03:52) What RFID technology means for retail (06:47) The process of rolling out RFID across Carter's stores (13:21) RFID's impact on Carter's operational efficiency (17:49) RFID as a recruiting tool for store teams (18:54) Asset protection benefits and peace of mind (19:34) Expanding RFID into DC operations (21:35) What's next: Carter's move toward serialization (23:01) Advice for companies starting their RFID journey (24:02) Busting RFID myths: cost, complexity, and adoption (26:29) Favorite tech beyond RFID (29:22) What Gina and Anna Marie want to learn next

Connect with GS1 US: Our website - www.gs1us.org GS1 US on LinkedIn

Connect with the guests: Gina Maddaloni on LinkedIn Anna Marie Blackburn on LinkedIn Check out Carter's

Learn more about the GS1 US Solution Partner Program: https://www.gs1us.org/industries-and-insights/partners

Tuesday, 26. August 2025

Energy Web

Energy Web Foundation Announces Technology Partnership with BlockDeep Labs

Energy Web Foundation (EWF), the nonprofit accelerating the energy transition with open-source, decentralized technologies, has announced a new partnership with BlockDeep Labs, a leading blockchain engineering firm specializing in Polkadot and Substrate Through this collaboration, BlockDeep Labs will support the development of new features on the Energy Web X (EWX) parachain, with an initial focu
Energy Web Foundation (EWF), the nonprofit accelerating the energy transition with open-source, decentralized technologies, has announced a new partnership with BlockDeep Labs, a leading blockchain engineering firm specializing in Polkadot and Substrate.

Through this collaboration, BlockDeep Labs will support the development of new features on the Energy Web X (EWX) parachain, with an initial focus on liquid staking, multi-token support, and decentralization analysis.

Both organizations share a strong commitment to open-source innovation and community collaboration, aiming to deliver impactful solutions that can accelerate the digital energy transition.

“We’re excited to partner with Energy Web Foundation on advancing the Energy Web X chain and shaping its future. At BlockDeep Labs, our mission is to lower barriers for users by building robust and efficient Web3 solutions. Collaborating with EWF gives us the opportunity to apply that expertise in a sector where trust, scalability, and interoperability are critical. Together, we aim to deliver infrastructure that not only strengthens the Energy Web ecosystem but also showcases how Polkadot SDK technology can drive real-world impact in energy and beyond.” — Gautam Dhameja, Founder, BlockDeep Labs
“Partnering with BlockDeep Labs brings deep Polkadot expertise to the Energy Web ecosystem at a critical moment in our roadmap. Their support on the Energy Web X parachain will accelerate key features like liquid staking and multi-token functionality, while ensuring our chain remains secure, scalable, and open-source. Together, we are building the digital infrastructure needed to enable the energy transition at global scale.” — Mani Hagh Sefat, CTO, Energy Web
About Energy Web

Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

About BlockDeep Labs

BlockDeep Labs is a Berlin-based blockchain engineering company with deep expertise in Polkadot SDK, tooling, and blockchain innovation.

Energy Web Foundation Announces Technology Partnership with BlockDeep Labs was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 25. August 2025

DIF Blog

Scaling AI DIFferently: Translating Promise into Value

In the second of a series of guest posts by DIF Ambassador Misha Deville, Misha explores how decentralized identity provides the missing trust infrastructure needed for AI systems to scale delegation, personalization, and content authenticity. Read the first post in the series, "The Missing Growth Lever." Everyone

In the second of a series of guest posts by DIF Ambassador Misha Deville, Misha explores how decentralized identity provides the missing trust infrastructure needed for AI systems to scale delegation, personalization, and content authenticity. Read the first post in the series, "The Missing Growth Lever."

Everyone is talking about the promises of AI. Faster decisions, tailored experiences, intelligent agents. But delivering on that promise requires more than powerful models. It requires trusted infrastructure.

AI systems create value through delegation, personalisation, and decision-making. Yet these capabilities can’t scale securely, or with genuine consent, without the ability to prove who the system is working for (context), what it’s been authorised to do (consent), or whether its outputs can be trusted (credibility). Decentralised identity models and verifiable credentials can provide the missing infrastructure to ensure AI systems can deliver on their promises.

Agentic Delegation at Scale

“To scale humans, we deploy agents. But to scale agents, we must manage them like humans.” — Director Product Management, Writer 1

Agentic AI is no longer a future proposition, it’s a present bottleneck. Organisations are deploying more autonomous agents, but they’re hitting “scaling cliffs” as agents multiply faster than their supervision and governance systems can manage them. Unlike APIs or scripts, agents are semi-autonomous systems with memory, tool access, and significant sensitive data exposure. Without clear scopes, audit trails, or authority checks, most delegation turns into vast liability surfaces, and a system of patchwork permissions quickly becomes unmanageable.

As Huang et al. write, “Failure to address the unique identity challenges posed by AI agents operating in Multi-Agent Systems (MAS) could lead to catastrophic security breaches, loss of accountability, and erosion of trust in these powerful technologies” 2.

Most AI systems today are still rooted in prediction, but this is rapidly shifting toward delegated action. The agentic AI market has already reached $13.8 billion in 2025, and as agents start taking action, the question becomes: who is acting, on whose behalf, and under what authority?

‘Authenticated delegation’ enables third parties to verify that:

“(a) the interacting entity is an AI agent, (b) that the AI agent is acting on behalf of a specific human user, whether pseudonymised or identifiably known, and (c) that the AI agent has been granted the necessary permissions to perform specific actions”. 3

This sounds simple, but delegated ‘trust’ doesn’t end with a single permission. Especially in asynchronous flows or agent-to-agent communication, it must be enforced dynamically over time.

Most systems today use token-based models, like OAuth, that assume trust within a known and bounded hierarchy is established when a token is issued, and meaningful wherever invoked. Once the token is delivered, however, there is no enforcement that the agent will continue to act within its authorised scope over time, or within the domain where the token is meaningful. A delegation-first model like ZTAuth, or other authorization languages based on Object Capabilities (e.g. ZCaps, UCAN, Hats Protocol), adds runtime checks, making sure agents are still trusted, acting on behalf of the right user, and following the right rules every time they take an action, rather than only until a token expires.
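To make the distinction concrete, here is a minimal sketch of a delegation-first runtime check in Python. All names (Capability, authorize, the record fields) are illustrative assumptions, not taken from ZTAuth, ZCaps, UCAN, or any other specification; the point is only that authority is re-evaluated at every invocation rather than once at token issuance.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Illustrative capability record; field names are hypothetical.
    @dataclass
    class Capability:
        agent_id: str           # the agent the capability was granted to
        on_behalf_of: str       # the human or organisation the agent represents
        allowed_actions: set    # explicit scope of delegated authority
        expires_at: datetime
        revoked: bool = False

    def authorize(cap: Capability, agent_id: str, action: str) -> bool:
        """Re-check authority on every action, not just at issuance."""
        now = datetime.now(timezone.utc)
        return (
            not cap.revoked                     # delegation can be withdrawn mid-flight
            and cap.agent_id == agent_id        # the caller is the delegated agent
            and action in cap.allowed_actions   # the action is inside the granted scope
            and now < cap.expires_at            # the authority has not lapsed
        )

    cap = Capability("agent-42", "did:example:alice", {"read_calendar"},
                     datetime(2026, 1, 1, tzinfo=timezone.utc))
    assert authorize(cap, "agent-42", "read_calendar")
    assert not authorize(cap, "agent-42", "send_payment")  # out of scope

Because the check runs per action, revoking the capability or narrowing its scope takes effect immediately, instead of waiting for a token to expire.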

Similar ideas are emerging in the MIT Computer Science and AI Lab’s framework for ‘Identity-Verified Autonomous Agents’, which introduces cryptographic proofs of authority and full auditability into multi-agent workflows 2. Meanwhile, projects like the A2A protocol are building agent registries to support discovery, entitlements, and secure agent-to-agent communication across trust boundaries and OAuth-style enterprise hierarchies 4.

At the standards level, DIF’s Trusted AI Agents Working Group is building open specifications to support these use cases. Their work spans data models, object capability frameworks, interoperability libraries and runtime trust enforcement patterns. This is about more than securing agent-to-agent interactions. It’s about enabling a full lifecycle of trust, from credentialed instantiation of agents to delegated (logged and fully-auditable) execution, all the way through to forensic audit and remediation in the worst case scenario.

Hyper-personalisation that works

AI-driven hyper-personalisation promises to unlock entirely new value in digital experiences. McKinsey reports show meaningful increases in customer engagement and spend when personalisation is done right 5. But it can just as easily backfire. A 2019 Gartner study found that 38% of users will walk away from a brand if personalisation feels “creepy” 6, and recent research with Gen Z confirms the duality that personalisation is welcome up until it crosses the line 7.

That line is defined by context and consent. When AI systems infer personal data from web-scraping profiles, browser fingerprinting, adtech data, and other opaque signals, they significantly undermine trust and user agency. When they employ algorithmic transparency, ethical frameworks, and user-authorised data inputs, they mitigate the risks of conscious and unconscious mistrust and backlash 8.

Verifiable credentials in this context offer a solution that can give AI systems structured, consent-based attributes that users explicitly approve. This helps shift personalisation away from prediction and toward permission. It reduces the risk of misfires and irrelevant outputs, and increases both system reliability and user trust.
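A toy sketch of the difference, assuming a simple attribute dictionary (the names and structure are made up for illustration; real systems would use signed verifiable presentations rather than plain dictionaries):

    # Hypothetical consented-attribute bundle: the holder approves specific
    # claims, and the personalisation engine may read only those.
    consented_profile = {
        "seat_preference": "aisle",     # explicitly shared by the user
        "dietary": "vegetarian",        # explicitly shared by the user
    }
    inferred_signals = {
        "browsing_history": ["..."],    # available to the platform, but not consented
    }

    def personalise(consented: dict, requested_keys: list) -> dict:
        # Use only attributes the holder approved; ignore inferred signals.
        return {k: consented[k] for k in requested_keys if k in consented}

    print(personalise(consented_profile, ["seat_preference", "income_band"]))
    # -> {'seat_preference': 'aisle'}  (unapproved attributes are never released)

The shift is from the system guessing (prediction) to the user granting (permission).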

The travel industry is a clear example of the opportunity gap. Identity and preference checks occur at nearly every step, yet the ecosystem remains fragmented 9. Travellers routinely overshare sensitive data multiple times, with little visibility into where it’s stored or how it’s used. Providers, in turn, struggle to deliver seamless or personalised services without duplicating traveller effort or violating privacy regulations.

That’s starting to change. Initiatives like IATA’s One ID aim to eliminate repetitive ID checks using biometric-backed credentials, creating a more secure, contactless experience. Live pilots by SITA and Indicio, in partnership with Delta Airlines and the Government of Aruba, have also shown how digital travel credentials can streamline identity verification at check-in, boarding, and border control.

These foundational shifts pave the way for more advanced personalisation use cases. With credential infrastructure in place, providers can begin supporting traveller-owned profiles that store personal preferences, enable selective data sharing, and allow AI agents to act on a traveller’s behalf. The DIF Hospitality & Travel Working Group is developing schemas to support this, with traveller profiles that are dynamic, revocable, and built for interoperability. As Nick Price notes, when preferences are embedded in credentials and shared on the traveller’s terms, personalisation becomes possible while still preserving privacy and trust 10.

Decision-Making and Sense-Making in Synthetic Noise

Identity fraud isn’t new, but AI has supercharged its scale, speed, and sophistication. In 2025, 1 in 20 ID verification failures is already being linked directly to deepfakes, while synthetic audio and video forgeries in fraud attempts have increased 20% and 12% respectively, year-over-year 11. National Security Agencies of the US, UK, Canada, and Australia have warned that the quality and pace of AI-generated forgeries “have reached unprecedented levels and may not be caught by traditional verification methods” 12.

Ironically, fraud detection is one of AI’s strongest use cases. But its success depends on the quality of input data. Risk models tend to rely on patterns from historical data to flag anomalies. If the data is synthetic, spoofed, or unverifiable, the model can learn the wrong patterns or miss the threat altogether. It’s a clear case of “attacker’s advantage,” since automated attacks are almost free to launch at brute-force scale. What’s worse, AI adversaries are improving at impersonation, so hallucinated and forged content is proliferating across search engines, media outlets, and public discourse, contaminating LLMs at the lowest level of training data.

“As AI agents grow increasingly adept at mimicking human behavior - crafting text, creating personas, and even replicating nuanced human interactions - it becomes harder to maintain digital environments genuinely inhabited by real people.” 2

Detecting ‘what’s real’ at scale now requires cryptographic certainty. Verifiable credentials offer a solution to the ‘garbage in, garbage out’ problem by allowing systems to verify data attributes without exposing raw personal data. Content credentials, as standardised by C2PA, provide tamper-evident metadata that can trace authorship, modification history, and usage rights across files, namespaces, and industry associations. This helps prevent both fraud in high-stakes transactions and reduce the risk of model “pollution” by synthetic content 13.
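Real content credentials are signed manifests with far richer semantics than this, but a minimal hash-chain sketch conveys the tamper-evidence idea: each provenance entry commits to a digest of the one before it, so rewriting history breaks verification. Everything below is an illustrative assumption, not the C2PA format.

    import hashlib, json

    def digest(record: dict) -> str:
        # Deterministic digest of a record; real manifests are signed, not just hashed.
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    # Toy provenance chain: each entry commits to its predecessor.
    origin = {"author": "did:example:studio", "action": "created", "prev": None}
    edit = {"author": "did:example:editor", "action": "cropped", "prev": digest(origin)}

    def verify_chain(entries: list) -> bool:
        prev_hash = None
        for entry in entries:
            if entry["prev"] != prev_hash:
                return False  # history was altered or an edit went unrecorded
            prev_hash = digest(entry)
        return True

    assert verify_chain([origin, edit])
    origin["author"] = "did:example:impostor"   # rewriting authorship...
    assert not verify_chain([origin, edit])     # ...breaks the later commitment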

These mechanisms are quickly moving from optional to operational. California’s SB 942, set to take effect in 2026, will require that all AI-generated or AI-altered content be disclosed and tied to an immutable record of provenance. As Erik Passoja writes, “Compliance is just the on-ramp… the real destination is an authenticated digital ecosystem.” 14. Infrastructure built on signed manifests, cryptographic consent, and watermark durability won’t just prevent fraud, it will underpin new forms of value, from automated licensing to portable and just-in-time reputation.

In all of these cases, a reliable identity layer isn’t a ‘nice to have’, but a prerequisite for trust, adoption, and real-world value. Decentralised identity and verifiable credentials provide the infrastructural foundation that lets AI scale and deliver new opportunities. DIF’s working groups are tackling these challenges head-on, from authenticated AI agents to verifiable travel profiles and content authenticity.

The next article in this series will dive into the work of the Content Authenticity Initiative and DIF’s Creator Assertions Working Group, exploring how open standards are enabling AI to be used confidently in media, preserving trust, provenance, and creative integrity.

Find out more about DIF’s working groups here:

Creator Assertions Hospitality and Travel DIF Labs

If you’d like to stay updated on the launch of DIF’s Trusted AI Agents Working Group, reach out to contact@identity.foundation.

Endnotes

1. M. Shetrit (2025). “Supervising the synthetic workforce: Observability for AI agents requires managers, not metrics”. Writer.
2. Huang et al. (2025). “A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control”. arXiv.
3. South et al. (2025). “Authenticated Delegation and Authorized AI Agents”. arXiv.
4. A2A Project. “Agent Registry - Proposal”. GitHub.
5. McKinsey & Company (2025). “Unlocking the next frontier of personalised marketing”.
6. Gartner (2019). “Gartner Survey Shows Brands Risk Losing 38 Percent of Customers Because of Poor Marketing Personalization Efforts”.
7. Peter et al. (2025). “Gen AI – Gen Z: understanding Gen Z’s emotional responses and brand experiences with Gen AI-driven, hyper-personalized advertising”. Frontiers in Communication.
8. Park, K. & Yoon, H. (2025). “AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI”. Nature.
9. DIF (2025). “DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group”.
10. Dock (2025). “How Digital ID is Reshaping the Travel Industry”.
11. Bondar, Ira (2025). “Real-time deepfake fraud in 2025: Fighting back against AI-driven scams”. Veriff.
12. NSA et al. (2025). “Content Credentials: Strengthening Multimedia”.
13. Adobe (2025). “Content Credentials”.
14. Passoja, Erik (2025). “From Compliance to Prosperity”. LinkedIn.

Friday, 15. August 2025

FIDO Alliance

Dark Reading: NIST Digital Identity Guidelines Evolve with Threat Landscape

In a bid to improve overall security of the identity ecosystem, the National Institute of Standards and Technology updated its Digital Identity Guidelines earlier this month. The first revision since 2017, many […]

In a bid to improve the overall security of the identity ecosystem, the National Institute of Standards and Technology updated its Digital Identity Guidelines earlier this month. Although this is the first revision since 2017, many organizations should be able to implement the updated guidelines without much difficulty as part of their identity strategy.

Attackers are always sharpening their skills to bypass organizations’ identity and access management (IAM) protocols – the key to gaining critical access – and artificial intelligence (AI) is making phishing attacks even more effective, while deepfakes are tricking even the most security-savvy minds. New authentication measures, such as passwordless technologies, exist, but implementation challenges have hindered adoption.


Research Snipers: Microsoft Authenticator Deletes All Stored Passwords, Pushes Users Toward Passkeys

As announced, Microsoft today deletes all stored passwords from its Authenticator app. Users have to back up their access data before it is lost permanently. Other functions of the app are […]

As announced, Microsoft is today deleting all stored passwords from its Authenticator app. Users have to back up their access data before it is lost permanently. Other functions of the app remain available.

Users must back up their passwords

For users of the Microsoft Authenticator, a critical deadline has arrived: as announced at the beginning of May, all stored passwords are irrevocably deleted from the app on August 1, 2025. Anyone who does not back up or transfer the data permanently loses access to the stored passwords.

The changeover has been taking place gradually for months. Since June 2025, it has no longer been possible to save new passwords in the app or to import them from external sources. This affects both manual entries and synchronization with other services. In July 2025, the autofill function was deactivated, so the app no longer automatically fills in login fields on websites or in apps.


Security.World: HID Unveils Next-Generation FIDO Hardware And Centralized Management At Scale

HID, a worldwide leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials—now powered by the new Enterprise Passkey Management (EPM) solution— designed to help organizations deploy and manage passkeys at the […]

HID, a worldwide leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials—now powered by the new Enterprise Passkey Management (EPM) solution— designed to help organizations deploy and manage passkeys at the enterprise scale. 

New research from FIDO Alliance shows that while 87% of enterprises are adopting passkeys, nearly half of those that are yet to deploy cite complexity and cost concerns as primary barriers. HID’s solution streamlines the shift to passwordless authentication.


Oasis Open

OASIS Staff Honors Gershon Janssen with the Distinguished Contributor Award

It is our privilege to announce that Gershon Janssen has been awarded the OASIS Distinguished Contributor Award — an honor he has richly deserved for many years.  With his recent announcement of transitioning off the OASIS Board of Directors, it’s a fitting time to reflect on Gershon’s extraordinary contributions. Since joining OASIS in 2007, he […] The post OASIS Staff Honors Gershon Janss

Recognizing 18 years of exceptional leadership, global advocacy, and dedication to OASIS open standards

It is our privilege to announce that Gershon Janssen has been awarded the OASIS Distinguished Contributor Award — an honor he has richly deserved for many years. 

With his recent announcement of transitioning off the OASIS Board of Directors, it’s a fitting time to reflect on Gershon’s extraordinary contributions. Since joining OASIS in 2007, he has given more than his time — he has offered steady leadership, sharp insight, and unwavering commitment to our mission. Over the past 18 years, he has served in numerous roles, including:

OASIS Member since 2007, elevated to Sponsor Member status in 2022
Board Member since 2012, serving as Secretary (2012–2016) and as President and Chairman (2016–2025)
Secretary and Contributor for the PMRM and WS Calendar Technical Committees
Contributor to the IDtrust and WS-I Member Sections, as well as the PKCS #11, KMIP, MQTT, VIRTIO, BPEL4People, OData, and many other Technical Committees
Member of the OECD Internet Technology Advisory Council and Chair of its Security and Privacy Working Group
Board of Managers member for the OASIS Open Development Foundation
Treasurer for the OASIS Open Europe Foundation
Active member of the Process, Finance, Staffing, Governance, and Technology Committees

Much of Gershon’s work has been in roles that rarely receive the recognition they deserve — yet are vital. His steady hand has been a stabilizing force during turbulent times, including leadership transitions in which he shouldered the responsibilities of both Chair and Interim Executive Director on multiple occasions, some lasting from several months to nearly a year.

Gershon’s dedication has gone far beyond formal responsibilities. On his own time — and at his own expense — he has served as an ambassador for OASIS around the globe, personally sponsoring activities when resources were stretched, representing our work as a TC member at key events, and acting as a liaison to numerous industry and government bodies. 

Through it all, Gershon led with quiet strength, patience, and a deep respect for the collaborative process, OASIS staff, and the members. His influence has helped guide OASIS through challenges and successes alike, ensuring our community remained strong and forward-looking. 

Words cannot truly capture our appreciation for Gershon’s years of service, nor can they measure the impact he has made. We thank him not only for his leadership, but for his enduring dedication to OASIS and the global open standards community. 

Please join us in congratulating Gershon on this well-deserved recognition. 

— OASIS Staff

The post OASIS Staff Honors Gershon Janssen with the Distinguished Contributor Award appeared first on OASIS Open.


Velocity Network

Velocity Network Foundation Joins Global Leaders at Inaugural Geneva Conference on Digital Trust 

The post Velocity Network Foundation Joins Global Leaders at Inaugural Geneva Conference on Digital Trust  appeared first on Velocity.

Velocity Charitable Foundation Elevates Statewide Approach to Verifiable Credentials and Open Ecosystems

The post Velocity Charitable Foundation Elevates Statewide Approach to Verifiable Credentials and Open Ecosystems appeared first on Velocity.

Velocity Network Technology Becomes “Verii” Under the Linux Foundation’s LF Decentralized Trust Initiative 

The post Velocity Network Technology Becomes “Verii” Under the Linux Foundation’s LF Decentralized Trust Initiative  appeared first on Velocity.

What is a Trust Framework? 

The post What is a Trust Framework?  appeared first on Velocity.

Thursday, 14. August 2025

Energy Web

From Off-Chain Execution to On-Chain Trust: Inside Energy Web’s Consensus Overhaul

Energy Web has rolled out a major upgrade to the consensus mechanism governing Worker Nodes on the Energy Web X (EWX) network. This enhancement aligns Energy Web X and the Worker Node Networks with a core vision: using secure on-chain consensus and rewards to validate off-chain computations while incentivizing the highest level of node performance. Why This Matters A growing number of appli
Energy Web has rolled out a major upgrade to the consensus mechanism governing Worker Nodes on the Energy Web X (EWX) network. This enhancement aligns Energy Web X and the Worker Node Networks with a core vision: using secure on-chain consensus and rewards to validate off-chain computations while incentivizing the highest level of node performance.

Why This Matters

A growing number of applications in the energy sector and beyond are leveraging decentralized Worker Node networks on Energy Web X. For example, Green Proofs for Bitcoin (GP4BTC) uses EWX to verify green Bitcoin mining, and the recently launched Carbon-Aware Nomination system orchestrates compute workloads to maximize the use of clean energy. As these and other DePIN (Decentralized Physical Infrastructure Networks) use-cases expand, it becomes ever more critical to enforce accurate and verifiable computation in a secure, scalable manner.

This consensus upgrade directly addresses that need. It introduces several improvements to how EWX validators reach agreement on Worker Node outputs and distribute rewards: changes that align incentives with performance and ensure the integrity of off-chain execution. This upgrade empowers enterprises to pair their off-chain computational systems with highly configurable on-chain reward mechanisms, creating strong business and financial incentives for both enterprises and community members operating Worker Nodes to actively contribute to these decentralised systems.

Setting The Bar on Performance

Only the most consistent, high-performing worker nodes will now be rewarded for their contributions. The upgrade introduces an SLA performance threshold, a minimum standard for correct vote submissions that a node must meet to qualify for any rewards. In other words, a worker’s voting accuracy over each reward period has to exceed a predefined percentage (set on a per-Solution Group basis) for that node to earn a share of the rewards. A “correct” vote means the worker’s submitted result from their off-chain execution aligns with the majority consensus for a given round (as determined by EWX validators). If a node’s correct vote rate falls below the threshold, it won’t receive rewards for that period, no matter how many votes it cast.

This change pushes every Worker Node operator to perform above a clearly defined bar. Energy Web X validators now track each worker’s voting performance across rounds (via on-chain metadata) and calculate the percentage of that worker’s votes that matched the accepted consensus. Only those exceeding the SLA threshold are deemed eligible. Among those that qualify, rewards are weighted by accuracy and stake, meaning those who contribute more correct results and have more stake on the table earn proportionally more. See the reward formula below:

worker_reward = (worker_correct_votes × user_stake) / total_weighted_correct_votes × voting_reward_per_block × active_blocks

Where:

worker_reward: the amount of EWT distributed to the worker node operator as their active reward for participation in eligible voting rounds within the concluded reward period.

worker_correct_votes: the number of correct (consensus-aligned) votes submitted by the worker node in the eligible voting rounds within the concluded reward period.

user_stake: the amount of tokens locked by the operator upon registration.

total_weighted_correct_votes: the sum of correct votes weighted by stake across all worker nodes ( Σ(correct_votes_i * stake_i) ).

voting_reward_per_block: the amount of tokens allocated to voting rewards per block (configured by the solution registrar).

active_blocks: the number of blocks spanning the reward period.

Worked example:

A Solution Group contains 150 operators. At the end of a reward period, 100 operators submitted sufficient votes to exceed the SLA threshold and are eligible for rewards.

From the eligible 100 operators, the average correct votes during the reward period is 100 and the average user stake is 1000.

Therefore: total_weighted_correct_votes = 100 * 100 * 1000 = 10,000,000

Worker Node A had 110 correct votes with a stake of 1000. There are 7200 active blocks in a voting round, and the voting rewards per block are set to 1 token.

Therefore: worker_node_A_reward = (110 * 1000) / 10,000,000 * 1 * 7200 = 79.2 ≈ 79 tokens
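The formula is easy to sanity-check in code. A minimal sketch in Python, using the variable names from the formula above (this illustrates the arithmetic only, not the on-chain implementation):

    def worker_reward(correct_votes, stake, total_weighted_correct_votes,
                      voting_reward_per_block, active_blocks):
        # Stake-weighted share of the per-period voting reward pool.
        share = (correct_votes * stake) / total_weighted_correct_votes
        return share * voting_reward_per_block * active_blocks

    # Reproducing the worked example: 100 eligible operators averaging
    # 100 correct votes each, with an average stake of 1000.
    total_weighted = 100 * 100 * 1000                         # 10,000,000
    print(worker_reward(110, 1000, total_weighted, 1, 7200))  # -> 79.2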

Consensus Validated Via Quorum & Majority Threshold

The consensus mechanism employs a two-tier validation process to guarantee both sufficient participation and accuracy of submissions before finalising the result on-chain. Energy Web X requires two conditions for a voting round to produce a valid result:

Quorum: A minimum percentage of eligible worker nodes must participate by submitting their votes.

Majority Threshold: Within the specified quorum of participants, a majority of workers, over the defined threshold, must agree (i.e. submit matching results) for the result to be accepted as the round’s consensus.

If either condition isn’t met, the round is marked Unresolved. These thresholds optimise for security and trust without sacrificing scalability. Quorum ensures that a sufficiently broad sample of the network contributes to each consensus decision, while the majority threshold ensures accuracy and trust in the result. Only when both conditions are satisfied will the Energy Web X validator set record the final result on-chain.
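In code, the two-tier check might look like the following sketch (the vote encoding and thresholds are illustrative assumptions, not the actual validator logic):

    from collections import Counter

    def finalize_round(votes: dict, eligible: int,
                       quorum_pct: float, majority_pct: float):
        """votes maps worker id -> submitted result. Returns the accepted
        result, or None if the round is Unresolved."""
        if len(votes) / eligible < quorum_pct:
            return None  # quorum not reached: not enough participation
        result, count = Counter(votes.values()).most_common(1)[0]
        if count / len(votes) < majority_pct:
            return None  # no sufficiently strong agreement among participants
        return result

    votes = {"w1": "0xabc", "w2": "0xabc", "w3": "0xdef"}
    print(finalize_round(votes, eligible=4, quorum_pct=0.5, majority_pct=0.66))
    # -> '0xabc' (3 of 4 eligible voted; 2 of 3 participants agree, about 67%)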

Improving Withdrawal Delay

The upgrade also refines how and when worker node operators can withdraw their stake from the network. The withdrawal delay is the period a user must wait between submitting an unsubscription request and receiving their tokens. This means every action (vote) has a consequence: correct votes will always be rewarded, while the delay protects against malicious actors who might otherwise submit false votes and then withdraw to evade penalties.

With the upgrade, withdrawal delays are measured in reward periods instead of blocks. In practice, after an operator initializes a withdrawal, they must wait a defined (by the registrar) number of additional reward periods before withdrawing their collateral. During this delay period, the node can still participate in voting rounds and continue to earn rewards, except for the final reward period in which the stake is released and voting eligibility ends. This ensures all pending rounds are properly settled and any rewards or penalties processed before a node can exit the network.

For example, a solution group has a withdrawal delay of 2 reward periods. A subscribed worker node operator votes in Reward Period 1 then submits an unsubscribe request in Reward Period 2. The operator would need to wait for Reward Period 3 and 4 to conclude, before receiving their funds back during a block (specific timing depends on system load) in Reward Period 5. The operator can participate (vote) in Reward Period 3 and 4 but not in 5.
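Reading the example as arithmetic, under the stated assumption that the delay is counted in whole reward periods after the request (an illustrative model, not the pallet’s actual code):

    def withdrawal_schedule(request_period: int, delay_periods: int):
        """Returns (last reward period the node may vote in,
        reward period in which funds are released)."""
        last_voting_period = request_period + delay_periods  # e.g. 2 + 2 = 4
        release_period = last_voting_period + 1              # funds return in 5
        return last_voting_period, release_period

    print(withdrawal_schedule(request_period=2, delay_periods=2))  # -> (4, 5)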

The Outcome: Accurate, Scalable and Secure Off-Chain Compute

Together, these enhancements bring Energy Web X’s consensus mechanism in line with the vision of incentivizing top-performing worker nodes to deliver accurate, verifiable outputs from off-chain computation.

What does this mean for Energy Web ecosystem participants? Solution owners and their users can have complete trust in the outputs of decentralised compute. Energy Web X’s blockchain will securely handle the heavy lifting of coordinating nodes, validating results, and distributing rewards, all in the background. This allows developers and enterprises to focus on what they do best: building high-value applications, confident that a robust, trusted decentralized compute layer is reliably powering their workloads behind the scenes.

About Energy Web

Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

How to Get Involved

Review the docs
Join the conversation

From Off-Chain Execution to On-Chain Trust: Inside Energy Web’s Consensus Overhaul was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 13. August 2025

Velocity Network

Velocity Network Foundation – MidYear Updates

The post Velocity Network Foundation – MidYear Updates appeared first on Velocity.

Tuesday, 12. August 2025

FIDO Alliance

GB News: Microsoft will start DELETING your passwords from today, and there’s only one way to save them

Microsoft has started to delete all passwords saved in its Authenticator app — and if you want to make sure you’re not locked-out of your favourite websites and apps, there’s one way […]

Microsoft has started to delete all passwords saved in its Authenticator app — and if you want to make sure you’re not locked-out of your favourite websites and apps, there’s one way to save your previous logins.

It might seem a little extreme, but the decision to start wiping passwords from its users has been months in the making. Microsoft has been slowly winding down Microsoft Authenticator, a free mobile app developed by Microsoft for Android and iOS that lets you securely sign-in to online accounts using two-factor authentication (2FA) or passwordless logins.


ZDNet: Syncable vs. non-syncable passkeys: Are roaming authenticators the best of both worlds?

Like it or not, a replacement for passwords — known as passkeys — is coming your way, if it hasn’t already. The three big ideas behind passkeys are that they cannot be guessed […]

Like it or not, a replacement for passwords — known as passkeys — is coming your way, if it hasn’t already. The three big ideas behind passkeys are that they cannot be guessed in the way passwords often can (and are), the same passkey cannot be re-used across different websites and apps (the way passwords can), and you cannot be tricked into divulging your passkeys to malicious actors, often through techniques such as phishing, smishing, quishing, and malvertising.


Intelligent CISO: HID unveils next-generation FIDO hardware and centralised management at scale

HID, a leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials – now powered by the new Enterprise Passkey Management (EPM) solution – […]

HID, a leader in trusted identity and access management solutions, has announced a new line of FIDO-certified credentials – now powered by the new Enterprise Passkey Management (EPM) solution – designed to help organizations deploy and manage passkeys at the enterprise scale. 

New research from FIDO Alliance shows that while 87% of enterprises are adopting passkeys, nearly half of those that are yet to deploy cite complexity and cost concerns as primary barriers. HID’s solution streamlines the shift to passwordless authentication.

The next phase of HID’s passwordless authentication roadmap gives enterprises choice, flexibility and speed to deploy FIDO without compromising user experience or security posture. The expanded portfolio delivers phishing-resistant authentication with enterprise-grade lifecycle management, making scalable passwordless security accessible to organisations of all sizes. The solution works seamlessly across diverse work environments while reducing IT support requirements through centralised visibility and control.


ZDNet: What if your passkey device is stolen? How to manage risk in our passwordless future

Part of the “passkeys are more secure than passwords” story is derived from the fact that passkeys are non-human-readable secrets — stored somewhere on your device — that even you have […]

Part of the “passkeys are more secure than passwords” story is derived from the fact that passkeys are non-human-readable secrets — stored somewhere on your device — that even you have very limited access to. 

OK, so what happens to those passkeys if your device is stolen?



Friday, 08. August 2025

DIF Blog

Scaling AI DIFferently: The Missing Growth Lever

In the first of a series of guest posts by DIF Ambassador Misha Deville, Misha sets the stage for an underappreciated business problem one layer deeper than traditional User Experience analyses. The Trust Ceiling The paradox is that we want AI systems that understand us, but only on our own

In the first of a series of guest posts by DIF Ambassador Misha Deville, Misha sets the stage for an underappreciated business problem that sits one layer deeper than traditional User Experience analyses.

The Trust Ceiling

The paradox is that we want AI systems that understand us, but only on our own terms. We’re uneasy when AI appears to use personal data without permission, yet we’re frustrated when it doesn’t ‘get us’ on the first try (even though it would need lots of personal data to fill in that context). This might seem like a philosophical problem, but I am pointing to an underappreciated technical one. Without embedding trust negotiation into the system design itself, through features such as transparent data flows and consent mechanisms, these systems cannot negotiate subtext and context; without this capability, we will fail to see AI adoption (and utility) scale as promised.

Friction in trust, and therefore adoption, stems from the lack of transparency and control, not from ‘poor model intelligence’. 66% of U.S. shoppers, for example, say they would not allow AI to make purchases for them, even if it meant securing better deals, because “consumers suspect AI is working for the retailer, not them. Until that trust gap closes, AI will remain a product discovery tool” 1.

Public patience with AI product rollouts is already wearing thin. Global brands like Duolingo are facing significant backlash after announcing ‘AI-first’ strategies 2, and the perception gap between those building AI systems and the addressable markets expected to adopt them is ever widening. 51% of adults interviewed in a recent Pew Research Center study said they were more concerned than excited about AI, which contrasts sharply with the mere 15% of ‘AI experts’ who held this view 3.

The generative-AI race to market that made AI more powerful and more personal, also made systems more opaque, leaving users in the dark about how and why decisions are made. In the absence of transparency, even well-intentioned systems lose public trust. The WEF frames this as a missed opportunity to build new markets on a healthy footing: “Without transparency, AI systems might be used that are not value-aligned at a level that is acceptable to users, or users might distrust AI systems that actually are sufficiently value-aligned because they have no way of knowing that.” 4

To embed trust into the system itself, people need to be able to verify:

who the AI is working for (context), what it’s allowed to do (consent), and whether the output can be trusted (credibility).

The solution therefore isn’t more intelligent AI models; it’s a complementary, verifiable identity layer. An identity layer doesn’t just enable trust at the level of individual users trusting individual interfaces; it also supports a healthier marketplace overall by making AI systems traceable, comparable to one another, and accountable to the users and services they interact with. It helps users, in aggregate, trust AI more.

Verifiable credentials, built on a backbone of decentralised digital identifiers, enable cryptographic proofs of user and object attributes. Context, consent, and credibility become programmable, and the user experience transforms from coercive to empowering.

The Market Opportunity

Digital identity and AI are fundamentally interdependent, but the current investment landscape and dominant business strategies do not reflect this. Today, AI is seen as a growth engine and identity infrastructure is seen as compliance overhead. This mental model is not just outdated, it’s economically limiting.

In 2024, global VC investment into AI-related companies exceeded $100 billion, marking an 80% increase from 2023 5. Meanwhile, investment in digital identity declined. In the UK, one of the world’s leading digital identity markets, the sector saw only $58 million in VC funding in 2024, a 69% decline from the year before 6. This stark investment gap reveals a misunderstanding of the technology stack required for trustworthy, scalable AI. 

The convergence of these technologies is both ethically necessary and commercially advantageous. An identity layer that’s fit for this new era will enable AI breakthroughs to scale with direction, grounding, and accountability. If AI is the engine, then digital identity is the navigation system. It doesn’t slow the rocket down. It ensures it lands where we need it to. 

The companies that align AI with verifiable digital identity will capture disproportionate market share where others hit trust ceilings. Strategies that capitalise on both technologies will unlock the promised value in use cases such as:  

In fintech, verified delegation allows AI agents to execute trades securely, with cryptographic proof of authority and clear audit trails.

In healthcare, patient-controlled access to verified medical records enables truly personalised care without compromising consent or privacy.

In global supply chains, AI systems can confirm the authenticity of every product and actor, preventing counterfeits, improving traceability, and automating trust at scale.

Digital identity is not a constraint on AI. It’s the infrastructure that allows it to scale responsibly and profitably. Standards like W3C’s Verifiable Credentials Data Model provide a vital foundation for AI systems to verify context, consent, and credibility without compromising privacy. The companies that embrace this interdependence will define the next wave of digital infrastructure. Those that don’t will risk building impressive technology that nobody trusts enough to use.

In the next article, we’ll explore how decentralised identity unlocks the real-world value of AI, starting with three core functions behind its promises: personalisation, delegation, and decision-making.

If you’d like to stay updated on the launch of DIF’s AI-focused working group, reach out to contact@identity.foundation.

Endnotes

1. Charleston, SC (2025). “Two-Thirds of Shoppers Say ‘No’ to AI Shopping Assistants – Trust Issues Could Slow Retail’s AI Revolution”. Omnisend.
2. Braun, S. (2025). “Duolingo’s CEO outlined his plan to become an ‘AI-first’ company. He didn’t expect the human backlash that followed.” Fortune.
3. McClain et al. (2025). “How the U.S. Public and AI Experts View Artificial Intelligence”. Pew Research Center.
4. Dignum et al. (2024). “AI Value Alignment: Guiding Artificial Intelligence Towards Shared Human Goals”. World Economic Forum.
5. Fairview Capital (2024). “Preparing for the Agentic Era in Venture Capital”.
6. Wyman, O. (2025). “Digital Identity Sectoral Analysis 2025”. Gov.UK.

Thursday, 07. August 2025

DIF Blog

DIF celebrates v1.0 release of did:webvh, with 3 complete implementations

After years of steady and detailed work, DIF celebrates the v1.0 release of did:webvh, with three complete implementations and even a dockerized installation option.

Including a recently-donated Rust implementation by Affinidi. More below!

Wait, is that a typo? Do you mean did:web? 

I did not, but I also did not not mean did:web. did:webvh is a new DID method, but it is also specifically designed to be a more production-worthy, “grown up” version of did:web, everyone’s favorite “training-wheel” DID method and growth hack for getting VCs issued from highly-trusted issuers. Crucially, did:webvh is a “backwards-compatible upgrade” to did:web, meaning it can be consumed in "legacy mode" by any software that already resolves did:web, but upgrades the featureset and trust-model when consumers upgrade their resolution logic to take advantage of the new syntax.

It was incubated at the Identifiers and Discovery Working Group at DIF, meaning it got extensive review and input from many other DID connoisseurs deep in the trenches of DID method design and “user research” as to what use-cases developers really want DIDs for in the first place. Having previously incubated other DID methods in the Sidetree family, early versions of the KERI system, and the did:peer specifications foundational to DIDComm, DIF is happy to see another DIF-incubated DID method graduate to v1!

If you’re hearing about did:webvh for the very first time here and now, let’s start at the beginning with a few high-level differences between did:web and did:webvh:

The VH stands for Verifiable History. Each DID version links cryptographically (“chains”) back to its predecessor and ultimately to the verifiable self-certifying identifier (SCID) that is embedded in the DID identifier. The SCID is derived from the DID’s initial state. Further, each update is signed by a key authorized to update the DID.

Each valid did:webvh (the URL) can be easily and deterministically translated to a did:web. Just delete the “vh” in the method segment and the subsequent segment (the “SCID” — self-certifying identifier), and you have a valid did:web identifier. Resolve that the way you would any other did:web, and you get a valid did:web DID Document. Easy-peasy, 100% interoperability with the most widely-deployed DID method other than did:key. (See the sketch after this list.)

Verifiability is not dependent on DNS. While DNS is used for discovery and retrieval of the DID Log file that contains a did:webvh history, the cryptographic verifiability of that history is not dependent on DNS. The DID Log can be retrieved from other locations/caches and fully verifies the history of its corresponding DID to date.

Verifiability is independent of its DID Documents. A did:webvh DID Doc can contain anything the DID Controller wants it to have — different keys, key types, services and so on. The verifiability of the DID is secured by a more complex resolution mechanism, not the contents of the document. That allows the specification to be very opinionated on how the DID is secured (making interoperability easy and reliable), without limiting the purpose of the DID.

Secure DID generation can now happen off-server. This allows DID doc hosting to be separate from key management, de-risking malicious hosts and allowing for “dumber” (e.g. key-oblivious) hosting pipelines, e.g. without access to /.well-known.

“Common sense” DID URL Handling. The simple DID URL syntax <did>/path/to/{file} maps to just what one would expect: the “/path/to/” subdirectory where the <did> document is stored.

Cross-Host Portability. A DID on one host, if configured from inception to allow it, can migrate to a host on another domain and still keep its long-lived and globally-unique “SCID” and verifiable history. This further decouples web hosting from the long-term value of the SCID as a stable, host-independent UUID, which (webvh-aware) consumers can use to link and deduplicate DIDs which migrate across multiple hosts.

Optional extensions allow stronger guarantees for historical resolution and duplicity/forgery detection on the part of malicious hosts. Certificate Transparency-style “witnessing networks” (a web-of-trust approach to gossiping DID document histories, also used in the KERI key management/ID system) and/or aggressively cache-busting trusted resolvers can detect host malfeasance of various kinds.
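That did:webvh-to-did:web translation is mechanical enough to fit in a few lines. A minimal sketch (the identifier below is made up for illustration):

    def webvh_to_web(did: str) -> str:
        # did:webvh:<scid>:<host>[:<path segments>] -> did:web:<host>[:<path segments>]
        parts = did.split(":")
        assert parts[:2] == ["did", "webvh"], "not a did:webvh identifier"
        return ":".join(["did", "web"] + parts[3:])  # drop the SCID segment

    # Hypothetical identifier, for illustration only:
    print(webvh_to_web("did:webvh:QmScidValue:example.com:dids:alice"))
    # -> did:web:example.com:dids:alice

The origins of did:webvh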

From the start, did:web had some production implementations that were too load-bearing to break… and many unhappy implementers. When a given website/domain only needed one collective DID (common for issuers already highly-trusted and with highly-secured websites, e.g. institutional issuers of credentials), or even in cases where the entirety of a DID’s lifecycle could be assumed to live on the same unchanging domain (e.g., employees or appointees or departments of an institutional issuer), did:web was great, and no one needed to consider upgrades.

Corner-cases and unhappy implementers kept popping up over time, though: 

What about historical resolution of old signatures after rotations and de-activations?

What if a merger or acquisition or rebrand needs to update a domain name, but keep all its old DIDs resolving after the change?

What if students want to keep their DIDs (or at least keep old signatures validating) after they graduate and get incrementally locked out of their university’s IT systems?

What if anti-trust regulations forced portability (the way they do for phone numbers) and users wanted to keep connections after switching identity providers (or “hosts” in webvh terminology)?

What if hosts are perfectly willing to issue did:webs to hundreds of users, but cannot directly automate /.well-known/ documents for each, whether due to their JS framework, choice of contractor/service-provider, security policies, etc?

What if consumers can’t fully trust all did:web hosts to faithfully produce the same document for all callers? What duplicity-detection mechanisms or historical guarantees could be offered to those consumers? Similarly, some use-cases are too high-stakes to trust all hosts equally; some consumers were demanding baseline guarantees about how securely DID documents get hosted over DNS-addressed HTTP, beyond what is required for did:web.

A backwards-compatible “version 2” of did:web was imagined but effectively deferred for years, before work started in earnest on an heir to did:web that focused primarily on improving its trust model. As the idea kept iterating and different requirements and design goals were debated by willing co-designers, the portability guarantees came to the fore as another key goal.

This portability goal dovetailed nicely with the requirements of the Open Wallet Foundation’s ACA-Py framework, which requires DID “agents” (a kind of general-purpose “backend” for many DID and VC operations) to be smoothly swappable over time by simply updating the services property in a DID document. As ACA-Py had originally been developed for blockchain-based did:indy DIDs, there was some amount of portability, key-management, and trust-model parity that needed to be achieved, which brought a lot of other difficult design problems in tow since websites are not stateful or append-only in the way blockchains are! 

Another parity goal was achieving many of the host-independent “microledger” properties central to the KERI identifier system (and Sidetree before it), whereby the entire history of every DID can be walked to detect duplicity or other malice in any part of the system. A further KERI feature the designers wanted to achieve was KERI’s “witnessing” mechanism, inspired by Certificate Transparency, which keeps hosts honest with tamper-detection and caching. This survives in optional extensions, i.e. as a distinct mechanism that consumers and hosts can opt into for stronger guarantees in resolution.

The did:webvh method's requirements informed the design and its original name, “Trust did:web”. Namely, the eponymous goal was to achieve an HTTP-published but ultimately host-independent trust layer for DID-based interactions, without a common blockchain and with more conventional cryptography and tooling than KERI requires. Trust nerds can think of this as a move from the “CA-grounded” web trust model, the only one in which did:web is usably trustworthy and secure, to one usable in a more modern “Let’s Encrypt” trust model that the web has largely moved to in recent decades.

Playing Nice in a Multi-DID World

As mentioned above, a formative goal in the design of did:webvh was smooth interoperability with did:web and did:key, as well as properties that would normally require an append-only VDR to achieve. In contrast to many DID methods, which strive to stand out from the crowd by offering unique features enabled by their particular VDR or low-level mechanisms, this method sought instead to achieve a superset of features already in production today, to offer existing DID-based and DID-like systems a better option to migrate to or translate to. Anything permitted in the DID Core specification is available in did:webvh — no restrictions! If the early phase of DID design was one of innovation and experimentation, did:webvh strives to consolidate and unite in this phase marked by convergence and productionization.

A good example of this is how did:webvh appends a digest (which can be used as a checksum) of the DID log entry to the id property of each DID document. In the HTTP context, this makes each DID document “tamper-evident,” reducing many of the opportunities for a malicious did:web host to produce inaccurate or adulterated DID documents on behalf of a DID controller. This also makes all did:webvh DIDs “self-certifying,” in the sense that their documents’ integrity can always be checked against this checksum. Similarly, the provenance of updated DID documents, all the way back to inception, can be integrity-protected and proven by the same “SCID” (Self-Certifying Identifier) mechanism.
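
A minimal sketch of the self-certifying check, with stand-ins: the real did:webvh scheme has its own canonicalization and digest-encoding rules (and ties entries together in a log), so plain SHA-256 over key-sorted JSON here only illustrates the principle.

    import hashlib, json

    def entry_digest(log_entry: dict) -> str:
        """Digest of a canonical serialization of a DID log entry (illustrative)."""
        canonical = json.dumps(log_entry, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    def verify(log_entry: dict, claimed_digest: str) -> bool:
        # Any host-side tampering with the entry changes the digest, so a
        # consumer can detect an adulterated document without trusting the host.
        return entry_digest(log_entry) == claimed_digest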

SCIDs are a powerful design pattern that undergirds early IPFS-based DID methods like did:ipid and the Sidetree methods like did:ion and did:element, both of which were incubated in the Sidetree WG and used the IPFS identifier scheme to address each DID document by its hash. Older mechanisms of hash-identification include RFC 6920, which underwrites the recent did:ni method; newer examples include did:iscc and the growing body of Subresource Integrity specs at W3C that allow location-independent, self-certifying identifiers for slow-moving, cacheable web resources. KERI pushed SCIDs to the fore and influenced a lot of design work at ToIP, but zooming out a little, they can be seen across modern web development as a powerful counterbalance to the same-origin policy of modern web security.
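
For comparison, here is how a Subresource Integrity value is computed for an ordinary web resource. The same "digest travels with the reference" pattern is what makes SCIDs location-independent; the script URL in the comment is hypothetical.

    import base64, hashlib

    def sri_integrity(content: bytes) -> str:
        """Compute an SRI integrity attribute value for a fetched resource."""
        digest = hashlib.sha384(content).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    # Usage in HTML would look like:
    #   <script src="https://cdn.example/lib.js" integrity="sha384-..."></script>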

SCIDs are also crucial to lots of ongoing design work at ToIP and elsewhere, such as the First Person Project, which mandates the portability and self-certifying properties described above for ANY DID used across a wide patchwork of infrastructures and tooling. Does that mean any SCIDs will do? Can a did:webplus used in the OCI ecosystem be translated to a did:webvh, or to a did:scid, to let the controller of a did:webplus be part of such a bridged network?

Similarly, the optional witness and watcher capabilities defined in the did:webvh specification were designed as open-ended building-blocks. The specification defines a clean and simple technical mechanism for each capability to ensure interoperability between different applications of them, while leaving use-cases and ecosystem-specific governance questions outside the specification, where they belong. One application of these has been to backfill the did:indy concept of "endorsers" in a web-based VDR, but many other trust mechanisms or policy engines could be built by combining DID logs with witnesses and/or watchers. The did:webvh Work Item at DIF welcomes implementors with divergent use-cases and policy assumptions to collaborate on profiling these capabilities and specifying them in more detail in ongoing work.
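
As a conceptual sketch only: the specification defines the actual witness and watcher mechanics, but the essence of a watcher is comparing what a host serves to different vantage points. The mirror URLs and fetch logic below are hypothetical.

    import hashlib
    from urllib.request import urlopen

    def fetch_log(url: str) -> bytes:
        """Fetch a copy of a DID log from one vantage point (hypothetical mirror)."""
        with urlopen(url) as resp:
            return resp.read()

    def detect_duplicity(mirror_urls: list[str]) -> bool:
        """True if the host served divergent histories to different observers."""
        digests = {hashlib.sha256(fetch_log(u)).digest() for u in mirror_urls}
        return len(digests) > 1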

These broader questions of interoperability beyond the boundaries of any one DID method are increasingly being tackled in other workstreams at DIF. One of these is the DID Traits evaluative specification (which also hit V1.0 recently!), which defines shared or equivalent/translatable properties across methods to facilitate bridging and converging DID systems. Similarly, the DID Methods WG is trying to draw attention to how well DIDs can be combined or translated, as well as how “production-grade” their implementations can be tested (or even audited) to be. WebVH is the first method being evaluated for production-readiness and documentation completeness, literally setting the bar for others!

Where to from here?

Having reached v1.0, the focus will now turn to refinements and adoption, working with implementers and deployers to gather feedback and grow their numbers.

Innovators are encouraged to review the didwebvh.info website and walk through the basic step-by-step "Understanding did:webvh". Implementors should check out the Implementations page to find the did:webvh tooling you need for your initiatives, including three mature and complete implementations:

Python
TypeScript
Rust

Standards developers can jump directly to the did:webvh spec.

The Python and TypeScript implementations were both incubated by the Government of British Columbia, Canada.

The Rust implementation was recently contributed to DIF by Affinidi. We're excited by this contribution as it brings the performance, memory safety, and systems-level capabilities that make Rust increasingly popular for performance-critical applications.

These three implementations serve as mutual validation tools, continuously testing interoperability and compliance against each other to ensure robust spec maturity and cross-platform reliability.

Whether you prefer leveraging existing tooling or building from scratch, did:webvh makes both paths straightforward. The clean specification and comprehensive interoperability test suite provide clear guidance for new implementations, while the mature tooling ecosystem offers battle-tested components you can integrate immediately.

We welcome contributions from the community - whether that's enhancing existing implementations, creating new language bindings, improving documentation, or developing deployment tools that make it even easier to adopt did:webvh DIDs in production environments!

Want to learn more and get involved? Join DIF or contact us at contact@identity.foundation.

Wednesday, 06. August 2025

Internet Safety Labs (Me2B)

2022 K-12 Edtech Benchmark Revisited: Unvetted Off-the-Shelf Apps Outnumber Licensed Apps 2-to-1

A new school year is upon us and ISL wants to remind educators and school technologists that they need to take as much care scrutinizing safety risks in off-the-shelf (OTS) technologies recommended to students as they do for technologies licensed by the schools. We recently went back over the data from our 2022 benchmark and […]

A new school year is upon us and ISL wants to remind educators and school technologists that they need to take as much care scrutinizing safety risks in off-the-shelf (OTS) technologies recommended to students as they do for technologies licensed by the schools. 

We recently went back over the data from our 2022 benchmark and confirmed that most technologies pushed to K-12 students in the US are recommended (not required), unvetted off-the-shelf technologies that students use completely independently of school privacy controls or oversight.

Schools Recommend Too Many Technologies to Students 

One of the most striking findings from our 2022 US K12 EdTech Benchmark was seeing just how many technologies schools were pushing students to use.  

We ended up counting apps used in each school in three different ways. We initially looked for a single list of all the technologies that a school was either recommending or requiring. Ideally, we were looking for discrete lists of each type—recommended or required. However, most schools did not have clean, singular lists of apps, which meant we needed to hand count the number of all apps mentioned by the school and/or district websites as being used by students (“manual app count”). We hand-counted all 663 schools. Some schools, or more frequently school districts, had full app lists maintained by their IT departments, and many districts maintained lists of technologies in the Student Data Privacy Consortium (SDPC) database. We called these lists “simple aggregated lists”. Finally, many of the schools that had simple aggregated lists also indicated whether an app was “approved” or not. We called these lists of approved apps “approved technology lists”. To summarize the types of app list counts:

(1) Manual app count: Researchers hand-counted the number of apps found across school and district websites.[1] This was performed for all 663 schools.
(2) Simple aggregated list: For 222 schools, we found simple aggregated lists of apps that were larger than the manual count.
(3) Approved technology list: Of the 222 schools that had a simple aggregated list [larger than the manual count], 153 distinguished approved from unapproved apps on those lists.


As can be seen in Table 1, the manual count yielded an average of 19 apps per school, but schools with simple aggregated lists averaged 191 apps, and strangely, the subset of schools with approved apps averaged 214 apps. Yikes.  

Table 1

List Type | Average Number of Apps
Manual app count (n=663) | 19
Simple aggregated list (n=222) | 191
Approved technology list (n=153) | 214


Do schools really need to recommend 200 different apps and websites for student use? 

Recommended Versus Required Apps 

As mentioned earlier, in our manual counting of technologies we designated an app as either “required” or “recommended”. Apps were deemed “required” due to prominent presence on school websites, often with a login.[2] Similarly, custom apps that were clearly branded for the school or district were also designated as required. Thus, required apps in our research were always licensed by schools. As such, these technologies were held to greater scrutiny and vetting, and student accounts were generally provisioned and managed by the schools.[3] In this way, the school had “joint data controller” responsibilities alongside the app developer.[4]

“Recommended” technologies, however, were always off-the-shelf (OTS) technologies, which students would access or download at their own discretion, creating their own accounts independent of the school. 

We knew that the percentage of required apps was small compared to the recommended apps. But what was the breakdown of “required” versus “recommended” apps? We thought the distribution might follow the 80-20 Pareto principle: that 20% of the apps were required, and 80% were recommended. We decided to go back and run the numbers.  

Table 2 below shows the numbers for the different types of app count lists. The manual app count method failed to account for the sometimes massive, aggregated lists. Similarly, the aggregated list numbers distorted the overall data set. The bottom-line row in the table, “Manual + Approved list”, combines the manual counts for schools without simple aggregated lists with the approved technology counts [for schools that had them] to best provide a number for the national results.  

As can be seen, it was closer to 70-30 than 80-20. On average, schools were pushing nearly 58 technologies to students, with 28.9% of them being required and 71.1% being recommended.[5] The vast majority of apps schools are pushing on students are merely recommended, unvetted off-the-shelf apps. Despite the apps being “approved” by the schools in the approved technology lists, we know that in many cases the only vetting is whether or not a Privacy Policy exists. This is not a sufficient form of vetting to ensure student data privacy. Schools are subjecting students to unvetted and ungoverned technologies—sometimes more than 200 such technologies. Recall also that nearly 30% of the recommended technologies for students are neither strictly educational apps nor apps designed for children, and in the latter case, they are not covered by COPPA compliance.

Table 2

Type of App List | # Schools | Avg Total # of Technologies | Avg # of Required/Licensed Technologies | Required/Licensed, Avg % of All Tech | Avg # of Recommended/OTS Technologies | Recommended/OTS, Avg % of All Tech | Max # of Technologies
Manual App Count | 663 | 19.0 | 5.1 | 33.4% | 13.9 | 66.6% | 106
Simple Aggregated List | 222 | 190.7 | 5.5 | 9.7% | 185.2 | 90.3% | 1411
Approved Technology List | 153 | 214.3 | 5.3 | 6.3% | 209.0 | 93.7% | 1411
Manual + Approved List | 663 | 57.7 | 5.1 | 28.9% | 57.8 | 71.1% | 1411
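
To make footnote 5 concrete, the toy numbers below show how the benchmark's statistic (an average of each school's own percentage) can differ sharply from a percentage computed over pooled totals. All data here is made up purely for illustration.

    # Two hypothetical schools: (required apps, recommended apps).
    schools = [(5, 10), (5, 200)]

    # Average of per-school percentages (the method used in Table 2).
    per_school = [req / (req + rec) for req, rec in schools]
    avg_of_percentages = sum(per_school) / len(per_school)   # ~0.18 (18% required)

    # Percentage of pooled totals (a different statistic).
    total_req = sum(req for req, _ in schools)
    total_rec = sum(rec for _, rec in schools)
    pct_of_totals = total_req / (total_req + total_rec)      # ~0.045 (4.5% required)
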
Conclusions  

When we consider “edtech” as the combination of licensed and OTS technologies, as in our 2022 benchmark, a primary risk for students is the high—sometimes exceedingly high—number of unvetted off-the-shelf technologies that schools are recommending they use.[6] Until technology is reasonably safe for children, ISL recommends schools undertake the following:

1. Minimize the number of technologies recommended to students.
   a. Especially minimize the number of apps that are not specifically for children.
      i. ISL doesn’t propose (or support a paradigm of) age-gated versions of commonly recommended mixed-audience apps like news, museum, zoo, and reference apps. These apps must be made safer for children of all ages (i.e. for all of us).
2. Screen all OTS technologies recommended for student use. Ideally, these should be vetted as carefully as licensed technologies, though we know that’s not practical for all schools.
   a. Use ISL’s App Microscope to learn more about privacy risks in commonly recommended apps. Can’t find your app? Contact us at schools@internetsafetylabs.org.
   b. Recommend only apps that are COPPA certified. This won’t stop all commercial surveillance and data sharing, but it at least minimizes some data sharing.
   c. Put in place Data Privacy Agreements (DPAs) for all technologies recommended to students, i.e. for both licensed and OTS technologies. This requires some dedicated personnel to administer, but Access4Learning’s Student Data Privacy Consortium has agreement templates readily available here: https://privacy.a4l.org/.
   d. Annually audit DPAs against the actual technology behavior. This is a service that ISL has provided for one US state school board and is more than happy to provide at reasonable rates. Contact us at schools@internetsafetylabs.org.

 

Footnotes:

[1] Note that the apps found via the manual count were the apps that were audited in the research. Due to the volume of listed apps, ISL did not audit all of the apps found in the simple aggregated lists.
[2] More discussion can be found in the first findings report: “2022 K-12 EdTech Safety Benchmark National Findings – Part 1”, Internet Safety Labs, December 13, 2022, Section 7.2.1, p. 89, https://internetsafetylabs.org/wp-content/uploads/2022/12/2022-k12-edtech-safety-benchmark-national-findings-part-1.pdf
[3] These required apps also more narrowly align with traditional “EdTech” categories, whereas the recommended technologies included a large percentage of apps not intended for children.
[4] We wrote about this in a blog post from 2023 called “Data Controller Confusion in EdTech”, https://internetsafetylabs.org/blog/insights/data-controller-confusion-in-edtech/.
[5] Note that the average percentages shown in Table 2 reflect an average of each school’s percentage of required/recommended apps.
[6] Licensed technologies are also risky, especially the Community Engagement Platforms, which shared data [on purpose] with the most third-party entities and data brokers, like this app, no longer available on the app store: https://appmicroscope.org/app/1579/

The post 2022 K-12 Edtech Benchmark Revisited: Unvetted Off-the-Shelf Apps Outnumber Licensed Apps 2-to-1 appeared first on Internet Safety Labs.


Next Level Supply Chain Podcast with GS1

How Armored Transport is Going Digital

The way cash moves through the supply chain is evolving. What was once a paper-heavy process is now embracing digital transformation. In this episode, Robert Skitt, Senior Manager at Axiom Armored Transport, joins hosts Reid Jackson and Liz Sertl to discuss the shift from traditional, manual methods to more secure, efficient, and accountable digital solutions. […]

The way cash moves through the supply chain is evolving. 

What was once a paper-heavy process is now embracing digital transformation.

In this episode, Robert Skitt, Senior Manager at Axiom Armored Transport, joins hosts Reid Jackson and Liz Sertl to discuss the shift from traditional, manual methods to more secure, efficient, and accountable digital solutions. 

Financial institutions and the Federal Reserve are demanding greater transparency and control over cash movement, and armored transport teams are under increasing pressure to adapt.

Robert walks us through the process of how Axiom is modernizing armored transport, replacing handwritten logs with barcode scans and eManifests. In this episode, you'll learn:

How GS1 standards are helping digitize cash logistics

What "cash visibility" looks like in practice

Why early adoption gave Axiom a seat at the table and a competitive edge

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(04:52) What cash visibility looks like

(07:23) How GS1 standards changed the workflow

(11:32) Turning paper-heavy processes into data

(13:32) Barcode tech in armored car logistics

(15:14) How digitization improves accuracy and trust

(19:25) Advice for starting your visibility journey

(20:44) Robert's favorite technology today

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Robert Skitt on LinkedIn
Check out Axiom Armored Transport


Blockchain Commons

2025 Q2 Blockchain Commons Report

Not only did Blockchain Commons close out its major Zcash project in Q2 and start work on a big, new FROST push, but we also did continued work on many other initiatives: Working with Partners Zcash Zingo Labs FROST Ethereum Articles & Presentations Interoperability Provenance Marks Post-Quantum URs Permits in Gordian Envelope Thinking about Identity HackerNoon No Phone Home XID Core Concepts […]

Not only did Blockchain Commons close out its major Zcash project in Q2 and start work on a big, new FROST push, but we also did continued work on many other initiatives:

Working with Partners: Zcash, Zingo Labs, FROST, Ethereum. Articles & Presentations: Interoperability, Provenance Marks, Post-Quantum URs, Permits in Gordian Envelope. Thinking about Identity: HackerNoon, No Phone Home, XID Core Concepts, Fair Witness, Bitcoin Policy Summit. Web Updates: Updated Projects Page, New Envelope Seeds Page. CLI Updates: dCBOR-CLI, Envelope-CLI. Library Updates: Mass Crate Update, Argon2ID, envelope-pattern, dcbor-parse.

Working with Partners

Blockchain Commons is supported by patronage and by grants. (If you want to become a major patron and partner with us on a project, let us know; and if there are grants you think would allow us to fulfill our Gordian Principles that we may not be applying to, again, drop us a line.) In Q1, some of our major projects were closely related to grants that we’d applied for last year.

Zcash. The Zcash ZeWIF project took up the majority of our Q1. The goal was to create an interchangeable wallet format that would make Zcash wallets more interoperable and so give users more freedom to move their funds to an app of their choice. We continued that work in April and then closed it out in May.

Our work product for Q2 of the ZeWIF project included the final drafts of our best practices for importing & exporting wallet data and our doc on using Envelope attachments for ZeWIF. We also held our fourth and final (to date) ZeWIF meeting, which included a demo of our zmigrate-cli tool. Our final two reports from May give all the details on the apps, crates, and docs that we published as we closed out the project.

Zingo Labs. We were proud to do our Zcash work with Zingo Labs, who provided us with some of the Zcash-chain knowledge we needed to extend our interoperability expertise into the Zcash community. (We also got lots of support from other experts in the community through meetings, which is the same way we advance standards in all the ecosystems we work with.) We hope to continue that partnership in the future, and to support that we offered a presentation to Zingo Labs in Q2 highlighting our technologies, how they work, and why they’re useful. We focused on some of the low hanging fruit such as SSKR, which allows for the secure backup of secrets, and OIB, which makes it easier for users to see what they’re doing. We’ll let you know if anything comes of this!

FROST. As soon as we closed out work on our Zcash grant, we began work on a new FROST grant that we received from HRF. This grant’s work will come in three parts: creating new FROST signing tools; writing “Learning FROST from the Command Line”; and holding FROST meetings. We’ve been pushing hard on this work in July and August, so we’ll be writing more about it in the Q3 report.

Ethereum. Though most of Blockchain Commons’ work has traditionally been on the Bitcoin blockchain, our principles of independence, resilience, privacy, and openness apply to all blockchains. Our recent work with Zcash proved that, and so in Q2 we also had some talks with a variety of parties in the Ethereum ecosystem about possibly doing work with them on securing secrets at the level zero of their stack. We’re still waiting to see if anything gels, but generally: if you know of a blockchain that might be looking for interoperability or resilience support, let us know!

Articles & Presentations

Blockchain Commons’ major articles and presentations demonstrate our fundamentals and highlight our newest work. Here’s what that included in Q2.

Interoperability. We talk a lot about ecosystem “openness” and user “freedom”, or more generally “interoperability.” This is a pretty important foundation of Blockchain Commons’ work, so in Q2 we explored it further with the article “Interop, What Is It Good For?” and slides and video at our May Gordian Meeting. We encourage you to take a look at the article or the meeting presentation to gain some more insight into one of the core principles of Blockchain Commons’ work.

Provenance Marks. One of our newest innovations is “provenance marks,” which allow for the creation of a cryptographically-secured chain of marks. We gave a presentation at our June Gordian meeting and also have a research paper on the technology. We additionally presented provenance marks to the W3C Credentials Community Group, which is forming a working group on provenance technology of this type: Blockchain Commons’ provenance marks are one of three possibilities under consideration.
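
As a generic illustration of the chained-marks pattern (not Blockchain Commons' actual construction, which has its own key-committing design and encodings), each mark can commit to its predecessor, so reordering, insertion, or deletion in the sequence becomes detectable.

    import hashlib, os

    def new_mark(prev_mark: bytes, payload: bytes) -> bytes:
        """Produce a mark that cryptographically commits to the previous one."""
        return hashlib.sha256(prev_mark + payload).digest()

    genesis = os.urandom(32)             # random starting point for the chain
    m1 = new_mark(genesis, b"work #1")
    m2 = new_mark(m1, b"work #2")        # verifiably follows m1 in the chain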

Post-Quantum URs. Post-Quantum Cryptography support was one of our most exciting expansions in Q1. We’ve now published a new research paper on integrating PQC with URs.

Permits in Gordian Envelope. Gordian Envelope is a more mature technology at Blockchain Commons, but we’re still exploring its fullest capability. Part of that capability is the “permit,” which is a way to lock your Gordian Envelope. The great thing about permits is that you can apply multiple permits to an Envelope, so that it can be opened by different people in different ways! We wrote a research paper on “Permits in Gordian Envelope” to offer more insights into the possibilities.

Thinking about Identity

Christopher Allen has long been associated with digital identity, dating back at least to his authorship of “The Path to Self-Sovereign Identity” and his founding of the Rebooting the Web of Trust workshops. Blockchain Commons did a variety of scattered work on identity in Q2.

HackerNoon. Christopher talked to Hackernoon about how “We’ve Lost the Privacy Plot”, which generally discusses privacy and the internet.

No Phone Home. Digital identity is closely associated with digital credentials, which detail who and what an identity represents. Unfortunately, credential design is growing problematic because much of it is phoning home: alerting issuers when and where credentials are used. That’s why Blockchain Commons recently signed on to the No Phone Home initiative, to try and bring attention to this fundamental problem in digital identity.

XID Core Concepts. Blockchain Commons has its own answer for self-sovereign identity: the XID, or extensible identifier. We’ve been working on a tutorial course to show everything about how XIDs work. So far, we’ve developed a set of core concepts docs, which not only give an overview of how XIDs work, but also show how they link into Blockchain Commons’ larger architecture. This is still a work in progress. (We’re about to begin work finalizing the linked tutorials.) But if you want to take an early look, the core concepts files have all been closed out as revised drafts.

Fair Witness. Some of Blockchain Commons’ work is advocacy (like our discussion with Hackernoon and our signing on to No Phone Home) and some is pragmatic (like our XID work). But we also try to be future-looking. That’s what Christopher’s “Fair Witnessing” Musings was about. It’s a new look at Verifiable Credentials that focuses on the limitations of what we actually can perceive.

Bitcoin Policy Summit. We’re thrilled to see some of our thinking about identity starting to have an effect on the larger world. A few of Christopher’s identity articles were referenced in the Bitcoin Policy Institute’s recent white paper on “Building a Trustworthy Digital Future” and as a result, Christopher was asked to talk at the Bitcoin Policy Summit in Washington D.C. this June. (More on the results of that in the coming quarters!)

Web Updates

Our web pages are intended as a resource for developers so that they can understand and implement our technologies. Here are some of the updates we made this quarter:

Updated Projects Page. Our projects page has always been a central index to our most important work, but that type of thing gets out of date as priorities change, so we’ve done a big update to align it with our most recent iteration of our developer pages and to otherwise highlight important recent work like our meetings for FROST. Take a look at what we consider our most relevant work as of early 2025!

New Envelope Seeds Page. We also released a new page on Seeds in Gordian Envelope: how and why you’d want to store seeds in envelopes, complete with examples of how to use envelope-cli for experimentation. (This was also the heart of the demo we made to Zingo Labs, so you can take a look at several of our easiest-to-implement technologies here.)

CLI Updates

As usual, we’ve been making updates to our apps and libraries. In Q2 that included two CLI releases:

dCBOR-CLI. We have a new CLI for dCBOR! It validates dCBOR input (using CBOR diagnostic as its default input) and produces output in several formats (using hex as its default output).

Envelope-CLI. Our Envelope CLI now has new pattern matching to make it easier to find specific leaves.

Library Updates

There were also lots of updates to our libraries, focusing on our Rust stack.

Mass Crate Update. The vast majority of our Rust crates have been updated to support new developments that occurred while we were working on ZeWIF. This includes:

- bc-rand 0.4.0
- bc-crypto 0.9.0
- bc-shamir 0.8.0
- dcbor 0.19.0
- bc-tags 0.2.0
- bc-ur 0.9.0
- sskr 0.8.0
- bc-components 0.21.0
- known-values 0.4.0
- bc-envelope 0.28.0
- provenance-mark 0.8.0
- bc-xid 0.8.0
- bc-envelope-cli 0.14.0
- gstp 0.8.0

Argon2id. Argon2id support has been added to bc-crypto and bc-components, as well as the EncryptedKey type.

envelope-pattern. The new envelope-pattern crate is a pattern matcher and text syntax pattern parser for Gordian Envelope, allowing you to match specific structures within envelopes.

dcbor-parse. The new dcbor-parse crate parses and composes the CBOR diagnostic notation into dCBOR (deterministic CBOR) data items.

Here are our major dcbor crate updates:

- dcbor-parse 0.1.1 (NEW)
- dcbor-cli 0.7.1 (heavy update)
- dcbor 0.19.1 (minor changes)

Coming up, we have work on FROST, XID, and more. Sign up for the Gordian Developer Meeting announcement-list to be informed of our upcoming presentations, and please consider becoming a patron of Blockchain Commons or talking with us about partnering on a specific project.

Monday, 04. August 2025

We Are Open co-op

Just talk to people: Why we love community calls

Community calls have been a cornerstone of my engagement and community building practice for well over a decade. I started a couple community calls at Greenpeace International, one of which continues a good 8 or 9 years later. Regular readers have probably heard of the Open Recognition is for Everybody

Community calls have been a cornerstone of my engagement and community building practice for well over a decade. I started a couple of community calls at Greenpeace International, one of which continues a good 8 or 9 years later. Regular readers have probably heard of the Open Recognition is for Everybody (ORE) call. Now, we’re spinning one up with Amnesty International UK.

In this blog post, I reflect on why community calls hold such significance to me. For me, community calls are vital hubs where connections, ideas, and growth converge.

Being open and inclusive cc-by-nd Bryan Mathers for WAO

We all stand on the shoulders of giants. I learned about community calls way back when I was working at Mozilla. At Mozilla, we had community calls for communities, sub-communities, projects and procedures. The majority of our meetings were simply open – if you knew about it, you could join. Sometimes we promoted calls and asked for participation, sometimes we just waited to see who showed up. If you showed up to a call, you were included, whether anyone had specifically invited you or not.

Working openly is a transformative way to bring people together and create safe spaces where people can share challenges, celebrate successes, and co-create solutions. The regularity of designated community calls helps build trust and camaraderie, turning strangers into collaborators. By creating space for diverse voices, we enrich our problem-solving approaches and ensure that decision-making reflects a wide range of perspectives. These inclusive practices have strengthened communities, making them more resilient and adaptable.

Strengthening the community cc-by-nd Bryan Mathers for WAO

If I think about all the different community calls I’ve been a part of, and all the people I’ve met because of open community calls, I’m reminded of the power those calls have to shape things. These calls aren’t presentations, webinars or regular meetings, they are spaces where people have the power to shape the agenda and talk about issues that matter to them.

Part of building successful community calls is to let go of trying to control conversations. While we might put together a loose agenda to guide a community call, we strive to make sure that everyone in attendance feels like their being there matters. Community calls are not transactional, they are spaces that help us be part of a community, find ways to amplify more voices and work together.

Just talk to people cc-by-nd Bryan Mathers for WAO

Part of what makes a community thrive is helping people find a place of belonging. Community calls help by providing a flexible space where people can just talk to each other. They are not just about discussing details or progress in a project, but also about celebrating one another and figuring out what a collective future for a project or an idea might be. They’re great spaces to figure out problems, while also getting to know the people you’re collaborating with.

Community calls have been instrumental in shaping open projects and building meaningful connections. Their role in driving innovation, inclusivity, and adaptability is immeasurable. As I look forward to building open calls for the AIUK community, I am reminded that these calls are not just meetings—they're milestones in a journey towards a more connected and collaborative world.

If you are looking for a way to encourage a group of people to co-create and collaborate, check out 11 steps to running an online community meeting. It’s a resource that I wrote almost a decade ago and continue to return to over and over as I work to connect people working to make the world a better place.

Friday, 01. August 2025

FIDO Alliance

MobileIDWorld: Google Chrome Enhances Security with Mandatory Biometric Authentication for Password Autofill

Google has implemented significant enhancements to biometric authentication and security features in Chrome and Google Workspace, marking the latest step in the company’s broader push toward stronger authentication methods. These […]

Google has implemented significant enhancements to biometric authentication and security features in Chrome and Google Workspace, marking the latest step in the company’s broader push toward stronger authentication methods. These updates build upon previous Chrome security improvements while addressing critical vulnerabilities in desktop password management.

Tuesday, 29. July 2025

The Engine Room

PARTNERING FOR IMPACT: BUILDING SAFE DIGITAL SPACES WITH TECHHER NIGERIA

How can technology serve as a tool for justice, not harm? Our recent six-month Matchbox partnership with TechHer Nigeria helped us explore exactly that.  The post PARTNERING FOR IMPACT: BUILDING SAFE DIGITAL SPACES WITH TECHHER NIGERIA appeared first on The Engine Room.

How can technology serve as a tool for justice, not harm? Our recent six-month Matchbox partnership with TechHer Nigeria helped us explore exactly that. 

The post PARTNERING FOR IMPACT: BUILDING SAFE DIGITAL SPACES WITH TECHHER NIGERIA appeared first on The Engine Room.


Digital Identity NZ

Open spaces, open hearts, open minds…time to add Open data to the mix!

The time has come to put open data at the heart of the New Zealand story.  By this I mean the deployment of digital public infrastructure to secure our data … Continue reading "Open spaces, open hearts, open minds…time to add Open data to the mix!" The post Open spaces, open hearts, open minds…time to add Open data to the mix! appeared first on Digital Identity New Zealand.

The time has come to put open data at the heart of the New Zealand story.  By this I mean the deployment of digital public infrastructure to secure our data so it can flow smoothly and safely with the correct permissions. Change is hard. System change is even harder. But that doesn’t mean we shouldn’t be aspirational.

It is encouraging to see Hon Judith Collins‘ “we’ll have a Government app by Christmas” programme take a significant step forward with the appointment of two exceptional New Zealand tech companies.

Congratulations to both Dave Clark NZ and MATTR teams. This is exactly the kind of partnership between government and local tech that strengthens our digital economy and showcases New Zealand innovation on the world stage – DIA announcement here.

The structural separation of digital identity from the traditional tech stack represents a seismic shift for the industry and for how systems are designed and developed. Despite extensive consultation as the technology evolved over the past few years, the long-awaited reference architecture and design for government implementation are now a priority.

As we build our profile, we are grateful that so many accomplished changemakers are stepping up to speak at our highly anticipated Digital Trust Hui on 12 August:

Matthew Evetts – Partner – Digital & Cyber, KPMG
Tim Ransom – Product Manager – Public, Community and Consumer Health, Te Whatu Ora
Joel Foster – Chief Commercial Officer, Lumin
Helen Littlewood – Senior Product Manager, Worldline Contactless
Silona Bonewald – President, LeadingBit Solutions | Open Source Evangelist & Standards Expert
Kristy Phillips – Chair, Hospitality New Zealand
Anna Curzon – Chair, B416
Don Christie – Managing Director, Catalyst IT
Myles Ward – Deputy Government Chief Digital Officer, DIA
Maria Robertson – Chair, Digital Identity New Zealand

It will be beneficial at the Hui to see DIA outline the work underway to address policy and regulatory obstacles for accredited providers, specifically regarding the issuance of credentials such as driving, education, travel, and age assurance, as well as DIA-issued credentials under DISTF accreditation.

Have you secured your spot at the Digital Trust Hui Taumata?

Join us on 12 August in Wellington for a full day of keynotes, panels, roundtables, and exhibits, expertly guided by MC Ngapera Riley. Hear from leaders including Ministers Judith Collins and Scott Simpson, James Monaghan (MISSION), Myles Ward (DIA), Liz MacPherson (Privacy Commissioner), Helen Littlewood (Worldline), Christopher Goh (Austroads), and Andrew Weaver (Payments NZ). With 30+ speakers and support from sponsors including Payments NZ, Worldline, IMB, KPMG, Lumin, Ping Identity, NEC, DIA, Westpac, MATTR, Middleware Group, JNCTN, and MinterEllisonRuddWatts, this is a must-attend event shaping Aotearoa’s digital trust future. View the Programme and Register now!

DINZ Strategy Refresh

New Zealand stands at the frontier of a digital future where open, trusted data unlocks better services, richer insights, and empowered citizens.

There is an increasing consensus among government, industry, Iwi, communities, and citizens to collaboratively undertake a bold, values-led transformation of our data systems. This transformation will be underpinned by transparent and auditable public infrastructure.

The upcoming digital identity strategy refresh will be looking at, amongst other things, how to accelerate credential uptake, how to unlock productivity with digital identity and the role of identity solutions in the world of AI and agentic systems.

I’m looking forward to this session with our Executive Council in the coming weeks as we work to focus our energy on making real change to support our vision of a country where people can express their identity using validated and trusted digital means in order to fully participate in a digital economy and society.

In Other News
 

Dave Clark NZ and MATTR secure major government contract
Dave Clark NZ and digital identity firm MATTR have won a significant New Zealand government contract to develop a new app focused on secure digital identity services. Read more.
  Digital ID developments won’t replace wallets just yet
Despite progress in digital identity systems, experts say physical IDs and wallets will remain necessary for the foreseeable future due to infrastructure and adoption challenges. Read more.
  New Zealand launches world-first deepfake experiment to build public trust
New Zealand is leading a groundbreaking deepfake detection trial aimed at boosting public awareness and resilience against synthetic media manipulation. Read more.
  Department of Internal Affairs (DIA) announces updated rules for the DISTF
The feedback has been thoroughly analysed and presented to both the Trust Framework Board and the Minister for Digitising Government. As a result, the updated rules came into force on 24 July 2025. For a comprehensive overview of the updated rules and a summary of the feedback received, please visit the Trust Framework for Digital Identity Legislation page.

With a realistic government timeline now established, we must collectively prepare for launch. There is a significant amount of work ahead, and no time to lose.

Fortunately, our Digital Identity NZ membership possesses world-class capabilities, ideally suited to assist with consultation and bridge delivery gaps as we move forward.

Ngā mihi nui,

Andy Higgs
Executive Director, Digital Identity NZ

Banner image credit: Rocket Lab

The post Open spaces, open hearts, open minds…time to add Open data to the mix! appeared first on Digital Identity New Zealand.

Monday, 28. July 2025

MyData

Saving Journalism with Sectoral Data Bargaining

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. High-quality journalism is critical […]

DIF Blog

DIF Newsletter #52

August 2025 DIF Website | DIF Mailing Lists | Meeting Recording Archive Table of contents Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Get involved! Join DIF 🚀 Decentralized Identity Foundation News DIF Partners with MIT's Project NANDA

August 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Partners with MIT's Project NANDA for Packed Event

DIF partnered with MIT's Project NANDA for a well-attended event at Circuit Launch on July 18, 2025, where Ramesh Raskar shared his vision for architecting an Internet of AI Agents. DIF Technical Steering Committee chair Andor Kesselman served as co-host, with robust discussion focusing on the relevance of decentralized identity for the agentic web.

During and after the event, discussion covered how DIF's decentralized identity standards can provide the cryptographic foundation for Project NANDA's agentic web, enabling persistent, verifiable identifiers for AI agents operating autonomously across organizational boundaries. This collaboration addresses the critical challenge of maintaining human oversight and trust as AI systems gain greater agency, ensuring that robust identity infrastructure underpins the future of AI collaboration.

DIF co-hosted event with Project NANDA in Mountain View, CA on July 18, 2025

Global Digital Collaboration Conference in Geneva

DIF staff and community had a strong presence at the Global Digital Collaboration (GDC) conference in Geneva on July 1-2, 2025, with our community delivering key presentations across multiple tracks. Highlights included our opening session on Agentic AI and Digital ID, addressing critical trust challenges as AI agents become more autonomous. Additional DIF-led sessions covered identity foundations for Industry 4.0 transformation, decentralized identifiers for global interoperability, and digital human rights.

DIF community at GDC

The conference demonstrated the maturity of decentralized identity solutions, with presentations spanning from privacy-enhancing technologies and trust management for wallets to enterprise blockchain implementations and regulatory compliance frameworks. The diverse roster of sessions highlighted how decentralized identity is being deployed across sectors—from government digital infrastructure projects to enterprise security applications—while fostering meaningful dialogue between technologists, policymakers, and standards bodies globally.

Visit the GDC website for session videos and stay tuned for the upcoming Book of Proceedings featuring detailed insights from all presentations.

DIF Labs Beta Cohort 2 Projects Launch

Following the selection process completed in June, DIF Labs Beta Cohort 2 is now in full swing with three innovative projects that push the boundaries of decentralized identity technology. The selected projects focus on:

Privacy-preserving verifiable credentials using advanced cryptographic techniques
Anonymous multi-signature protocols for group credentials
Novel approaches to privacy-preserving revocation mechanisms

Each project team is receiving dedicated mentorship from DIF's expert community and is committed to delivering open-source implementations that will benefit the entire ecosystem. The projects are scheduled to run through September 2025, with regular check-ins and a final showcase event planned.

See the DIF Labs web site for more information.

DIF Showcases Decentralized Identity for Seamless Travel Experience at HITEC Conference

DIF presented at HITEC 2025 in Indianapolis, where hospitality technology veterans mapped out how self-sovereign identity can align data privacy with friction-free travel experiences. The session featured Douglas Rice (Managing Director, Hospitality Technology Network), Nick Price (former CIO, Mandarin Oriental Hotel Group & citizenM), Kim Hamilton Duffy (DIF Executive Director), and Bill Carroll (CEO, Marketing Economics; retired Cornell professor). The speakers demonstrated how traveler-controlled digital wallets can enable seamless journeys—from AI-powered trip planning with auto-completing visa forms to face-scan boarding, NFC room keys, and verified guest reviews.

The presentation highlighted real-world momentum with examples including EU "Seamless Travel" pilots across 27 member states, Cathay Pacific's Hong Kong-Tokyo trial packing seven credential types into passenger wallets, and Bhutan's nationwide digital identity program built on decentralized identifiers. The speakers emphasized that centralized data silos are incompatible with the hyper-personalization guests demand, while self-sovereign identity delivers immediate benefits including reduced breach risks, lower cyber-insurance premiums, and improved revenue through verified, up-to-date traveler data.

See Alex Bainbridge's article for a detailed recap, as well as the recording and presentation deck. Additional coverage available at PhocusWire.

DIF H&T team at HITEC conference

🛠️ Working Group Updates

Browse our active working groups here

Creator Assertions Working Group

The Creator Assertions Working Group made significant progress on terminology standardization and integration with broader credential frameworks. A key development was the group's decision to transition from "identity assertions" to "attribution assertions," recognizing that "attribution" better captures the essence of content creation claims while being less controversial in industry discussions. The team continued advancing their integration with the C2PA ecosystem and made substantial progress on metadata assertion standards. Work also progressed on media identifier systems and the development of flexible metadata frameworks that can accommodate various content types and use cases.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ team achieved significant milestones in pseudonym generation and post-quantum security considerations. Key developments included the finalization of polynomial evaluation methods for pseudonym generation, addressing potential security vulnerabilities from adversarial users through more robust cryptographic approaches. The team made substantial progress on test vector development for the 0.9 draft release and continued coordination with IETF standardization efforts. Discussions also covered the efficiency implications of different cryptographic commitment schemes and their practical applications in large-scale deployments.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused intensively on W3C standardization efforts and refining the evaluation process for DIF-recommended DID methods. Significant progress was made on the proposed W3C DID Methods Working Group charter, with the team addressing concerns about blockchain inclusion and standardization scope. The group refined evaluation criteria for DID method proposals, emphasizing the need for multiple implementations, significant deployments, and clear compliance with DID traits. Work continued on balancing objective criteria with expert evaluation to ensure high-quality recommendations while maintaining transparency in the assessment process.

👉 Learn more and get involved

Identifiers and Discovery Working Group

Multiple work streams advanced significantly this month. The DID:webvh team made substantial progress toward their 1.0 specification release, with multiple implementations now passing comprehensive test suites and performance analysis demonstrating efficient handling of DID updates. The DID Traits team prepared for their 1.0 release, focusing on key validation capabilities and long-term availability requirements. The group also explored applications in software supply chain contexts and examined compliance with emerging regulations like the EU's Cyber Resilience Act, demonstrating the practical relevance of decentralized identifiers in enterprise environments.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced work on binary encoding support through the CBOR implementation, positioning it as an optional feature for version 2.2 with potential to become the default in future major releases. The team addressed important technical challenges around message encoding detection, MIME type handling, and implementation compatibility. Significant discussions covered privacy considerations and "phone home" concerns in credential verification systems, with the group exploring how verifiable credentials can be presented without requiring direct communication with issuers. The group also examined DIDComm applications in AI agent-to-agent communications.

👉 Learn more and get involved

Claims & Credentials Working Group

The Credential Schemas team launched their community schemas initiative, creating a framework for organizations to contribute verifiable credential schemas to a shared repository for potential standardization. Significant progress was made on aligning their basic person schema with schema.org standards while maintaining compatibility with existing frameworks like OIDC and UK ID assurance. Key developments included extending postal address schemas for banking KYC requirements, refining terminology around personhood verification credentials, and establishing processes for schema synchronization between repositories. The team also began exploring employment credentials and anti-money laundering certifications as future development priorities.

👉 Learn more and get involved

Hospitality & Travel Working Group

The newly launched Hospitality & Travel Working Group hit the ground running with substantial progress on the HAT Pro specification. The team developed comprehensive schemas for food preferences, dietary restrictions, and accessibility requirements, utilizing graph-based models to avoid data duplication and improve cross-referencing capabilities. Key developments included the creation of UML models and JSON schemas for complex preference structures, exploration of AI-assisted data input to simplify user experiences, and the establishment of engagement processes for subject matter experts across various travel sectors. The group is preparing for major presentations at industry events and has launched a dedicated website to showcase their work.

👉 Learn more and get involved

DIF Labs Working Group

DIF Labs Beta Cohort 2 projects are now in active development phase, with three selected projects working on cutting-edge privacy-preserving technologies. The projects focus on legally binding verifiable credentials using Qualified Electronic Signatures (QES), comparative analysis of privacy-preserving revocation mechanisms, and anonymous multi-signature verifiable credentials. Each project team is receiving dedicated mentorship from DIF's expert community and is committed to delivering open-source implementations that will advance the broader decentralized identity ecosystem. The program continues to demonstrate the value of focused, mentored development in advancing the state of the art.

👉 Learn more and get involved

🌎 DIF Special Interest Group Updates

Browse our special interest groups here

DIF Africa SIG

The Africa SIG featured an impressive deep-dive presentation on Ethiopia's national digital identity system, Faida, and its associated digital credential platform FaidaPass. Representatives from Ethiopia's national ID system provided detailed insights into the architecture, features, and implementation of this groundbreaking system, which serves as one of the first full-scale standards-compliant deployments globally. The presentation highlighted the use of decentralized verification models, biometric authentication capabilities, and self-sovereign identity principles, while addressing innovative solutions for non-smartphone users and future monetization strategies. The session demonstrated Africa's leadership in practical digital identity implementation.

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted comprehensive presentations on digital identity solutions in Australia, featuring True Vault's approach to creating a decentralized identity ecosystem. Discussions covered recent regulatory changes including the Digital Identity Act, international standards alignment, and the challenges of achieving interoperability across different jurisdictions. The group explored the evolution from manual to digital identity verification methods and examined the potential for global expansion of digital identity solutions while addressing privacy concerns and user control requirements. The session highlighted Australia's voluntary approach to digital identity and its implications for regional adoption.

👉 Learn more and get involved

DIF Japan SIG

The Japan SIG focused on recent developments in DID and AI agent authentication, with participants sharing updates on their organizations' initiatives and emerging challenges. The group explored the intersection of decentralized identity with AI systems and examined potential applications across various sectors. Key discussions included the unique requirements for AI agent identity management and the challenges of implementing decentralized identity principles in automated systems. The group also considered future meeting formats and potential offline events to enhance community engagement and collaboration.

👉 Learn more and get involved

DIF Hospitality & Travel SIG

The Hospitality & Travel SIG hosted presentations highlighting decentralized identity adoption in the travel industry. Key sessions included discussions with Microsoft on AI-driven cybersecurity solutions for hospitality, analysis of Apple's digital identity announcements and their implications for travel, and exploration of AI agents' potential to revolutionize customer interactions in travel and hospitality. The group examined both opportunities and challenges in implementing decentralized identity solutions across various travel scenarios, from border crossing to personalized service delivery.

👉 Learn more and get involved

📖 DIF User Group Updates

DIDComm User Group

The DIDComm User Group explored practical implementations and emerging applications of the DIDComm protocol, with particular focus on AI agent communications. Key discussions included demonstrations of new systems and their communication protocols, exploration of generative AI communication frameworks and their similarities to DIDComm approaches, and examination of security considerations for AI agent interactions.

👉 Learn more and get involved

📢 Announcements

H&T Working Group Launches Blog

As Hospitality & Travel activity increases at DIF, Autoura CEO Alex Bainbridge has launched a special H&T focused blog. We encourage you to visit and subscribe for updates.

🆔 Get involved! Join DIF

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

🐦 Follow us on Twitter/X
💻 Join us on GitHub
📺 Subscribe on YouTube
🔍 Read the DIF blog

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation if you need more information.

Thursday, 24. July 2025

Oasis Open

LegalXML – Electronic Court Filing V5.01 Errata Public Review

The LegalXML – Electronic Court Filing TC is pleased to announce that ECF V5.01 Errata 01 is now available for public review and comment. The public review is now open and ends August 9, 2025 at 23:59 UTC. Electronic Court Filing Version 5.01 Errata 01, Committee Specification Draft 01, 21 June 2025. https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.docx […]

ends August 9th

The LegalXML – Electronic Court Filing TC is pleased to announce that ECF V5.01 Errata 01 is now available for public review and comment. The public review is now open and ends August 9, 2025 at 23:59 UTC.

Electronic Court Filing Version 5.01 Errata 01
Committee Specification Draft 01
21 June 2025

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.docx

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.html

https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/errata01/ecf-v5.01-errata01-csd01.pdf

How to Provide Feedback

OASIS and the LegalXML – Electronic Court Filing TC value your feedback. We solicit input from developers, users, and others, whether OASIS members or not, to improve the interoperability and quality of this technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. TC members should send in comments via the TC mailing list. All others should submit to the comment mailing list after following instructions listed here.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1], applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member's patent, copyright, trademark, and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the LegalXML – Electronic Court Filing TC can be found at the TC public home page here.

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] http://www.oasis-open.org/committees/legalxml-courtfiling/ipr.php

Intellectual Property Rights (IPR) Policy

RF on Limited Terms Mode

The post LegalXML – Electronic Court Filing V5.01 Errata Public Review appeared first on OASIS Open.


Anthropic, EY, and Johns Hopkins APL Executives Elected to OASIS Board to Drive Open Development and Global Collaboration


Jay White of Microsoft and Pablo Breuer Secure Additional Terms, Reinforcing Continuity and Strategic Momentum at OASIS

Boston, MA, 24 July 2025 – OASIS Open, the international standards and open source consortium, announced the results of its 2025 Board of Directors Elections. Three newly elected members and two re-elected members joined the Board in providing strategic governance and leadership to advance OASIS’s mission of solving global challenges through open development and collaboration. 

OASIS is pleased to welcome new board directors Jason Clinton of Anthropic, Charles Frick of Johns Hopkins Applied Physics Laboratory (APL), and Sarah Liang of EY. OASIS also congratulates Jay White of Microsoft and Pablo Breuer on their re-election to additional terms. The continuing members of the Board are Jim Cabral, Gershon Janssen, Bret Jordan, Vasileios Mavroeidis, Daniel Rohrer, and Daniella Taveau.

“I’m delighted to welcome Jason, Charles, and Sarah to the Board, and to congratulate Jay and Pablo on their re-election,” said Gershon Janssen of Reideate, OASIS Board President and Chair. “Their expertise will be instrumental as we continue shaping open standards that address global challenges. I also want to thank our departing directors Jason, Daniel, and Omar for their outstanding service and the lasting impact they’ve made on the OASIS community.”

New Board Members

Jason Clinton serves as Chief Information Security Officer (CISO) at Anthropic, where he guides security strategy, including detection and response, compliance, physical security, security engineering, and IT. He brings over a decade of experience in infrastructure security, having previously led efforts in defense against advanced persistent threats and contributed to major operating system and payment platform development. Jason also serves on the Coalition for Secure AI’s (CoSAI) Project Governing Board.

“The rapid advancement of AI makes robust, open standards more crucial than ever,” said Clinton. “Through initiatives like CoSAI, OASIS brings together industry leaders to develop frameworks that protect users while enabling innovation. I’m honored to join the OASIS board and work alongside leaders who share this commitment to responsible AI development.”

Charles Frick, a Chief Scientist in the cyber capabilities development group at Johns Hopkins Applied Physics Laboratory (APL), leads multiple research and pilot efforts focused on cybersecurity automation, machine-speed threat information sharing and operational resilience. He chairs the Indicators of Behavior (IoB) sub-project within the Open Cybersecurity Alliance (OCA), guiding development and adoption of new standards for behavior-based threat intelligence.

“As cybersecurity threats continue to evolve, transparency, interoperability and collaboration are essential,” said Frick. “I’m honored to join the OASIS board and contribute to its vital mission at the intersection of open standards and open source. I look forward to advancing standards that support automation, resilience and trust—especially in areas like behavior-based threat intelligence and cyber-physical system security—so that we can better protect critical infrastructure and global digital ecosystems.”

Sarah Liang is a Partner at EY and serves as the Global Responsible AI Leader, where she drives comprehensive AI governance initiatives throughout the firm and for client solutions worldwide. Her expertise encompasses monitoring the regulatory landscape, aligning with legal and compliance standards, designing AI risk management solutions, and developing responsible AI frameworks that operationalize governance without hindering innovation. Sarah actively participates in key standards organizations, including CoSAI, bringing cross-industry insights and collaborative approaches to standards development.

Liang noted, “I’m honored to join the OASIS Board of Directors. OASIS and the EY organization share a vision of driving the beneficial long-term impact of AI use through transparency, security, trust, and scalability. We have a responsibility to act now and define global standards for AI development and deployment. I look forward to working alongside my fellow board members to help transform businesses through sustainable growth and innovation, while contributing to long-lasting positive change.”

Outgoing Board Members

OASIS expressed sincere appreciation to outgoing Board members Jason Keirstead, Daniel Riedel, and Omar Santos for their valuable service during their tenure as directors. Everyone at OASIS extends our heartfelt thanks for their dedicated leadership and lasting contributions to the organization’s mission. To view the current Board of Directors, please visit our website.

Media inquiries: communications@oasis-open.org

The post Anthropic, EY, and Johns Hopkins APL Executives Elected to OASIS Board to Drive Open Development and Global Collaboration appeared first on OASIS Open.


Internet Safety Labs (Me2B)

PEPR ’25 – Safetypedia: Crowdsourcing Privacy Inspections


ISL had the opportunity to present at USENIX Association’s PEPR 2025 conference with a presentation entitled, “Safetypedia: Crowdsourcing Privacy Inspections”. The full video of the presentation can be viewed below:

The post PEPR ’25 – Safetypedia: Crowdsourcing Privacy Inspections appeared first on Internet Safety Labs.


FIDO Alliance

9to5Mac: Apple @ Work: Passkey portability is finally here in iOS 26 and macOS Tahoe 26


With iOS 26 and macOS Tahoe 26, Apple is solving a key problem. For the first time, Apple is adding support for true passkey portability. This means you can move your credentials from Apple Passwords to a dedicated password manager like 1Password, Dashlane, or Bitwarden, and even move them back. The system handles the transfer securely and locally, so you don’t have to worry about exporting plain-text CSV files and crossing your fingers that nothing gets exposed.

Passkey portability is built on a new standard from the FIDO Alliance that lets apps exchange credentials in a private and encrypted way. It uses Face ID, Touch ID, or your device passcode to approve the transfer. From the user’s perspective, it just works. And that’s exactly how it should be.
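For readers who want a feel for what “private and encrypted” means mechanically, here is a minimal sketch using the standard Web Crypto API. It is illustrative only: the FIDO credential-exchange specifications define their own message formats, and every name below is an assumption rather than part of Apple’s or FIDO’s API.

```typescript
// Illustrative only: the FIDO credential-exchange work defines its own
// message formats; this sketch just shows the general shape of a local,
// encrypted hand-off between two credential managers.

async function deriveTransferKey(
  myPrivateKey: CryptoKey,
  peerPublicKey: CryptoKey,
): Promise<CryptoKey> {
  // ECDH key agreement between the exporting and importing apps.
  return crypto.subtle.deriveKey(
    { name: "ECDH", public: peerPublicKey },
    myPrivateKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
}

async function encryptCredentialBundle(
  key: CryptoKey,
  bundle: unknown,
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const plaintext = new TextEncoder().encode(JSON.stringify(bundle));
  // Nothing here touches the network: the transfer stays on-device,
  // gated by Face ID / Touch ID before the key is released.
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    plaintext,
  );
  return { iv, ciphertext };
}
```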


Reddit Implements Mandatory ID Verification for UK Users Under Online Safety Act


Reddit has implemented mandatory age verification for UK users to comply with the country’s Online Safety Act, which took effect in July 2025. The legislation requires digital platforms to prevent minors from accessing unsafe content, particularly mature or adult material, following Ofcom’s broader push for stricter online age verification across digital platforms.

The platform’s verification system requires UK users to submit either a government-issued ID, such as a passport, or a selfie through Persona, a third-party identity verification company. The approach follows successful implementations by other platforms, including Discord’s recent rollout of facial scan and ID verification in the UK. Persona handles the sensitive data to maintain user privacy, storing uploaded photos or IDs for a maximum of seven days without sharing the information with Reddit.

Reddit retains only the user’s verification status and birthdate, eliminating the need for repeated verification when accessing restricted content. Persona has confirmed it does not access Reddit user data, including subreddit activity. This privacy-focused approach aligns with emerging standards in digital identity verification, including the principles established by the FIDO Alliance’s certification program for face-based remote identity verification.
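The data-minimization arrangement described above is easy to picture in code: the platform persists only the verification outcome and birthdate, never the document images. A minimal sketch, with hypothetical field names (Persona’s actual API and payloads differ):

```typescript
// Hypothetical webhook payload shape; Persona's real API differs.
interface VerificationResult {
  userId: string;
  status: "approved" | "declined";
  birthdate: string;      // ISO 8601 date read from the checked document
  documentImage?: string; // held briefly by the IDV provider, never by the platform
}

// Persist only what the article says the platform keeps.
function minimizeRecord(result: VerificationResult) {
  const { userId, status, birthdate } = result;
  // documentImage is deliberately dropped here; per the described policy,
  // the IDV provider deletes its own copy within seven days.
  return { userId, status, birthdate, verifiedAt: new Date().toISOString() };
}
```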


The Engine Room

DANCES, KEYS, AND GARDENS: NURTURING COLLECTIVE DIGITAL CARE


Over the past three months, The Engine Room and Puentes have been exploring how to nurture connection, online and beyond the screen, in a world where division often seems to be the norm.

The post DANCES, KEYS, AND GARDENS: NURTURING COLLECTIVE DIGITAL CARE appeared first on The Engine Room.

Wednesday, 23. July 2025

Project VRM

How CMPs Can Make Hay With Real VRM


By now you’ve seen one of these:

Never mind that you’re not running an ad blocker, but merely blocking tracking. Instead, note the small print in the lower right: “VRM by Admiral.”

By “VRM,” Admiral means this:

What we’re looking at here is the $0.5 billion Consent Management Platform business, currently dominated worldwide by OneTrust, with a 40% market share. In the US, Admiral is the leading provider to publishers, giving it a high profile there. In Europe, the leaders are OneTrust, Usercentrics, and CookieYes.

So here is a challenge for Admiral, OneTrust, and the rest of them: make VRM mean Vendor Relationship Management (like it says in Wikipedia).

Our case: real relationships are based on mutual trust, which can only happen if personal privacy is fully respected as a starting point. Consent management by cookie notice can’t cut it. For real trust, we need people to bring their own terms to every website’s table, and to have those terms agreed to. This is why we, the ProjectVRM community, through Customer Commons (our nonprofit spinoff) and the IEEE P7012 (aka MyTerms) working group, created the draft standard (on track to become official early next year) for machine-readable personal privacy terms. Three years ago, I called MyTerms The Most Important Standard in Development Today. The CMP business can help make it so, by getting on the Cluetrain.

Here are some opportunities:

1. CMPs can provide sites & services with easy ways to respond to MyTerms choices brought to the table by visitors. Let’s call this a Terms Matching Engine (a sketch follows this list). The current roster of terms we’re working with at Customer Commons (abbreviated CuCo, hence the cuco.org shortcut) starts with CC-BASE, which is “service provision only.” It says to a website, “just give me your service, and nothing more.” In other words, no tracking. Yet. Negotiation toward additional provisions comes after that. Those can be anything, but they should be in the spirit of “We’re starting with personal privacy here, and the visitor sets the terms for that.”
2. There is a whole new business (which, like the VPN, grammar-help, and password management businesses, people would pay for) in helping people present, manage, remember, and monitor compliance with their terms, and with whatever additional agreements have been arrived at. This can involve browser add-ons such as the one pictured on the ProjectVRM r-button page. CMP companies can make money there too, adding a C2B business to their B2B ones.
3. Go beyond #2 to provide real VRM. Back in the last millennium, Iain Henderson pointed out that B2B relationships tend to have hundreds or thousands of variables over which both parties need to agree. Nitin Badjatia, another CRM veteran (and a Customer Commons board member, like Iain and myself), has also pointed out that companies like Oracle have long provided AI-assisted ways for B2B relationships to arrive at contractual agreements. The same can work for C2B, once the base privacy agreement is established. There can be a business here that expands on what gets started with that first agreement.
4. Verticals. There can be strong value-adds for regulated industries, or for companies wanting to acquire and signal accountability, or looking for firmer ways to establish a privacy regime better than so-called consent, which doesn’t work (except as thin ass-covering for companies fearing the GDPR and the CCPA). For example: banks, insurers, publishers, health care providers.
5. For people (not just corporate clients), CMPs could offer browser plugins or apps (mobile and/or computer) that help people choose and present their privacy terms, track who honors them, notify them of violations, and have r-buttons mean something. Or multiple things.
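To make the Terms Matching Engine idea concrete, here is a minimal sketch. Apart from the CC-BASE identifier from the Customer Commons roster, every shape below is an assumption; MyTerms specifies the terms and the fact of agreement, not this API.

```typescript
// Hypothetical shapes for illustration; MyTerms defines the terms and the
// fact of agreement, not this API. "CC-BASE" is the Customer Commons
// base term: service provision only, no tracking.
type TermId = string;

interface SiteTermsPolicy {
  accepted: Set<TermId>;   // terms the site agrees to outright
  negotiable: Set<TermId>; // terms that open a counter-offer flow
}

function matchTerms(
  policy: SiteTermsPolicy,
  offered: TermId,
): "agreed" | "negotiate" | "declined" {
  if (policy.accepted.has(offered)) return "agreed";
  if (policy.negotiable.has(offered)) return "negotiate";
  return "declined";
}

// A site that accepts CC-BASE outright needs no cookie notice:
const policy: SiteTermsPolicy = {
  accepted: new Set(["CC-BASE"]),
  negotiable: new Set(),
};
console.log(matchTerms(policy, "CC-BASE")); // "agreed"
```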

Here is what a VRM-friendly person in the UK came up with as a prototypical first step by a CMP away from cookie notices:

That was after this post went up.  (Which is great.)

Obviously, we want cookie notices (and other forms of friction) to go away, but we also want CMPs to have a nice way to participate in a customer-led world in which intention-based economies can grow.

And here is an example of r-buttons in a browser:

Real relationships, including records of agreements, can be unpacked when a person (not a mere “user”) clicks on either the ⊂ or the ⊃ symbols. There are golden opportunities here for both VRM and CRM vendors. And, of course, companies such as Admiral and OneTrust working both sides—and being truly trusted.

Give us more. (Like that cookie notice above.)


The Rubric

Verifying a Web of Truth (did:webvh, Part 2)

did:webvh adds historical verifiability to did:web, using cryptographic provenance to establish that the current DID document is the result of legitimate updates by the DID controller. Today on the show we talk with Stephen Curran and John Jordan, co-creators and implementers of the did:webvh specification.

References: ACA-py Plug In (https://aca-py.org/latest/features/PlugIns/), AnonCreds Specification (https://hyperledger.github.io/anoncreds-spec/), DID…

Verifying a Web of Truth (did:webvh, Part 1)

did:webvh adds historical verifiability to did:web, using cryptographic provenance to establish that the current DID document is the result of legitimate updates by the DID controller. Today on the show we talk with Stephen Curran and John Jordan, co-creators and implementers of the did:webvh specification.

References: ACA-py Plug In (https://aca-py.org/latest/features/PlugIns/), AnonCreds Specification (https://hyperledger.github.io/anoncreds-spec/), DID…
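The mechanism both episodes describe is, at heart, a hash-chained version log: each entry commits to its predecessor, so a verifier can replay the full history of the DID document. A simplified sketch of that chaining (real did:webvh log entries also carry parameters and signed proofs, which this omits):

```typescript
import { createHash } from "node:crypto";

// Simplified: real did:webvh log entries also carry parameters and
// signed proofs; only the hash chaining is shown here.
interface LogEntry {
  versionId: string;    // e.g. "3-<entryHash>" in the real method
  previousHash: string; // commitment to the prior entry
  didDocument: object;  // the DID document as of this version
}

function entryHash(entry: LogEntry): string {
  const material = JSON.stringify({
    previousHash: entry.previousHash,
    didDocument: entry.didDocument,
  });
  return createHash("sha256").update(material).digest("hex");
}

// Walk the log front to back: each entry must commit to its predecessor,
// so the latest DID document is provably the result of recorded updates.
function verifyChain(log: LogEntry[]): boolean {
  for (let i = 1; i < log.length; i++) {
    if (log[i].previousHash !== entryHash(log[i - 1])) return false;
    // A full verifier would also check each entry's proof against the
    // keys the previous version authorized for updates.
  }
  return true;
}
```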

Next Level Supply Chain Podcast with GS1

Beyond the Box: Navigating EPR in Modern Supply Chains


The packaging around your product now matters just as much as the product itself.

For companies navigating Extended Producer Responsibility (EPR) laws, that's quickly becoming the reality. And it's reshaping how teams think about data, packaging, and compliance.

In this episode, Lindsay Savage, Senior Director of Data Governance and Business Platforms at Georgia-Pacific LLC, joins hosts Reid Jackson and Liz Sertl to demystify what EPR means for manufacturers and retailers. 

With legislation ramping up across states, Lindsay explains how brands are preparing for complex reporting requirements, coordinating across departments, and turning sustainability regulations into opportunities for smarter product innovation.

In this episode, you'll learn:

Why EPR is more than a packaging issue and why it matters now

How Georgia-Pacific is building scalable systems to manage regulatory data

Tips for companies just getting started, from legal teams to logistics

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:21) Lindsay Savage's background

(03:16) What EPR means for your business

(04:50) Every state has its own rules

(06:15) Data overload and the push for standards

(07:36) Breaking down product, packaging, and pallet

(09:27) Two ways to report EPR data

(11:20) How to get started with EPR compliance

(12:40) Building cross-functional teams for success

(17:55) Embracing AI tools to stay ahead


Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn


Connect with the guest:

Lindsay Savage on LinkedIn
Check out Georgia-Pacific

Tuesday, 22. July 2025

MyData

A Human-Centric Roadmap for Europe

MyData Global has submitted A Human-Centric Roadmap for Europe to the European Commission’s public consultation on Data Union Strategy. The Roadmap has incorporated input from the MyData community, and the […]

Thursday, 17. July 2025

Oasis Open

Coalition for Secure AI Marks First Anniversary with New Principles for Agentic Systems and Defender Frameworks


Global Participation Expands as the Coalition Releases Essential AI Guidance

Boston, MA – 17 July 2025 – The Coalition for Secure AI (CoSAI), an OASIS Open Project, celebrates its first anniversary since launching at the Aspen Security Forum in 2024. Over the past year, CoSAI has grown into the industry’s leading collaborative ecosystem for AI security, expanding from its initial founding sponsors to more than 45 partner organizations worldwide. Its mission to enhance trust and security in AI development and deployment has resonated widely, attracting premier sponsors EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Protect AI, Snyk, Trend Micro, and Zscaler. Through multiple workstreams, the coalition has produced practical frameworks and research addressing real-world challenges in securing AI systems. Central to CoSAI’s impact this year are the most recent releases of the “Principles for Secure-by-Design Agentic Systems,” which establishes three core principles for autonomous AI, and the “Preparing Defenders of AI Systems” whitepaper. 

Security Principles Help Safeguard Agentic AI Systems

CoSAI’s Technical Steering Committee (TSC) has released the “Principles for Secure-by-Design Agentic Systems,” a foundational document aimed at helping technical practitioners address the unique security challenges posed by autonomous AI. 

The principles offer practical guidance on balancing operational agility with robust security controls, establishing that secure agentic systems should be Human-governed and Accountable, architected for meaningful control with clear accountability, constrained by well-defined authority boundaries aligned with risk tolerance, and subject to risk-based controls ensuring alignment with expected business outcomes. They must be Bounded and Resilient, with strict purpose-specific entitlements, robust defensive measures including AI-specific protections, and continuous validation with predictable failure modes. Finally, they should be Transparent and Verifiable, supported by secure AI supply chain controls, comprehensive telemetry of all system activities, and real-time monitoring capabilities for oversight and incident response. 
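As a rough illustration of how “well-defined authority boundaries” and “purpose-specific entitlements” might look in practice, here is a minimal sketch. The shapes are hypothetical and are not taken from the CoSAI document:

```typescript
// Hypothetical entitlement model; the CoSAI principles describe
// properties of secure agentic systems, not this specific API.
interface AgentEntitlement {
  action: string;         // e.g. "crm.update"
  resourcePrefix: string; // e.g. "crm/contacts/"
  maxRiskTier: number;    // ceiling aligned with business risk tolerance
}

interface AgentRequest {
  agentId: string;
  action: string;
  resource: string;
  riskTier: number;
}

function authorize(entitlements: AgentEntitlement[], req: AgentRequest): boolean {
  const allowed = entitlements.some(
    (e) =>
      e.action === req.action &&
      req.resource.startsWith(e.resourcePrefix) &&
      req.riskTier <= e.maxRiskTier,
  );
  // Comprehensive telemetry: log every decision, allow and deny alike,
  // in support of the "transparent and verifiable" principle.
  console.log(JSON.stringify({ ...req, allowed, at: new Date().toISOString() }));
  return allowed;
}
```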

This blog post provides additional context on the principles and how they can be applied in real-world environments.

“As agentic AI systems become more embedded in organizations’ operations, we need frameworks to secure them,” said David LaBianca, Project Governing Board co-chair at CoSAI. “These principles provide a technical foundation for organizations to adopt AI responsibly and securely.”

New Defender Frameworks Help Organizations Operationalize AI Security

CoSAI has published another landscape paper, “Preparing Defenders of AI Systems,” developed through our workstream on Preparing Defenders for a Changing Cybersecurity Landscape. The paper provides practical, defender-focused guidance on applying AI security frameworks, prioritizing investments, and enhancing protection strategies for AI systems in real-world environments.

A companion blog post offers additional insights on how this evolving resource bridges high-level frameworks with practical implementation and will continue adapting as AI threats and technologies advance.

“This paper provides defenders with specific guidance on how security frameworks must be adapted to mitigate risks in the AI transformation – pinpointing gaps in current approaches and prioritizing critical investments,” said Josiah Hagen of Trend Micro and Vinay Bansal of Cisco, CoSAI’s Workstream 2 Leads. “As security practices are aligned with AI adoption realities, organizations are empowered to make informed decisions and protect their assets while ensuring innovation doesn’t outpace defenders. This exemplifies CoSAI’s commitment to connecting emerging threats to AI systems with practical security solutions.”

These foundational outputs from CoSAI’s first year set the stage for even greater impact ahead.

Looking Ahead: Building a Secure AI Future

As CoSAI enters its second year, the coalition is positioned to further accelerate AI security innovation through expanded research initiatives, practical tool development, and increased global engagement. With active workstreams producing actionable guidance and a growing community of practitioners, CoSAI continues to drive adoption of secure-by-design AI systems across industries. Its commitment to open source collaboration and standardization remains central to establishing trust in AI technologies. Realizing this vision requires continued collaboration across the AI security community.

Get Involved

Technical contributors, researchers, and organizations are invited to join CoSAI’s open source community and help shape the future of secure AI. To learn more about how to get involved, contact join@oasis-open.org.

One year in: What CoSAI members are saying about our impact

Premier Sponsors: 

EY:
“At the EY organization, we believe it is our responsibility to shape the future and not leave it to chance, so that the next generation inherits a world improved by AI, not made worse by it. It has been a privilege to serve as a founding member of CoSAI, a powerful platform for EY teams to collaborate with global technology leaders in shaping secure and responsible AI. As we enter the exponential age, we remain committed to leading with clarity, confidence and purpose.”
– Sarah Liang, EY Global Responsible AI Leader
Google:
“It’s been great to see CoSAI grow with so many new partners and instrumental frameworks since we first introduced it last year. Google is proud to have been a co-founder for this initiative and we look forward to seeing more work from CoSAI’s workstreams, specifically across critical areas like agentic security.”
– Heather Adkins, VP of security engineering, Google
IBM: 
“From establishing critical work streams to launching innovative initiatives around Security Principles of Agentic AI, AI model signing and attestation, and MCP Security, CoSAI has built real momentum in securing AI at scale—all in just one year. It’s been rewarding to co-chair the Technical Steering Committee and collaborate with this talented, cross-industry community to tackle the evolving challenges of AI security and help shape industry standards.”
– J.R. Rao, IBM Fellow and CTO, Security Research, IBM
NVIDIA: 
“As AI becomes increasingly integral to critical infrastructure and enterprise operations, security must be foundational at every stage of development and deployment. As an industry enabler of AI for both hardware and software, NVIDIA is proud to support CoSAI’s collaborative efforts to advance practical, open standards across industries to democratize and scale AI for the entire market.”
— Daniel Rohrer, Vice President of Software Product Security, NVIDIA
Palo Alto Networks:
“As public and private organizations increasingly integrate advanced and agentic AI models into critical networks, the development of industry-driven AI security frameworks, such as CoSAI’s ‘Principles for Secure-By-Design Agentic Systems,’ will be vital for the security of our digital ecosystem. CoSAI’s initiatives over the past year are commendable, and we eagerly anticipate continuing our contributions to their mission.”
– Munish Khetrapal, VP of Cloud Management, Palo Alto Networks
Trend Micro: 
“As AI continues to reshape how businesses operate, we see tremendous value in collaboration that drives open standards and innovation across the industry. Over the past year, our work with CoSAI has reflected a shared commitment to raising the bar for security. We’re proud to stand alongside CoSAI in helping lead the way to a more secure and resilient digital future.”
– Kevin Simzer, COO at Trend Micro

General Sponsors: 

Adversa AI:
“At Adversa AI – an Agentic AI Security startup – we are proud to be a COSAI sponsor and a co-lead of the Agentic AI Security workstream. As pioneers of AI security and continuous AI red teaming, we believe Agentic AI demands a new security paradigm—one that goes beyond traditional guardrails to test cognition, autonomy, and context. COSAI’s Agentic AI Security Principles mark a pivotal step forward, and we’re committed to shaping the future of secure Agentic AI systems.”
— Alex Polyakov, Co-Founder of Adversa AI
Aim Security:
“CoSAI is building what the industry urgently needs: clarity and collaboration in securing AI systems. As pioneers in AI security, we at Aim are excited to work alongside this diverse community to help define the future of AI defense –  for agentic systems and beyond.”
– Matan Getz, CEO and Co-Founder, Aim Security
Amazon:
“The first year of CoSAI highlights how industry collaboration can advance AI security. As a founding member, Amazon supports the coalition’s mission to develop open standards and frameworks that benefit the entire AI ecosystem. Together, we look forward to strengthening the foundation of secure AI.”
– Matt Saner, Sr. Manager, Security Specialist Solution Architecture; CoSAI Governing Board and Executive Steering Committee member
Anthropic: 
“Safe and secure AI development has been core to our mission from the start. As AI models become more autonomous, CoSAI’s work is increasingly vital for ensuring that AI systems remain secure, trustworthy, and beneficial for humanity. We’re proud to continue this important work alongside other industry leaders.”
– Jason Clinton, Chief Information Security Officer, Anthropic
Cisco:
“As AI systems become more agentic and interconnected, securing them is now more important than ever. During the last year, CoSAI’s workstreams helped empower defenders and innovators alike to advance AI with integrity, trust, and resilience. We’re proud to help shape industry frameworks with this global coalition; uniting leaders across disciplines to safeguard the future of AI. Together, we’re ensuring that security is foundational to every phase of AI’s evolution.”
– Omar Santos, Distinguished Engineer, Advanced AI Research and Development, Security and Trust, Cisco
Cohere: 
“We’re proud to support CoSAI and collaborate with industry peers to ensure AI systems are developed and deployed securely. Over the last year, these collective efforts have built an important foundation that helps drive innovation while protecting against emerging threats. Our shared commitment to secure-by-design principles is increasingly important as AI adoption accelerates.”
– Prutha Parikh, Head of Security, Cohere
Fr0ntierX:
“CoSAI has united a global community around one of the most critical opportunities of our era: advancing safe, responsible, and innovative AI. At Fr0ntierX, we’re proud to contribute to this mission by helping build an infrastructure foundation rooted in trust, interoperability, and privacy. As AI continues to evolve, we remain committed to ensuring that innovation goes hand in hand with alignment and meaningful user control.”
– Jonathan Begg, CEO, Fr0ntierX
GenLab: 
“Over the past year, CoSAI has brought clarity to securing AI across the AI supply chain. The Six Critical Controls give leaders something concrete to work from, and GenLab has been proud to support that work from the start. As AI adoption accelerates, these frameworks are going to be essential—not just for safety, but for trust across sovereign systems.”
– Daniel Riedel, Founder & CEO, GenLab Venture Studios
HiddenLayer:
“As one of the earliest members of CoSAI, HiddenLayer recognized the urgency of securing AI from the outset. CoSAI’s work over the past year has provided much-needed clarity in a rapidly evolving space, offering actionable frameworks that empower organizations to operationalize AI security and governance. Its mission has reinforced our belief that trust must be embedded into AI systems by design. As threats become more advanced and the AI attack surface expands, our continued collaboration with CoSAI remains essential to ensuring that AI innovations are safe and secure.”
— Malcolm Harkins, Chief Security & Trust Officer, HiddenLayer
Intel:
“CoSAI’s first year has been marked by strong momentum—from the release of landscape papers of technical workstreams to the timely initiation of the Agentic AI Systems workstream. These milestones reflect the coalition’s ability to anticipate and act on emerging security needs. At Intel, we’re proud to partner with CoSAI members to ensure that secure-by-design principles are embedded early in the AI system design and deployment.”
– Dhinesh Manoharan, VP Product Security & GM of INT31, Intel
Lasso Security:
“At Lasso, we believe secure-by-design must be the foundation of AI innovation. CoSAI has played a critical role in turning complex AI security challenges into practical, actionable guidance—from agentic systems to defender frameworks. As proud contributors to this effort, we’ve seen firsthand how CoSAI is helping shape a more trustworthy AI future and laying the groundwork for secure, enterprise-grade solutions.”
– Elad Schulman, CEO & Co-Founder, Lasso
Opal Security:
“Opal is a proud early supporter of CoSAI—because Opal helps customers track agents and other NHIs in our platform, we’re deeply invested in securing AI’s future. CoSAI assembles leading minds to set standards for a world in which every employee calls on multiple agents. Opal is honored to contribute to this organization and learn from luminaries at trailblazing member organizations. We look forward to future consensus-building, standards setting, and insights.”
–Umaimah Khan, CEO, Opal Security
Operant:
“In a world where AI is rapidly reshaping everything from infrastructure to decision-making, collaboration is our best defense. I’m proud to have joined the board of Coalition for Secure AI as it brought together industry leaders, researchers, and policymakers under one roof, filling a major gap in the evolution of Responsible AI that is now more urgent than ever. CoSAI represents the kind of cross-industry partnership that will shape how we build a more secure and trustworthy AI ecosystem for everyone. A secure AI future is only possible if we build it together.”
– Priyanka Tembey, CTO, Operant AI
Red Hat:
“Security is the foundation of trustworthy AI, not an afterthought. At Red Hat, we believe security-first principles and processes must be woven into the fabric of every platform from day one, including AI, which the recently released CoSAI Principles for Secure-by-Design Agentic Systems helps to address. We are proud to have been part of CoSAI for the past year and look forward to helping further advance the foundational components of the community.”
– Garth Mollett / Product Security Lead Architect, Red Hat
TrojAI: 
“CoSAI’s collaborative and transparent approach to make AI safer and more secure for everyone closely reflects TrojAI’s own mission. We’re proud to support this important initiative and celebrate its first year of progress. As AI adoption increases, we believe that security will be integral to the sustainable growth of AI. CoSAI’s efforts to develop best practices and unified methodologies are invaluable for secure AI development and deployment.”
– Lee Weiner, CEO, TrojAI
VE3:
“We joined CoSAI right at the beginning because its mission aligned with our belief that AI must be built securely, responsibly, & transparently. CoSAI insights and frameworks like critical controls, have deeply influenced how we approach AI security and governance at VE3. From shaping internal practices to launching our own AI safety, security and governance whitepaper, CoSAI’s work has been instrumental for us. As AI systems grow more complex and autonomous, this partnership becomes more vital and we’re honored to be part of CoSAI’s journey.”
— Manish Garg, Managing Director, VE3
Wiz:
“AI’s growth echoes the early cloud era, when innovation outpaced security and the industry had to close the gap together. At Wiz, we believe that securing AI takes more than technology — it requires collaboration among industry leaders. Over the past year, CoSAI has driven these critical conversations, and Wiz is proud to stand with this coalition as new AI security challenges emerge, from autonomous AI agents to MCP.” 
– Alon Schindel, VP of AI & Threat Research, Wiz

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide.

CoSAI operates under OASIS Open, the international standards and open source consortium. www.coalitionforsecureai.org

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries:
communications@oasis-open.org

The post Coalition for Secure AI Marks First Anniversary with New Principles for Agentic Systems and Defender Frameworks appeared first on OASIS Open.

Wednesday, 16. July 2025

Trust over IP

TOIP EGWG 2025-07-10: Kyle Robinson, Digital Trust Ecosystems. Why they don’t make sense.


Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/TOIP-EGWG-2025-07-10_-Kyle-Robinson-Digital-Trust-Ecosystems_-Why-they-dont-make-sense_.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Excerpt

Learn why Kyle’s practical experience with the Canadian Province of British Columbia’s digital trust initiative has led him to focus on specific, high-impact digital credentials over broad “ecosystems.” Documenting these well fosters trust and enables organic growth and unpredictable efficiencies, naturally building interoperable digital trust networks.

Briefing Document: A Smarter Approach to Digital Credentials and Ecosystems

Date: July 10, 2025

Sources:

“Digital Credentials Presentation” (Presentation Excerpts)
“GMT20250710-145520_Recording.cc.vtt.txt” (Meeting Transcript – VTT)
“GMT20250710-145520_Recording.m4a” (Meeting Audio – M4A)
“GMT20250710-145520_Recording.transcript.vtt.txt” (Meeting Transcript – VTT)
“GMT20250710-145520_RecordingnewChat.txt” (Meeting Chat Log)

Executive Summary

The prevailing approach of focusing on broad “ecosystems” for digital credential development is inefficient and limits opportunities. Instead, a more effective strategy involves starting with specific, high-impact credentials, rigorously documenting them in a trusted and public manner, and then allowing organic growth and adoption to naturally form interoperable networks. The Province of British Columbia (BC) is a leading example of this approach, leveraging foundational identity credentials and promoting their open use, which has led to unpredicted and valuable use cases and significant administrative efficiencies. The discussions highlight the “fractal” nature of ecosystems and the critical role of strong governance and transparency in building trust in digital credentials.

Key Themes and Most Important Ideas/Facts

1. The Flaws of an Ecosystem-First Approach

Too Many Credentials to Tackle: Attempting to develop digital credentials for an entire industry or “ecosystem” simultaneously (e.g., healthcare or finance) is “a massive undertaking, resource-intensive, hard to coordinate, and risks spreading us too thin, leading to weak, untrusted credentials.” (Digital Credentials Presentation). As Kyle Robinson notes, “there’s just too many different types of credentials and different authorities for those credentials to really get a good handle on.” (Meeting Transcript).

Constrains Opportunities: Pre-planning an entire ecosystem creates a rigid scope, preventing the discovery of “unexpected opportunities that could arise organically.” (Digital Credentials Presentation). Eric Drury echoes this, stating that “building a use case for an ecosystem is much more complicated than building a use case for a single credential.” (Meeting Transcript).

2. Recommendation: Start with Specific, High-Impact Credentials

Build Trust and Quality: The core recommendation is to “focus on a few high-impact credentials, like a digital CPA certification. By putting all our effort into making them robust and trustworthy, we create a gold standard that people rely on. Trust drives adoption.” (Digital Credentials Presentation).

Foundational Credentials as Catalysts: BC’s strategy focuses on “foundational credentials,” such as “identity of a person, identity of a business.” These are “foundational credentials which… kind of start right at the core of everything and then other credentials are built on top of those.” (Meeting Transcript). This layering allows credentials like a “licensed doctor” to build upon a verified personal identity.

Open More Doors Through Ripple Effects: Strong, well-executed credentials “create ripple effects.” (Digital Credentials Presentation). BC has observed this with their “lawyer credential,” where “all these what we call verifiers, or relying parties, started popping up, saying, oh, we could use that too, we could use that too, we could use that too.” (Meeting Transcript). This organic growth leads to “unpredicted” opportunities.

3. Credential Documentation: The Foundation of Trust

Trusted Location: “Documentation must reside in a secure, reputable platform to ensure credibility.” (Digital Credentials Presentation). The BC Gov’s Digital Trust Toolkit is highlighted as a model for “transparent, trusted documentation that stakeholders can rely on.” (Digital Credentials Presentation). This toolkit serves as a “source of truth of governance documentation for credentials that are in production.” (Meeting Transcript).

Active Promotion and Public Visibility: Documentation is insufficient without visibility. It “needs to be public, publicly exposed. So that a verifier can look at that document and have enough information to read it to be able to trust it.” (Meeting Transcript). Active promotion through various channels (webinars, industry forums, social media) “builds awareness and encourage adoption among users and organizations.” (Digital Credentials Presentation).

Transparency of Issuance and Revocation: Trust extends beyond the technical aspects of a credential; it requires confidence in the “issuance process that the issuing authority goes through to be able to issue that to the right person, with the right attribute information.” (Meeting Transcript). Furthermore, visibility into “revocation status” is critical. If a ledger were to disappear, the ability to check revocation status would be lost, underscoring the need for robust infrastructure.

4. The Nature of Ecosystems and Organic Growth

Fractal Nature of Ecosystems: The discussion introduces the concept of ecosystems as “fractal.” As Carly Huitema explains, “You can zoom in all the way into grains of dirt, and there’s an ecosystem there. And then you can zoom all the way out to the planet scale ecosystem.” (Meeting Transcript). This implies that “person is a microcosm” that can be “observed in other ecosystems,” with “boundaries are always fuzzy.” (Meeting Transcript).

Market-Driven Adoption: The “overall drive of all of this is driven by those relying parties and verifiers.” When they “see value in doing something with credentials they will implement it, and they will tell their friends. And that’s sort of how you can see that growth and that adoption happening.” (Meeting Transcript). This organic growth is preferred over top-down, pre-defined ecosystem planning.

Savings and Efficiency as Drivers: Digital credentials offer tangible benefits, such as saving “a ton of administrative time” for verifiers. For example, the City of Vancouver is realizing benefits where “nothing needs to be reviewed because the technology’s already trusting the stuff that the province is producing. So they don’t have to, like, have somebody manually reveal a form.” (Meeting Transcript).

5. Trust Beyond BC: Scaling and Interoperability Challenges

Establishing “Legitimacy”: A key challenge is distinguishing “legitimate” from “non-legitimate” credentials beyond the issuing authority. BC addresses this by publishing the issuer DID, schema ID, and credential definition ID on the Candy ledger. This allows relying parties to cryptographically verify the origin.

Cross-Jurisdictional Interoperability: The question of how this scales beyond BC is raised, particularly when different jurisdictions (e.g., Alberta, Rhode Island, Utah) might make different technical choices for their credentials. This mirrors the non-digital world, where regulations define accepted IDs.

Role of Trust Registries and Standards Bodies: The idea of “trust registries” is introduced as a potential solution for looking up legitimate issuers and their governance frameworks across jurisdictions. This could be driven by “standards bodies,” which could publish “trust registries published on their websites, or in some type of technology to say, hey, this standards body here, these are all of the organizations that we have audited and are following.” (Meeting Transcript).

Government’s Evolving Role: While government’s primary role remains issuing foundational IDs and enforcing laws, its new role in “building and supplying software to citizens” (e.g., the BC Wallet app) is seen as a way to “help adoption.” (Meeting Transcript). The future may see OS manufacturers playing a larger role in built-in wallets.

Conclusion

The discussion reinforces that while the concept of a broad “ecosystem” might be a useful descriptive tool, the practical and successful implementation of digital credentials should pivot towards a credential-first approach. By focusing on building trust and quality in individual, high-impact credentials, making their governance transparent and public, and fostering organic adoption through demonstrated value, digital trust networks can emerge and grow naturally, leading to widespread benefits. The BC government’s experience serves as a compelling case study for this evolving strategy.
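As an illustrative aside to the trust-registry discussion in section 5, here is a minimal sketch of the relying-party check: look up the presented credential's issuer DID and schema in a published registry before doing cryptographic verification. All shapes and identifiers below are invented for illustration:

```typescript
// Shapes and values are hypothetical; registries vary by jurisdiction.
interface TrustRegistry {
  issuers: Record<string, { governanceUrl: string; schemas: string[] }>;
}

interface PresentedCredential {
  issuerDid: string;
  schemaId: string;
}

// "Legitimate" here means: a known issuer whose published governance
// covers this schema. Cryptographic proof checking happens separately,
// against the issuer keys and revocation data published on the ledger.
function isLegitimate(registry: TrustRegistry, cred: PresentedCredential): boolean {
  const issuer = registry.issuers[cred.issuerDid];
  return issuer !== undefined && issuer.schemas.includes(cred.schemaId);
}

// Example entry modeled on the BC discussion (identifiers invented):
const registry: TrustRegistry = {
  issuers: {
    "did:indy:candy:ABC123": {
      governanceUrl: "https://digital.gov.bc.ca/digital-trust/",
      schemas: ["lawyer-credential:1.0"],
    },
  },
};
```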

For more details, including the slides,  meeting recording and transcript, please see our wiki 2025-07-10 Kyle Robinson & Digital Trust Ecosystems. Why they don’t make sense.

https://www.linkedin.com/in/kylegrobinson/

The post TOIP EGWG 2025-07-10: Kyle Robinson, Digital Trust Ecosystems. Why they don’t make sense. appeared first on Trust Over IP.


Project VRM

The Cluetrain Will Run from Customers to Companies


For the good of both.

Customers need privacy, respect, and the ability to provide good and helpful information to the companies they deal with. The good clues customers bring can include far more than what companies get today from their CRM systems and from surveillance of customer activities. For example, market intelligence that flows both ways can happen on a massive scale.

But only if customers set the terms.

Now they can, using a new standard from the IEEE called P7012, aka MyTerms. It governs machine readability of personal privacy terms. These are terms that customers proffer as first parties, and companies agree to as second parties. Lots of business can be built on top of those terms, which at the ground level start with service provision without surveillance or unwanted data sharing by the company with other parties. New agreements can be made on top of that, but MyTerms are where genuine and trusting (rather than today’s coerced and one-sided) relationships can be built.

When companies are open to MyTerms agreements, they don’t need cookie notices. Nor do they need 10,000-word terms and conditions or privacy policies because they’ll have contractual agreements with customers that work for both sides.

On top of that foundation, real relationships can be built by VRM systems on the customers’ side and CRM systems on the corporate side. Both can also use AI agents: personal AI for customers and corporate AI for companies. Massive businesses can grow to supply tools and services on both sides of those new relationships. These are businesses that can only grow atop agreements that customers bring to the table, and at scale across all the companies they engage.

This is the kind of thing that four guys (me included)† had in mind when they posted The Cluetrain Manifesto* on the Web in April 1999. A book version of the manifesto came out in early 2000 and became a business bestseller that still sells in nine languages. Above the manifesto’s 95 theses is this master clue**, written by Christopher Locke:

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

MyTerms is the only way we (who are not seats or eyeballs or end users or consumers) finally have reach that exceeds corporate grasp, so companies can finally deal with the kind of personal agency that the Internet promised in the first place.

The MyTerms standard requires that a roster of possible agreements be posted at a disinterested nonprofit.  The individual chooses one, the company agrees to it (or not). Both sides keep an identical record of the agreement.

The first roster will be at Customer Commons, which is ProjectVRM’s 501(c)3 nonprofit spinoff. It was created to do for personal privacy terms what Creative Commons does for personal copyright licenses. (It was Customer Commons, aka CuCo, that the IEEE approached with the idea of creating the MyTerms standard.)

Work on MyTerms started in 2017 and is in the final stages of the IEEE approval process. While it is due to be published early next year, what it specifies is simple:

Individuals can choose a term posted at Customer Commons or the equivalent
Companies can agree to the individual’s choice or not
The decision can be recorded identically by both sides
Data about the decision can be recorded by both sides and kept for further reference, auditing, or dispute resolution
Both sides can know and display the state of agreement or absence of agreement (for example, the state of a relationship, should one come to exist)

MyTerms is not a technical spec, so implementations are open to whatever. Development on any of those can start now. So can work in any of the areas listed above.
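As one example of what an implementation could do with the “recorded identically by both sides” requirement, each side might store a canonicalized record and compare digests. A minimal sketch, assuming a hypothetical record shape (the standard does not prescribe any encoding):

```typescript
import { createHash } from "node:crypto";

// Hypothetical record shape; the standard requires identical records on
// both sides but does not prescribe this encoding.
interface MyTermsAgreement {
  termId: string;    // e.g. "CC-BASE" from the Customer Commons roster
  personId: string;  // first party (the individual)
  companyId: string; // second party (the company)
  agreed: boolean;
  timestamp: string; // ISO 8601
}

// Canonicalize (fixed key order) and hash, so either side can later
// prove it holds the same record for auditing or dispute resolution.
function agreementDigest(a: MyTermsAgreement): string {
  const keys = ["termId", "personId", "companyId", "agreed", "timestamp"];
  const canonical = JSON.stringify(a, keys);
  return createHash("sha256").update(canonical).digest("hex");
}
```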

The biggest thing MyTerms does for customers—and people just using free services—is getting rid of cookie notices, which are massively annoying and not worth the pixels they are printed on.  If a company really does care about personal privacy, it’ll respect personal privacy requirements. This is how things work in the natural world, where tracking people like marked animals has been morally wrong for millennia. In the digital world, however, agreements need to be explicit, so programming and services can be based on them. MyTerms does that.

For business, MyTerms has lots of advantages:

Reduced or eliminated compliance risk
Competitive differentiation
Lower customer churn
Grounds for real rather than coerced relationships (CRM+VRM)
Grounds for better signaling (clues!) going in both directions
Reduced or eliminated guesswork about what customers want, how they use products and services, and how both might be improved

Lawyers get a new market for services on both the buy and sell sides of the marketplace. Companies in the CMP (consent management platform) business (e.g. Admiral and OneTrust) have something new and better to sell.

Lawmakers and Regulators can start looking at the Net and the Web as places where freedom of contract prevails, and contracts of adhesion (such as what you “agree” to with cookie notices) are obsolesced.

Developers can have a field day (or decade). Look for these categories to emerge:

Agreement Management Platforms – Migrate from today’s much-hated consent management platforms (hello OneTrust, Admiral, and the rest).
Vendor Relationship Management (VRM) tools and services – Fill the vacuum that’s been there since the Web got real in 1995.
Customer Relationship Management (CRM) – Make its middle name finally mean something.
Customer Data Return (CDR) – Give, sell back, or share with customers the data you’ve been gathering without their permission since forever. Talking here to car companies, TV makers, app makers, and every other technology product with spyware onboard for reporting personal activity to parties unknown.
Platform Relief – Free customers from the walled gardens of Apple, Microsoft, Amazon, and every other maker of hardware and software that currently bears the full burden of providing personal privacy to customers and users. Those companies can also embrace and help implement MyTerms for both sides of the marketplace.
Personal AI (pAI) – Till and plant a vast new greenfield for countless companies, old and new. This includes Apple (which can make Apple Intelligence truly “AI for the rest of us” rather than Siri in AI drag), Mozilla (with its Business Accelerator for personal AI), Kwaai (for open source personal AI), and everyone else who wants to jump on the train.
Big meshes of agents, such as what these developers are all working on.

In the marketplace, we can start to see all these things:

Predictions made by The Intention Economy: When Customers Take Charge finally come true.
New dances between customers and companies, demand and supply. (“The Dance” is a closing chapter of The Intention Economy.)
New commercial ecosystems can grow around a richer flow of clues in both directions, based on shared interest and trust between demand and supply.
Surveillance capitalism will be obsolesced — and replaced by an economy aligned with personal agency and respect from customers’ corporate partners.
A new distributed P2P fabric of personally secure and shared data processing and storage — see what KwaaiNet + Verida, for example, might do together.

All aboard!

†Speaking for myself in this post. I invite the other two surviving co-authors to weigh in if they like.

*At this writing, the Cluetrain website, along with many others at its host, is offline while being cured of an infection.  To be clear, however, it will be back on the Web. Meanwhile, I’m linking to a snapshot of the site in the Internet Archive—a service for which the world should be massively grateful.

**The thesis that did the most to popularize Cluetrain was “Markets are conversations,” which was at the top of Cluetrain’s ninety-five theses. Imagining that this thesis was just for them, marketers everywhere saw marketing, rather than markets, as “conversations.” Besides misunderstanding what Cluetrain meant by conversation (that customers and companies should both have equal and reciprocal agency, and engage in human ways), marketing gave us “conversational” versions of itself that were mostly annoying. And now (thank you, marketing), every damn topic is also a fucking “conversation”—the “climate conversation,” the “gender conversation,” the “conversation about data ownership.” I suspect that making “conversation” a synonym for “topic” was also a step toward making every piece of propaganda into a “narrative.” But I digress. Stop reading here and scroll back to read the case for MyTerms. And please, hope that it also doesn’t become woefully misunderstood.

Tuesday, 15. July 2025

FIDO Alliance

White Paper: Addressing Cybersecurity Challenges in the Automotive Industry


Abstract

As the automotive industry transitions toward software-defined vehicles, autonomous technologies, and connected services, cybersecurity has become a critical concern. This white paper from the FIDO Alliance outlines key challenges and emerging solutions for securing next-generation vehicles. It examines global regulatory frameworks such as UN R155, UN R156, and ISO/SAE 21434 and presents the FIDO Alliance’s standards for passwordless authentication, secure device onboarding, and biometric certification.

Audience

This paper addresses the automotive industry. The audience includes automotive system engineers, automotive IVI product and development managers, automotive networking and in-vehicle cyber security engineers, product managers for in-vehicle services for applications such as purchasing, IT system cyber security managers, engineers seeking to support global regulatory frameworks such as UN R155/R156 and ISO/SAE 21434, manufacturing system engineers, and car-to-cloud connectivity engineers.

Download the White Paper

1. Introduction

The automotive industry is undergoing transformative changes, including the shift to software-defined and autonomous vehicles, advanced IT-like architectures, over-the-air (OTA) updates, and the rise of in-vehicle commerce. While these changes offer new revenue opportunities, they also bring significant cybersecurity threats.

Global cybersecurity legislation and standards, such as UN Regulation 155, UN Regulation 156, and ISO/SAE 21434, aim to protect vehicles from emerging threats. The FIDO Alliance plays a crucial role by providing standards for secure authentication, device onboarding, and biometrics certification.

Utilizing standards helps automotive companies ensure consistent security, leverage collective expertise, and avoid proprietary solutions that have the potential to stymie new markets and revenue. FIDO standards apply to various automotive applications, including consumer services, in-vehicle solutions, workforce authentication, and manufacturing, ensuring robust cybersecurity across the industry.

This paper provides companies within the automotive ecosystem with insight into the standards and services the FIDO Alliance offers, together with a review of current and future use cases.

The FIDO Alliance is seeking feedback and partnership with industry experts to help ensure that FIDO’s programs are fit for purpose and successfully help companies meet cybersecurity needs, improve driver experiences, and tap into new opportunities.

2. Evolution of the automotive industry

The automotive industry has 140 years of history and is currently going through changes that affect all aspects of the industry:

- Electrification and sustainability
- Software-defined vehicles and connectivity
- Autonomous and assisted driving
- Shifting business models: Mobility-as-a-Service (MaaS) and direct sales
- Supply chain disruptions and geopolitical risks
- New revenue streams: data monetization and services
- Rollout of EV charging infrastructure and its energy grid impacts
- Changing consumer expectations and digital experiences

These changes bring potential upside to manufacturers in terms of new revenue opportunities and improved vehicles, but they also introduce considerable cyber threats.

Vehicles have evolved from isolated mechanical systems into interconnected cyber-physical platforms (often created by various entities) that integrate complex software, hardware, and communication networks. Manufacturers implement these systems to provide end users with a better vehicle and an enhanced driving experience, but they also bring an increased risk of cyber threats associated with new “attack surfaces”. These potential threats come in many forms, from malicious hackers to state-funded actors. To minimize these threats, it is now a fundamental priority for manufacturers, their suppliers, regulators, and other industry stakeholders to focus on cybersecurity.

3. Meet the challenges and seize the opportunity

Automotive cybersecurity professionals have a massive challenge in front of them. On one side they need to react to the rise in threats and account for the associated legislation that has been developed to protect consumers. On the other side they need to be open to supporting new business models such as in-vehicle commerce, value added vehicle features such as subscription services, as well as additional cybersecurity for factories and offices. While there is no one simple solution to meet all of these needs, utilizing standards and certification programs from organizations such as the FIDO Alliance can help greatly.

4. Automotive cybersecurity and global legislation

National governments and international organizations have enacted regulations that require stringent cybersecurity measures throughout the automotive lifecycle, including design, operation, and even end-of-life. These frameworks aim to shield vehicles from emerging threats and establish a baseline for safety and trust across the automotive ecosystem. Major worldwide examples include:

- United Nations Regulation 155 and United Nations Regulation 156: mandate that vehicles incorporate a Cybersecurity Management System (CSMS) and a Software Update Management System (SUMS)
- ISO/SAE 21434: provides the foundation for global automotive cybersecurity engineering, outlining processes for managing cyber risks throughout the entire vehicle lifecycle
- China’s GB 44495-2024 and GB 44496-2024: regulate the Cyber Security Management System (CSMS) and govern secure software updates in a granular fashion
- India’s AIS 189 and AIS 190: align with UN R155 and R156, to regulate the cybersecurity of connected vehicles
- United States: publication of cybersecurity best practices by the National Highway Traffic Safety Administration (NHTSA) that emphasize secure vehicle development processes, incident response plans, and continuous risk monitoring

Refer to Appendix A to learn more about these standards.

5. The FIDO Alliance and FIDO standards

The FIDO Alliance is an open industry association with a focused mission: reduce the world’s reliance on passwords. To accomplish this, the FIDO Alliance promotes the development of, use of, and compliance with standards for user authentication and device onboarding.

The FIDO Alliance:

- Develops technical specifications that define an open, scalable, interoperable set of mechanisms to reduce reliance on passwords for authentication of both users and devices.
- Tracks the evolution of global regulations and evolves its own standards to help industries satisfy those regulations in a harmonized way, reducing their compliance burdens.
- Operates industry certification programs to ensure successful global adoption of these specifications.
- Provides education and market adoption programs to promote the global use of FIDO.
- Submits mature technical specifications to recognized standards development organizations for formal standardization.

The FIDO Alliance has over 300 members worldwide, with representation from leaders in IT, silicon, payments, and consumer services and features a Board of Directors that includes representatives from Apple, Visa, Infineon, Microsoft, Dell, Amazon, and Google. The Alliance also has a variety of active working groups where like-minded members can develop and advance technical work areas and coordinate on market-specific requirements.

The FIDO Alliance is planning to launch an automotive working group, where leaders in this sector can identify and collaborate on technical, business, and market requirements. To learn more, use the Contact Us form at https://fidoalliance.org/contact/ or email info@fidoalliance.org.

6. FIDO for automotive cybersecurity compliance

Meeting the demands of the primary automotive cybersecurity standard ISO/SAE 21434 and subsequently the most prominent regulation, UN R155, hinges on strong identity management and secure device onboarding. While these standards don’t prescribe FIDO protocols per se, they outline key principles where FIDO offers tangible benefits.

ISO/SAE 21434, particularly Clauses 8 and 9 concerning risk assessment and threat mitigation, calls for strategies to prevent unauthorized access. FIDO’s passwordless authentication directly addresses this by eliminating weak credentials and reducing risks from phishing and credential stuffing, common threats to connected vehicle systems. Additionally, Clause 10’s focus on secure software deployment aligns with FIDO Device Onboard (FDO), ensuring only authenticated devices join the ecosystem, mitigating supply chain attacks and unauthorized software injections. This direct mapping of FIDO’s capabilities to specific clauses demonstrates its value in achieving compliance.

Beyond these founding standards, FIDO’s approach has broad applicability to emerging regulations, providing OEMs with a pathway to meeting global compliance demands and bolstering cybersecurity resilience across their connected car ecosystem. Some examples include China’s GB 44495-2024 and India’s AIS 189, which call for regional automotive cybersecurity standards and reinforce the need for features such as secure authentication in the software-defined vehicle (SDV) era. China’s GB regulation, similar to UN R155, emphasizes authenticity and integrity in remote updates, where FIDO’s passkey-based authentication provides a compliant approach to verifying access. India’s regulations, currently still in draft, align with UN R155, highlighting the importance of securing vehicle-to-cloud communications and identity management.

7. Overview of emerging use cases where FIDO standards may apply

FIDO standards can be applied to a wide range of scenarios. These can be customer-facing, embedded within the vehicle, or as part of the manufacturer’s IT infrastructure.

A high-level overview of some of these scenarios includes, but is not limited to:

- In-vehicle commerce: This includes payments using credentials stored and managed in the vehicle to enable convenient fueling, EV charging, parking reservations, car washes, or even in-vehicle marketplaces managed by the car manufacturer. Implementation of passkeys to authenticate the associated car user and biometric component certification are most relevant to these use cases.
- Authentication to personalized services: These applications include easy access to customized automotive settings (for example, headrest and seat adjustments) as well as to informational and entertainment content.
- In-vehicle solutions: This segment includes applications such as car-to-cloud connectivity and onboarding of ECUs and zone controllers within the vehicle. Implementation of FIDO Device Onboard (FDO) is most applicable to these applications.
- Workforce authentication: These applications include controlling workforce access to IT systems whether at a development office, manufacturing site, or dealership. Implementation of passkeys and FIDO USB authentication keys are most applicable to these applications.
- Manufacturing: Modern manufacturing facilities are moving towards software-defined control, AI, and robotic systems. The secure deployment of these solutions is often time-consuming and expensive. Implementation of FIDO Device Onboard (FDO) can accelerate deployments and increase security.

8. FIDO Alliance technology overview

In the same way that Ethernet started as an IT networking solution, FIDO standards were not specifically created for automotive applications. However, they are highly relevant in modern vehicles where robust cybersecurity is a critical, foundational element rather than just a desirable feature. FIDO standards, such as passkeys, are being used as is in the automotive world today.

The FIDO Alliance technology portfolio for automotive applications can be broadly grouped into three main areas:

Passkeys: The FIDO Alliance is transforming authentication through open standards for phishing-resistant sign-ins using passkeys. Passkeys are more secure than passwords and SMS OTPs, easier for consumers and employees to use, and simpler for service providers to deploy and manage. Automotive manufacturers leverage passkeys for a wide variety of use cases.

Device Onboarding: The FIDO Alliance establishes standards for secure device onboarding (FDO) to ensure the safety and efficiency of connected devices in segments such as industrial and enterprise. In the automotive sector, manufacturers can apply this standard to the connections between Electronic Control Units (ECUs) and zone controllers or connections between the vehicle itself and the cloud services that facilitate over-the-air software updates. This standard has been adopted by Microsoft, Dell, ExxonMobil, Red Hat and others.

Biometrics certification: The FIDO Alliance offers a certification program tailored to specific applications that uses independent test labs to measure performance of biometric sensors (such as iris or fingerprint sensors). Biometric sensors are becoming an increasingly important component of vehicles. Typical use cases might be to automatically configure the driver’s seat position or as part of a payment system. In these two examples the definition of “good technical performance” can differ greatly. Samsung, ELAN Microelectronics, Thales, Qualcomm, Mitek, iProov, and others have had biometric components certified by FIDO Alliance.

9. FIDO Alliance technology deep dive

To better understand how automotive manufacturers and the FIDO Alliance can work together, this section discusses current FIDO technologies and how they might integrate with automotive applications.

9.1 Passkeys and user authentication

A passkey is a FIDO authentication credential, based on FIDO standards, that allows a user to sign in to apps and websites with the same steps they use to unlock their device (biometrics, PIN, or pattern). With passkeys, users no longer need to enter usernames and passwords or additional factors.

Passkeys are the signature implementation of FIDO authentication standards, and they offer secure yet simplified sign-in to a wide range of services. Passkeys are supported by all major device operating systems and browsers and have been utilized by many industry leaders including Apple, Google, Microsoft, Samsung, Amazon, Walmart, PayPal, and Visa.

The following diagram illustrates how passkeys can be used for in-car applications, such as when a driver signs in to a cloud service.

Figure 1: Sample passkey usage in automotive

Passkeys rely on a technology known as public key cryptography (PKC), in which a virtual key pair is created, one private and the other public. For each private key (stored on the user’s device) there exists a matching public key (stored on the server) that is used to check signatures created with the private key.

In the diagram, a user (the driver in this case) first registers with a cloud service such as a payment service. During the registration process, a private and public cryptographic key pair is created by the FIDO Authenticator. The private key is stored securely in the infotainment system of the vehicle and is associated with that driver. The public key is stored on the cloud of the service provider.

When the driver wants to sign in to the service, a request is sent from the vehicle to the cloud service. The service then sends an authentication challenge to the vehicle. The challenge can only be answered by the authenticator that holds the matching private key. To make sure the request is genuine, the driver is asked to confirm that they want to sign in. This is typically achieved via a biometric sensor such as fingerprint or face. Once this verification is complete, the user gains access to the service. Several FIDO hardware and software components are used for this process.
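As an illustration of this flow, here is a minimal Python sketch using plain ECDSA from the cryptography package. Real FIDO authentication adds attestation, origin binding, and local user verification; everything below is a simplified model, not a FIDO implementation.

```python
# Simplified model of the passkey registration and sign-in flow described
# above. Real FIDO adds attestation, origin binding, and user verification.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the in-vehicle authenticator generates a key pair. The
# private key never leaves the vehicle; the public key goes to the service.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Sign-in: the cloud service issues a random challenge...
challenge = os.urandom(32)

# ...and the authenticator signs it after local user verification
# (for example, a fingerprint match on an in-car sensor).
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies the signature with the stored public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Driver authenticated")
except InvalidSignature:
    print("Authentication failed")
```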

9.2 Passkey components

Three FIDO Certified components are used in the example:

FIDO authenticator

A FIDO authenticator is a software component or a piece of hardware that can perform FIDO authentication to verify possession and/or confirm user identity. In the example, the FIDO authenticator likely resides in the car infotainment system.

FIDO server

The server provides an application with a programming interface that can be leveraged with a FIDO Certified client to perform strong authentication. The server sits inside the cloud application.

Biometric components

Biometric components can identify an individual and are often used to complement a FIDO authenticator. These sensors can take multiple forms, including fingerprint, iris, and face. The FIDO Alliance certifies the efficacy of biometric subsystems, including end-to-end performance, differential assessment of demographic groups, and presentation attack detection (PAD).

Although the example is an in-vehicle use case, the same passkey technology can be applied inside a factory, development center, or dealership to ensure that systems are resilient to phishing attacks or other common password attack vectors.

9.3 In-vehicle biometrics

Installation of biometric components in vehicles is expected to increase rapidly over time. The performance needs of these components will vary by sensor type and target application. Today, the FIDO Alliance offers a comprehensive independent certification program for biometric components such as fingerprint and iris sensors. By specifying in a request for quote (RFQ) that products should be FIDO Certified, automotive manufacturers can simplify selection of sensors. For more information on FIDO Certification, visit https://fidoalliance.org/certification/.

9.4 FIDO Device Onboard (FDO)

When a computer device (such as an ECU) first connects to its management platform (the zone controller), it needs to be onboarded and provisioned. A parallel example might be the connection between a vehicle and its cloud. FIDO Device Onboard (FDO) was developed by FIDO Alliance members to meet the automation and high security requirements of such onboarding experiences.

With FDO, a device is first connected to the (wired or wireless) network and then powered up. The device then automatically and securely onboards to the management platform. FDO is based on a zero-trust architecture and therefore offers a high level of security as both the device and the management platform must cryptographically authenticate themselves to each other. FDO also provides resilience to supply chain attacks.

A number of leading technology providers have demonstrated implementations of FDO solutions including Dell, Microsoft, Red Hat, Intel, and ASRock.
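The following is a highly simplified sketch of the mutual cryptographic authentication idea described above, written in Python with the cryptography package. The real FDO protocol involves ownership vouchers, a rendezvous service, and standardized message flows; the key handling and nonce exchange here are illustrative reductions, not the FDO wire protocol.

```python
# Illustrative reduction of FDO's mutual authentication: both the device
# (e.g., an ECU) and the management platform must prove possession of their
# private keys before onboarding proceeds. Not the actual FDO wire protocol.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

device_key = ec.generate_private_key(ec.SECP256R1())    # provisioned at manufacture
platform_key = ec.generate_private_key(ec.SECP256R1())  # the management platform

# Each side challenges the other with a fresh nonce.
device_nonce, platform_nonce = os.urandom(16), os.urandom(16)
platform_proof = platform_key.sign(device_nonce, ec.ECDSA(hashes.SHA256()))
device_proof = device_key.sign(platform_nonce, ec.ECDSA(hashes.SHA256()))

# Onboarding (credential provisioning) proceeds only if both checks pass.
try:
    platform_key.public_key().verify(
        platform_proof, device_nonce, ec.ECDSA(hashes.SHA256())
    )
    device_key.public_key().verify(
        device_proof, platform_nonce, ec.ECDSA(hashes.SHA256())
    )
    print("Mutual authentication succeeded; provision the device")
except InvalidSignature:
    print("Onboarding aborted")
```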

10. FIDO technology use cases deep dive

The FIDO Alliance has identified several use cases where FIDO technology can be applied to support the automotive industry. This section discusses possible use cases with the hopes of fostering further conversations.

10.1 Consumer use cases

Historically, many cybersecurity applications have been “behind the scenes”. In modern vehicles there is an increasing number of new applications that directly impact the driver and passenger in-vehicle experience and open new revenue opportunities for manufacturers. One such area is the emergence of in-vehicle commerce.

Several factors are driving in-vehicle commerce:

- Technological advancements: Software Defined Vehicles (SDVs) allow for continuous updates and new functionality without hardware modifications, and autonomous driving introduces a new use case for vehicles as productivity or leisure spaces.
- Changing consumer expectations: Consumers demand experiences in their vehicles akin to those offered by their smartphones and other digital devices.
- Revenue opportunities: By acting as platforms for digital services, vehicles open new revenue streams for car manufacturers and service providers.

10.2 Identity verification, authentication, and authorization

The growing connectivity and services associated with modern vehicles bring about new requirements for identity verification, authentication, and authorization.

- Identity verification: The process of confirming a person’s identity. It can involve comparing information provided by a person with records in a database or with the person’s physical documents, such as a driver’s license.
- Authentication: Confirms that a person is who they say they are when attempting to sign in to systems, services, and resources.
- Authorization: The step after authentication that determines user access in terms of accessing data or performing actions.

Unlike other computing devices, such as smartphones and wearables, vehicles often have multiple users including family members, friends, co-workers, or renters. Each user may need access to services or to perform transactions tied to their unique identities and credentials. Therefore, vehicular computing resources must be cyber secure and capable of managing secure access and authentication for a diverse user base, including third-party service providers.

10.3 In-vehicle commerce and authentication

Commerce services in vehicles are closely tied to payments, making strong and user-friendly authentication essential. Drivers must trust that transactions are secure, manufacturers aim to minimize liability for unauthorized payments, and financial institutions require robust, standards-compliant authentication mechanisms. In addition, regulatory frameworks, such as Europe’s Payment Services Directive 2 (PSD2), mandate strong customer authentication (SCA) for cardholder-initiated transactions.

SCA requires a combination of at least two out of three factors:

- Possession (something the user has, for example, a key, phone, or vehicle)
- Inherence (something the user is, for example, biometrics like fingerprint or facial recognition)
- Knowledge (something the user knows, for example, a PIN or password)

If the passkey authenticator is not natively integrated into the vehicle, authentication must be implemented using alternative multi-factor configurations. This can be achieved through software-based approaches, such as combining a PIN (knowledge) with the vehicle as a possession factor, or through hardware-based methods, such as biometric authentication (inherence) via fingerprint sensors or facial recognition, again anchored by the vehicle as the possession factor.
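As a toy illustration of the two-of-three rule described above, the sketch below checks whether a set of presented factors satisfies SCA. The category names and function are hypothetical, and real SCA evaluation involves much more (for example, dynamic linking and exemption handling).

```python
# Toy check of the PSD2 "two of three" SCA rule described above. The names
# and function are hypothetical; real SCA evaluation is far more involved.
SCA_CATEGORIES = {"possession", "inherence", "knowledge"}

def satisfies_sca(presented_factors: set) -> bool:
    """True if at least two distinct SCA factor categories are present."""
    return len(presented_factors & SCA_CATEGORIES) >= 2

# The vehicle (possession) plus a PIN (knowledge): compliant.
print(satisfies_sca({"possession", "knowledge"}))  # True

# A physical key alone satisfies only possession: not compliant.
print(satisfies_sca({"possession"}))               # False
```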

In-vehicle commerce can be broadly categorized into three main areas:

On-demand features

With on-demand features, vehicles now allow users to activate specific functionalities based on their needs. This includes advanced driver-assistance systems, comfort features like heated seats, and performance upgrades. On-demand features can be offered through flexible subscription models or pay-per-use systems. These features enhance customer satisfaction and create additional revenue streams for manufacturers.

Vehicle-related services

Vehicle-related services are seamlessly integrated services that include fueling, EV charging, parking reservations and payments, car washes, and toll payments. To maximize user convenience, the vehicle acts as a payment hub without reliance on a smartphone.

Convenience features

With implementation of convenience features such as shopping, entertainment, education, and even remote work functionalities, the vehicle becomes an extension of the user’s digital ecosystem. Examples include ordering coffee or groceries on the go, streaming movies, or attending virtual meetings during commutes.

These categories illustrate that vehicles are no longer just modes of transportation but platforms that enable various service providers to engage with drivers and passengers.

10.4 Driver ID Verification for vehicle access control

Vehicle access requires a high level of authentication and is well suited to biometric sensors.

- Keyless entry and ignition: Biometric systems like fingerprint and facial recognition can replace traditional keys to provide secure, biometric-based authentication for vehicle access and ignition.
- Anti-theft measures: Vehicles can utilize biometric authentication to prevent unauthorized usage or theft, including carjacking.
- Vehicle and OEM services: Vehicles can use biometric authentication as the first step in assessing a driver’s rights and privileges when determining how vehicle services can be accessed.

10.5 Personalization, fleet management and autonomous vehicles

Vehicles are often shared, and the ability to automatically adapt to a specific driver is an important capability. The criteria and thresholds for identification and authentication vary greatly depending on the specific application: for example, adjusting a driver’s seat versus authorizing a passenger in an autonomous vehicle.

10.5.1 Personalization

- Adaptive in-car settings: Biometric recognition can identify drivers or passengers in order to adjust seat positions, climate controls, infotainment preferences, and navigation routes according to stored profiles.
- Adaptive usage-based services: By seamlessly authenticating the driver, the automaker can provide use-based insurance or leasing and financing options for personal and commercial scenarios.

Fleet management

- Shared vehicles and fleets: Biometric-enabled processes ensure smooth transitions between users in car-sharing or fleet systems, loading personal settings for each verified driver.
- Compliance tracking: Digital wallets can hold compliance documents (for example, licenses and vehicle inspection reports) to reduce paperwork and enhance audit readiness by asserting compliance attributes to authorized users.

10.5.2 Autonomous vehicles

Passenger authentication and ID verification: In self-driving cars, biometric systems authenticate passengers to ensure authorized use and personalized experiences.

Why key possession is not sufficient authentication

There are several reasons why a physical key is not a sufficient form of user authentication.

- A physical key verifies access to the vehicle but does not confirm the identity of the individual using it. In scenarios such as ridesharing, fleet management, or multi-user vehicles, relying solely on key possession fails to distinguish authorized users from unauthorized users.
- As discussed earlier, for payment use cases there is a need in some markets to comply with SCA regulations. A key only satisfies the possession factor and therefore does not meet the SCA requirements for secure payments.
- A vehicle key can be lost, stolen, or duplicated, allowing unauthorized individuals to gain access. Without additional layers of authentication, transactions made in the vehicle could be fraudulent.
- Multi-party and platform complexity: In-car commerce involves multiple stakeholders such as Original Equipment Manufacturers (OEMs), service providers, and users. Authentication must ensure that the user is authorized to transact across all platforms and services, necessitating identity verification beyond simple possession.

10.6 Electronic systems and manufacturing use cases

10.6.1 In-vehicle ECU, zone controller, and compute onboarding

As the compute level rises within vehicles, the need for efficient and fast communication becomes increasingly important. In response to this need, cars are increasingly moving to an IT-centric architecture with Ethernet becoming the networking technology of choice to link zone controllers and ECUs inside a vehicle.

In addition to high speed and secure communication, there is a need to ensure that both the device (ECU) and the management platform (Zone controller) are cryptographically authenticated against each other. Although initially developed for IoT and IT systems, the FIDO Alliance team believes that FIDO Device Onboard (FDO) can be a fast and secure way to automate the onboarding process. As FDO is an open standard, automotive manufacturers can benefit from economies of scale savings versus paying for the development and maintenance of proprietary solutions.

In addition to speed and security, FDO also provides resilience to supply chain attacks and grey market counterfeits.

10.6.2 Car to cloud onboarding

As the complexity of car features grows and autonomous driving technology increases, a modern car is essentially a computer on wheels that requires a vast amount of software for all functions to operate.

Most sources agree that a typical modern car runs software comprising around 100 million lines of code. Given this complexity, the idea that vehicle software can be frozen at vehicle product launch is no longer realistic.

Software updates are now a mandatory feature of modern automobiles and a secure and efficient way of connecting the vehicle to the manufacturer’s cloud is essential.

FDO provides a secure and fast method for vehicles to onboard to their management platforms, making Over the Air (OTA) software updates possible.

Figure 2: FIDO fit for in-vehicle systems

Additionally, new updates to the FDO standard are expected to allow software to securely deploy to bare ECUs or zone controllers, which would greatly simplify dealership repairs and upgrades.

10.7 Workforce authentication (passkeys/FIDO keys)

For many years the IT industry has been using FIDO authenticators to ensure that only authorized staff have access to systems. The risks associated with attacks in this space have been highlighted by the recent challenges faced by some automotive dealers.

Figure 3: FIDO fit for workforce authentication

A cyberattack on a software provider for car dealerships occurred in June of 2024 and disrupted the operations of thousands of dealerships in North America. This attack caused major disruptions, including delays for car buyers and an estimated $1 billion in collective losses for dealerships.

10.8 Manufacturing use cases

Factories are moving away from classic fixed-function manufacturing systems, such as motion control and PLCs, towards far more flexible and intelligent software-defined control and AI-based vision systems. This transition introduces large numbers of general-purpose computers to the factory floor.

At installation, each server or industrial PC needs to be onboarded to its respective management platform (on-premises or cloud). This onboarding process typically requires that skilled technicians manually configure the credentials or passwords in the devices, a process that is slow, insecure, and expensive.

With FIDO Device Onboard (FDO), a technician can plug in an industrial PC and have it automatically and securely onboard to the management server platform.

The following diagram shows how FDO is used to onboard the industrial PCs to the local servers which are in turn onboarded to the manufacturing cloud.

Figure 4: FIDO fit for automotive manufacturing

11. Why using standards helps

Cybersecurity standards, such as those from the FIDO Alliance, offer value in ways that are hard for any single company to achieve. These consensus-based standards represent maturity and provide consistency for the industry, which are crucial for reliable authentication and authorization. FIDO cybersecurity standards are based on diverse expertise, provide clarity in a changing cybersecurity landscape, and offer essential guidance for certification authorities and regulators as they develop new laws.

Although the automotive industry has utilized standards almost since its inception, there are still areas where companies have tried to develop their own proprietary solutions. Such solutions rarely add value for the manufacturers and require engineering talent to develop and time to maintain.

As the automotive computing platform is a system of systems, the automotive industry can benefit from lessons learned by related industries. Open standards supported by certification programs help streamline product and service development.

FIDO’s standards are essentially commoditizing authentication elements that are critical to cybersecurity, but that are not natural areas for competitive differentiation. By leveraging standards, vendors and manufacturers can now focus their resources and development efforts on higher-value services.

11.1 Benefits of partnering with the FIDO Alliance

Diverse expertise: The FIDO Alliance brings together skilled professionals from various companies, including cloud players, credit card companies, and manufacturers.

Ecosystem cohesion: Standards ensure quality, security, and interoperability within ecosystems, which is crucial for applications like payments.

Adapt to emerging threats: The threat landscape is always evolving. As an example, quantum computing represents a significant threat to commonly used encryption techniques. Although quantum computing is in a relatively early stage of maturity, standards groups such as the FIDO Alliance are already defining how to create quantum resilient solutions.

12. FIDO Certification programs for the automotive industry

The FIDO Alliance’s world-class certification programs validate that products conform to FIDO specifications and interoperate effectively, and they assess security characteristics and biometric performance. With over 1,200 FIDO Certified products from hundreds of vendors around the world, these programs unlock the value of FIDO’s open standards for vendors and buyers. By specifying FIDO Certification in their RFQs, manufacturers can be sure that their suppliers will deliver performant, secure, and interoperable products.

Automotive OEMs can seek out and leverage components that are already certified (for example, authenticators or biometric components), and the FIDO Alliance’s certification team is also developing an automotive profile with its lab partners that replicates in-car environments for more precise biometric tests. The Alliance seeks automotive sector feedback to help us collectively:

- Address gaps in the current certification specifications
- Update specifications as needed
- Issue sector-specific policies
- Implement new testing procedures

For more information on FIDO Certification, visit https://fidoalliance.org/certification/.

13. Conclusion and next steps

The automotive industry and cybersecurity are evolving quickly; the FIDO Alliance’s proven and established standards and certification programs can help with a wide range of automotive industry applications. Applications include in-vehicle services and payment authentication, onboarding zone-controllers, car-to-cloud connectivity, OTA updates, and leveraging biometrics for a better driver experience.

The FIDO Alliance provides a path for automotive manufacturers and their suppliers to simplify their development processes, raise security levels, improve customer experience, reduce costs and tap into new revenue opportunities.

Feedback is welcome on the topics covered within this white paper and the FIDO Alliance encourages interested parties to engage with the Alliance and its members. FIDO Alliance members can learn more about FIDO standards and have opportunities to influence how these standards evolve. Additionally, members get the benefit of being able to engage with a broad range of thought leaders from leading companies within the broader ecosystem.

To get involved visit https://fidoalliance.org/members/become-a-member/ or use the Contact Us form at https://fidoalliance.org/contact/.

14. Appendix A – Global legislation applicable to automotive cybersecurity

National governments and international organizations have enacted regulations that require stringent cybersecurity measures throughout the automotive lifecycle, from design to operation and even end of life. These frameworks aim to shield vehicles from emerging threats and establish a baseline for safety and trust across the automotive ecosystem.

United Nations Regulations 155 and 156: These are the most prominent and clearly defined automotive cybersecurity regulations. Adopted under the WP.29 framework in 2021, UN R155 and R156 are globally recognized and mandate that vehicles incorporate a Cybersecurity Management System (CSMS) and a Software Update Management System (SUMS). These regulations are prerequisites for type approvals in over 50 countries, including most EU nations, Japan, South Korea, and Australia (UNECE, 2021).

ISO/SAE 21434: This standard provides the foundation for global automotive cybersecurity engineering, outlining processes for managing cyber risks throughout the entire vehicle lifecycle. It complements existing regulations and aids manufacturers in complying with mandatory regulations such as UN R155 (ISO, 2021).

China’s GB 44495-2024 and GB 44496-2024: Introduced in the summer of 2024, these regulations mirror UN R155 and R156 but are more detailed in specificity. GB 44495 outlines cybersecurity requirements for connected vehicles, while GB 44496 governs secure software updates. China’s focus on intelligent connected vehicles highlights its ambition to lead in autonomous and connected technologies (Shadlich, 2024).

India’s AIS 189 and AIS 190: India has introduced AIS 189 and AIS 190, standards aligned with UN R155 and R156, to regulate the cybersecurity of connected vehicles. These frameworks emphasize risk management, monitoring, secure communication protocols, and secure software updates, similar to UN R155/R156 (Vernekar, 2024).

United States: While there are no mandated federal regulations for automotive cybersecurity, the National Highway Traffic Safety Administration (NHTSA) has published cybersecurity best practices. These guidelines emphasize secure vehicle development processes, incident response plans, and continuous risk monitoring. They align with ISO/SAE 21434 and offer a proactive approach to mitigating vulnerabilities in connected vehicles (NHTSA, 2022).

Document history

Change | Description | Date
Initial publication | White paper first published. | 7-2025

15. Contributors

Conor White, Daon, Inc
Richard Kerslake, FIDO Alliance
Andrew Shikiar, FIDO Alliance
Nimesh Shrivastava, Qualcomm Inc
Drew Van Duren, Qualcomm Inc
Jens Kohnen, Starfish GmbH & Co. KG
Tin T. Nguyen, VinCSS JSC
Henna Kapur, Visa

16. References

Harley, M. (2024, March 28). EU Cybersecurity Laws Kill Porsche’s 718 Boxster and Cayman Early. Retrieved from https://www.forbes.com/sites/michaelharley/2024/03/28/eu-cybersecurity-laws-kill-porsches-718-boxster-and-cayman-early/

ISO. (2021). ISO/SAE 21434:2021 Road vehicles—Cybersecurity engineering. International Organization for Standardization. Retrieved from https://www.iso.org/standard/70918.html

Miller, C., & Valasek, C. (2015, July 21). Hackers remotely kill a Jeep on the highway—With me in it. Wired. Retrieved from https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway

National Highway Traffic Safety Administration (NHTSA). (2022, September 7). Cybersecurity best practices for new vehicles. NHTSA. Retrieved from https://www.nhtsa.gov/press-releases/nhtsa-updates-cybersecurity-best-practices-new-vehicles

Shadlich, E. (2024, September 2). China’s New Vehicle Cybersecurity Standard: GB 44495-2024. Retrieved from https://dissec.to/general/chinas-new-vehicle-cybersecurity-standard-gb-44495-2024/

UNECE. (2021). UN Regulation No. 155 – Cyber security and cyber security management system. UNECE. Retrieved from https://unece.org/transport/documents/2021/03/standards/un-regulation-no-155-cyber-security-and-cyber-security

University of Detroit Mercy. (n.d.). Vehicle cybersecurity engineering program. Retrieved from https://eng-sci.udmercy.edu/academics/engineering/vehicle-cyber-eng.php

Vernekar, A. (2024, October 10). Securing The Future Of Indian Automobiles: Understanding AIS-189 And Cybersecurity For Vehicles. Retrieved from https://vayavyalabs.com/blogs/securing-the-future-of-indian-automobiles-understanding-ais-189-and-cybersecurity-for-vehicles/

Walsh College. (n.d.). Bachelor of Science in Automotive Cybersecurity. Retrieved from https://walshcollege.edu/walsh-undergraduate-degree-programs/bachelor-of-science-in-information-technology/bachelor-of-science-in-automotive-cybersecurity/


Blockchain Commons

Musings of a Trust Architect: When Technical Standards Meet Geopolitical Reality


Reflections on recent conversations about digital identity, sovereignty, and the erosion of foundational principles

Echoes from Geneva

I wasn’t present at the Global Digital Collaboration conference (GDC25), but the observations shared by colleagues who attended have crystallized some issues I’ve been wrestling with for years. I should note there’s a selection bias here: I’m the author of the 10 principles of self-sovereign identity, so my community tends to have strong opinions about digital identity. Still, when multiple trusted voices independently report similar concerns, patterns emerge that are worth examining. And these weren’t casual observers sharing these concerns. They were seasoned practitioners who’ve spent decades building identity infrastructure. Their collective unease speaks to something deeper than technical disagreements.

It’s hard to boil the problems at GDC25 down to a single issue, because they were so encompassing. For example, there was a pattern of scheduling issues that undercut the community co-organizing goal of the conference and seemed to particularly impact decentralized talks. One session ended up in a small, hot room on the top floor that was hard to find. (It was packed anyway!) Generally, the decentralized-centric talks were in bad locations, they were short, they had restricted topics, or they were shared with other panelists.

I think that logistical shuffling of events may point out one of the biggest issues: decentralized systems weren’t given much respect. This may be true generally. There may be lip service to decentralized systems, but not deeper commitments. Its value isn’t appreciated, so we’re losing its principles. Worse, I see the intent of decentralization being inverted: where our goal is to give individuals independence and power by reducing the control of centralized entities, we’re often doing the opposite — still in the name of decentralization.

The Echo Chamber Paradox

The problems at GDC25 remind me of Rebooting the Web of Trust (RWOT) community discussions I’ve been following, which reiterate that this is a larger issue. We debate the finer points of zero-knowledge proofs and DID conformance while missing the forest for the trees. Case in point: the recent emergence of “did:genuineid” — a centralized identifier system that fundamentally contradicts the “D” in DID.

Obviously, decentralization is a threat to those who currently hold power (whether they be governments, corporations, billionaires, or others who hold any sort of power), because it tries to remove their centralization (and therefore their power), to instead empower the individual. But if we can’t even maintain the semantic integrity of “decentralized” within our own technical community, devoted to the ideal, how can we fight for it in the larger world?

The Corpocratic Complication

GDC25 was held in Geneva, Switzerland. 30+ standards organizations convened to discuss the future of digital identity. Participants spanned the world from the United States to China. GDC25 had the opportunity to be a truly international conference. Indeed, Swiss presenters were there, and they spoke of privacy, democratic involvement, and achieving public buy-in. These were exactly the themes that we as decentralized technologists wanted to hear.

But from what I’ve heard, things quickly degraded from that ideal. Take the United States. The sole representative of the country as a whole attended via teleconference. (He was the only presenter who did so!) His talk was all about Real ID, framed as a response to 9/11 and rooted in the Patriot Act. It lay somewhere between security-theatre and identity-as-surveillance, and that’s definitely not what we wanted to hear. (The contrast between the US and Swiss presentations was apparently jarring.)

And with that representative only attending remotely, the United States’ real representatives ended up being Google and Apple, each advancing their own corpocratic interests, not the interests of the people we try to empower with decentralized identities.

This isn’t just an American problem. It’s a symptom of a deeper issue happening across our digital infrastructure. It’s likely the heart of the inversions of decentralized goals that we’re seeing — and likely why those logistical reshufflings occurred: to please the gold sponsors. In fact, the conference sponsors tell the story: Google, Visa, Mastercard, and Huawei were positioned as “leading organizations supporting the advancement of wallets, credentials and trusted infrastructure in a manner of global collaboration.”

While Huawei’s presence demonstrates international diversity—a Swiss conference bringing together Europe and Asia—it also raised questions about whose vision of “trust” would ultimately prevail. When payment platforms and surveillance-capable tech giants frame the future of identity infrastructure, we shouldn’t be surprised when the architecture serves their interests first.

This echoes my concerns from “Has SSI Become Morally Bankrupt?”. We’ve allowed the narrative of self-sovereignty to be co-opted by the very platforms it was meant to challenge. The technical standards exist, but they’re being implemented in ways that invert their original purpose. Even UNECE sessions acknowledged the risk of “diluting the autonomy and decentralization that SSI is meant to provide.”

The Sovereignty Shell Game

Google’s partnership with Germany’s Sparkasse banks on ZKP technology revealed a specific example of this co-opting.

Google’s open-sourcing of its Zero-Knowledge Proof libraries, announced July 3rd in partnership with Germany’s network of public savings banks, was positioned as supporting privacy in age verification. Yet as Carsten Stöcker pointed out, zero-knowledge doesn’t mean zero-tracking when the entire stack runs through platform intermediaries. Carsten noted that Google has “extensive tracking practices across mobile devices, web platforms and advertising infrastructure.” Meanwhile, the Google Play API makes no promises that the operations are protected from the rest of the OS.

The Google ZKP libraries (“longfellow-zk”) could be a great building block for truly user-centric systems, as they link Zero-Knowledge Proofs to legacy cryptographic signature systems that are still mandatory for some hardware. But they’d have to be detached from the rest of Google’s technology stack. Without that, there are too many questions. Could Google access some of the knowledge supposedly protected by ZKPs? Could they link it to other data? We have no idea.

The European Union’s eIDAS Regulation, set to take effect in 2026, encourages Member States to integrate privacy-enhancing technologies like ZKP into the European Digital Identity Wallet, but integration at the platform level offers similar dangers and could again invert the very privacy guarantees ZKP promises.

Historical Echoes, Modern Inversions

Identity technology’s goals being inverted, so that identity becomes a threat rather than a boon, isn’t a new problem. In “Echoes of History”, I examined how the contrasting approaches of Lentz and Carmille during WWII demonstrate the life-or-death importance of data minimization. Lentz’s comprehensive Dutch identity system enabled the Holocaust’s efficiency; Carmille’s deliberate exclusion of religious data from French records saved lives. Even when they’re decentralized, today’s digital identity systems face the same fundamental questions: what data should we collect, what should we reveal, and what should we refuse to record entirely?

But we’re adding a new layer of complexity. Not only must we consider what data to collect, but who controls the infrastructure that processes it. When Google partners with Sparkasse on “privacy-preserving” age verification, when eIDAS mandates integration at the operating system level, we’re not just risking data collection: we’re embedding it within platforms whose business models depend on surveillance. Even if the data is theoretically self-sovereign, the threat of data collected is still data revealed — just as happened with Lentz’s records.

The European eIDAS framework, which I analyzed in a follow-up piece to “Echoes from History”, shows how even well-intentioned regulatory efforts can accelerate platform capture when they mandate integration at the operating system level. As I wrote at the time, a history of problematic EU legislation that had the best of intentions but resulted in unintended consequences has laid the groundwork, and now identity is squarely in the crosshairs. One of the first, and most obvious, problems with eIDAS is the mandate “that web browsers accept security certificates from individual member states and the EU can refuse to revoke them even if they’re dangerous.” There are many more — and I’m not the only voice on eIDAS and EUDI issues.

Supposedly self-sovereign certificates phoning home whenever they’re accessed is another recent threat that demonstrates best intentions gone awry. This not only violates privacy, but it also undercuts some of our best arguments for self-sovereign control of credentials by returning liability for data leaks to the issuer. The No Phone Home initiative that Blockchain Commons joined last month represents one attempt to push back on that, but it feels like plugging holes in a dam that’s already cracking. It all does.

The Builder’s Dilemma

What troubles me most is the split I see in our community. On one side, technology purists build increasingly sophisticated protocols in isolation from policy reality. On the other, pragmatists make compromise after compromise until nothing remains of the original vision.

The recent debates about did:web conformance illustrate this perfectly. Joe Andrieu correctly notes that did:web can’t distinguish between deactivation and non-existence — a fundamental security boundary. Yet did:web remains essential to many implementation strategies because it bridges the gap between ideals and adoption. It provides developers and users with experience with DIDs, but in doing so undercuts decentralized ideals for those users. We’re caught between philosophical purity and practical irrelevance.

In my recent writings on Values in Design and the Right to Transact, I’ve tried to articulate what we’re fighting for. But values without implementation are just philosophy, and implementation without values is just surrender.

The Global Digital Collaboration highlighted this tension perfectly. International progress on digital identity proceeds apace: Europe, Singapore, and China all advance their frameworks, but there are still essential issues that invert our fundamental goals in designing self-sovereign systems. Meanwhile, the U.S. remains even more stalled, its position represented only by the platforms that benefit from the status quo. Alongside this, technical standards discussions proceed in isolation from the policy, regulatory, and social frameworks that will determine their real-world impact.

Where Do We Go From Here?

I find myself returning to first principles. When we designed TLS 1.0, we understood that technical protocols encode power relationships. When we established the principles of self-sovereign identity, we knew that architecture was politics. Ongoing battles, such as those between Verifiable Credentials and ISO mDLs, between DIDComm and OpenID4VC, demonstrate disagreements over these power relationships made visible in technological discussions.

The question now is whether we can reclaim our ideals before they’re completely inverted by the side of centralized power and controlled architecture.

The path forward requires bridging the gaps Geneva revealed:

- Between corporate platform dominance and global digital sovereignty
- Between the promise of decentralization and the reality of recentralization
- Between technical standards and policy reality
- Between privacy absolutism and implementation pragmatism

A Personal Note

After three decades of building internet infrastructure, I’ve learned that the most dangerous moment isn’t when systems fail, it’s when they succeed in ways that invert their purpose. We built protocols for human autonomy and watched them become instruments of platform control. We created standards for decentralization and see them twisted into new forms of centralization.

This conversation continues in private Signal groups, in conference hallways, in the space between what we built and what we’ve become. The Atlantic Council warns of power centralizing “in ways that threaten the open and bottom-up governance traditions of the internet.” When critics from across the geopolitical spectrum — from sovereignty advocates to digital rights groups — all sense something amiss, it suggests a fundamental architectural problem that transcends ideology.

Perhaps it’s time for a new architecture: one that acknowledges these inversions and builds resistance into its very foundations.

But that’s a longer conversation for another day.

Christopher Allen has been architecting trust systems for over 30 years, from co-authoring TLS to establishing self-sovereign identity principles. He currently works on alternative approaches to digital identity through Blockchain Commons.

Community Responses & Discussion since Publication

This article sparked significant discussion across the digital identity community:

Mailing List Discussion

- W3C Credentials Community Group Thread (39 messages, July 16-17): debate between pragmatic incrementalism and human-rights imperatives; questions about whether current standards help or hinder decentralization; concerns about “death by 1000 compromises” in SSI implementation. Includes my own synthesis and response to this CCG thread, highlighting Utah’s “recognizer not issuer” approach as an alternative model.
- De-platforming humans sub-thread (19 messages, July 17): Adrian Gropper proposes moving beyond SSI as an “anti-pattern”; discussion of Nostr as an alternative architecture; debate over whether did:web is truly decentralized given DNS dependencies.

Response Articles

- A Pattern of Moral Crisis - Kyle Den Hartog: examines how technologies get co-opted during times of crisis, drawing parallels to historical censorship patterns.
- Centralized SSI - Kyle Den Hartog: analyzes how trust architectures themselves, not just technology, determine whether systems preserve or remove agency.
- Cyber Storm Rising: Designing for the Warzone - Carsten Stöcker: reframes decentralization as an urgent cybersecurity necessity, not just a privacy concern, citing Ukraine’s experience.
- Choose Love and Joy - Will Abramson: an optimistic perspective on using advanced cryptography and blockchain “hardness” to build kinder digital futures.
- Privacy in EUDI - Jaromil (Dyne.org): technical analysis of the European Digital Identity implementation and its privacy implications.
- Decentralized Age Verification - Kyle Den Hartog: a concrete proposal for privacy-preserving content moderation that shifts roles within the SSI triangle.

Join these ongoing discussions or share your perspective.

Monday, 14. July 2025

The Rubric

Didja JWK? We did! (did:jwk, Part 2)

did:jwk embeds a JSON Web Key (JWK) in a DID to enable the use of JWKs in DID-enabled systems. Simple and straightforward, it promises to give did:key and did:pkh a run for their money. We talk with two of the co-authors of did:jwk, Jeremie Miller, known for creating Jabber and XMPP, and Orie Steele, CTO...

Didja JWK? We did! (did:jwk, Part 1)

did:jwk embeds a JSON Web Key (JWK) in a DID to enable the use of JWKs in DID-enabled systems. Simple and straightforward, it promises to give did:key and did:pkh a run for their money. We talk with two of the co-authors of did:jwk, Jeremie Miller, known for creating Jabber and XMPP, and Orie Steele, CTO...
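For readers new to the method, a minimal sketch of how a did:jwk identifier is derived may help: per the did:jwk specification, the method-specific identifier is simply the base64url-encoded JWK. The key below is an illustrative placeholder, and the sketch assumes a Node.js runtime for Buffer.

```typescript
// Minimal sketch: deriving a did:jwk from a public JSON Web Key.
// The method-specific identifier is the base64url-encoded JWK itself.

const jwk = {
  kty: "OKP",
  crv: "Ed25519",
  x: "ANRjH_zxcKBxsjRPutzRff1riUUMSEJmYkVhynz1zlE", // placeholder public key
};

// base64url = base64 with '+' -> '-', '/' -> '_', and padding stripped.
function base64url(input: string): string {
  return Buffer.from(input, "utf8")
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

const did = `did:jwk:${base64url(JSON.stringify(jwk))}`;
console.log(did); // e.g. did:jwk:eyJrdHkiOiJPS1AiLCJjcnYiOiJFZDI1NTE5Iiwi...
```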

Saturday, 12. July 2025

Human Colossus Foundation

Moving on with Overlays Capture Architecture 2.0: A New Era of Semantic Interoperability

The Overlays Capture Architecture (OCA) is a foundation for the Dynamic Data Economy, structuring and presenting data in a traceable and verifiable way. Recognising today's need for interoperability between standards, the release of OCA v2 takes the architecture a major leap forward, introducing a modular and extensible approach to the definition of overlays.


Community 2.0

The new specification centers around making overlays easier to define, share, and validate — even by non-technical users. At its core, OCA v2 introduces the concept of Community Overlays, empowering ecosystems to create and maintain their own overlay definitions without deep technical barriers.

We used the term Community (rather than ecosystem) to stress the need for flexibility in creating purpose-specific overlays shared by multiple users. For example, in science this approach helps define the ultra-precise meaning of certain datasets while keeping them in line with common standards. In compliance, it enables the definition of data structures that must match specific regulatory constraints in a given jurisdiction. In supply chains, it enables domain-specific definitions of data exchanges to be used across the entire chain without requiring a centralised authority.

Definition of Community-Driven Overlay

Enabling Community Overlays required a different approach to defining overlays. Instead of specifying them in the OCA specification itself, we created a Domain-Specific Language (DSL) for writing overlay definitions in a file we call an OVERLAYFILE.

An OVERLAYFILE is a text-based file which can consist of one or more definitions of various overlays. If you are familiar with programming, think of it as a *.h (header) file from C++. If you are not familiar with technology, think of it as the exact structure of the dataset that your boss, team, department or company has validated for use.

With clearly defined overlays, the relevant tooling can support any community-defined overlay and simplify the way overlays are managed across the whole ecosystem. When you use one, you have cryptographic assurance that you are using what the community has validated for use.

A new file type, .overlaysfile, allows communities to formally define their own overlays. By separating overlay definitions from usage, ecosystems can establish governance and enable cross-project reusability. Overlay repositories further support this, enabling easy distribution and import of overlays.

Enhanced Modularity and Validation

The new approach allows overlays to define schemas, ensuring proper structure and data integrity. Tooling can now validate authored overlays against community-defined schemas, reducing errors and increasing trust.
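As an illustration of the idea (this is not the actual OCA tooling), here is a minimal sketch of validating an authored overlay against a community-defined schema, using the generic JSON Schema validator Ajv; the schema and overlay shapes below are hypothetical:

```typescript
// Illustrative only: validating a community-authored overlay against a
// community-defined JSON Schema, using the generic Ajv validator.
import Ajv from "ajv";

// Hypothetical schema a community might publish for a "label" overlay.
const labelOverlaySchema = {
  type: "object",
  required: ["type", "language", "attribute_labels"],
  properties: {
    type: { const: "community/label/1.0" },
    language: { type: "string" },
    attribute_labels: { type: "object", additionalProperties: { type: "string" } },
  },
};

// A hypothetical overlay authored by a community member.
const overlay = {
  type: "community/label/1.0",
  language: "en",
  attribute_labels: { dob: "Date of birth" },
};

const ajv = new Ajv();
const validate = ajv.compile(labelOverlaySchema);
console.log(validate(overlay) ? "overlay is valid" : validate.errors);
```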

OCA Bundle in JSON format

An OCA Bundle is a set of OCA objects serving as an envelope for distributing and using the semantics. With v2 we finally introduce the long-awaited new encoding format, which replaces the old .zip file with a simple JSON object. This format had been in testing for quite some time in the OCA reference implementation; it is now an official part of the OCA specification, simplifying the tooling and the way the bundle is transmitted.
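Purely as a sketch of what a JSON-encoded bundle could look like, under the assumption of content-addressed objects; the field names below are illustrative guesses, not the normative OCA v2 encoding:

```typescript
// Hypothetical shape of a JSON-encoded OCA Bundle; field names are
// illustrative assumptions, not the normative OCA v2 encoding.
const bundle = {
  d: "EAbC...", // content-addressable digest of the bundle (placeholder)
  capture_base: {
    d: "EXyZ...", // digest of the capture base (placeholder)
    type: "spec/capture_base/2.0",
    attributes: { dob: "DateTime", full_name: "Text" },
  },
  overlays: [
    {
      d: "ELmN...", // digest of the overlay (placeholder)
      type: "community/label/1.0",
      language: "en",
      attribute_labels: { dob: "Date of birth", full_name: "Full name" },
    },
  ],
};

// A single JSON object replaces the old .zip container, so a bundle can be
// transmitted or stored like any other JSON document.
console.log(JSON.stringify(bundle, null, 2));
```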

OCA Specification v2

Finally, we are releasing RC1 of the Overlays Capture Architecture Specification v2.0.0, which signals the readiness of the features and functions mentioned above. Implementation has already started in our reference implementation, which can be followed at oca-rs.

Below you will find the list of major changes that version 2.0.0 brings:

The Sensitive overlay replaces PII flagging in the capture base. This enables an enhanced approach to privacy and other risks, flagged directly in the schema.

Categories from the Label overlay have moved to the presentation layer, strictly enforcing the distinction between a dataset's inner structure and meaning on one side and its presentation on the other.

Certain previously “core” overlays are upgraded to “Community Overlays”: all overlays listed below are nominated as community overlays and hosted in an Overlays Repository.

Information

Transformation

Presentation

Layout

Conditional

Unit mapping

Introduce SemVer for all objects

Support for ISO 639-1 and ISO 639-3 language codes

Support for namespacing in overlay name

OCA 2.0: built for stronger, verifiable data integrity

All objects of the Overlays Capture Architecture (the Capture Base, Overlays, and Bundle) are compatible with CESR encoding. This ensures that those objects can be authenticated using DKMS, a KERI-based decentralised key management system.

This is pivotal to establishing the cryptographically verifiable integrity of data objects. This property enables the implementation of distributed governance models, a topic that will receive much attention at the Human Colossus Foundation.

Webinar at DSWG

The Technology Council, which is responsible for maintaining the OCA specification and reference implementation, hosted a webinar during the Decentralised Semantic Working Group, where it delved into the details of the design of the new architecture and all the features of version 2.0.

Decentralised Semantic Working Group - Overlays Capture Architecture 2.0

Looking Ahead

With OCA v2, the Overlays Capture Architecture moves closer to its vision of an open, extensible semantic ecosystem where organizations and communities can seamlessly create, validate, and share schemas.

Friday, 11. July 2025

FIDO Alliance

National World: 16 billion passwords leaked: How to protect yourself as cybersecurity experts warn of repeat attacks


Cybersecurity experts are urging internet users to take immediate steps to secure their online accounts, after the largest-ever data leak exposed more than 16 billion login credentials, including from major platforms like Google, Facebook, Apple, and even government services.

The breach, discovered by researchers at Cybernews, is believed to have been carried out using infostealers that harvested login data and other sensitive credentials from multiple platforms. “This is not just a leak – it’s a blueprint for mass exploitation,” Cybernews said in a statement. “With over 16 billion login records exposed, cybercriminals now have unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing.”


ZD NET: How passkeys work: Your passwordless journey begins here


Over the last few decades, compromised usernames and passwords have typically been at the root of some of the most sensational, damaging, and costly data breaches. An incessant drumbeat of advice about how to choose and use strong passwords and how not to fall prey to social engineering attacks has done little to keep threat actors at bay. 


MobileIDWorld: Meta Rolls Out Passkey Authentication for Facebook Mobile Users Globally


Meta has begun rolling out passkey login authentication for Facebook users on iOS and Android mobile devices, marking a significant advancement in the industry-wide movement away from traditional password-based security. The implementation follows similar moves by tech giants Apple, Google, and Microsoft who have been leading the charge toward passwordless authentication.

The new passkey feature will become available to users globally over the coming weeks. To use the functionality, users must have devices that support FIDO2/WebAuthn standards, which are commonly found in modern iOS and Android smartphones. These standards, developed through collaboration between the FIDO Alliance and the World Wide Web Consortium (W3C), provide a secure framework for passwordless authentication that has been widely adopted across the technology industry.
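As a rough illustration of what registering such a passkey involves at the API level, here is a minimal browser-side sketch using the standard WebAuthn API; the relying party, user details, and challenge handling are placeholder assumptions (in production the challenge comes from the server and the response is verified there):

```typescript
// Minimal sketch of passkey registration via WebAuthn (FIDO2).
// Server-side challenge generation and response verification are omitted.

async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // normally from server
      rp: { name: "Example App", id: "example.com" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-1234"), // stable, opaque user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { residentKey: "required", userVerification: "required" },
    },
  });
  // The credential's public key would then be registered with the server.
  console.log("created credential", credential);
}
```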


Project VRM

Market intelligence that flows both ways


Look up customer journey or customer experience (aka CX) and you’ll find nothing about what the customer drives, or rides. All results will be for systems meant for herding customers like cattle into a chute that the CX business (no kidding) calls a sales funnel:

Do any customers want to go down these drains?

But let’s stick with the journey metaphor, because there are good people in the marketing business who have thought deeply about how people buy and own things. Chief among those people is Esteban Kolsky, of Constellation Research. He visualizes the journey in a way that not only gives weight to the ownership experience, but separates it from the sales experience:

As for our actual experience, we spend 100 percent of our lives with things we own, and just a tiny percentage on buying them. So the real ratio should look more like this:

And yet, as I pointed out several years back in Turning the Customer Journey Into a Virtuous Cycle

…consider the curb weight of “solutions” in the world of interactivity between company and customer today. In the BUY loop of the customer journey, we have:

1. All of advertising, which Magna Global expected to pass $0.5 trillion more than a decade ago

2. All of CRM/CX, which now exceeds $100 billion

3. All the rest of marketing, which has too many segments for me to bother looking up

So, in the OWN loop we have a $0 trillion greenfield.

To enter that greenfield, we need customers to be in charge of their side of these relationships— preferably through means for interaction that customers themselves control—on terms that are agreeable to both sides, rather than the one-sided terms we suffer every time we click AGREE on a cookie notice.

To help imagine how that will work, I volunteer a real-world example from my own life.

A few years back, I bought a pair of LAMO Mens Mocs at a shopping mall kiosk in Massachusetts. Here’s one:

I like them a lot. They’re very comfortable and warm on winter mornings. In fact I still wear them, even though the soles have long since come apart and fallen off. Here is how they looked after a few years of use:

I’m showing this so you, and LAMO, can see what happens, and how we can both use my experience—and those of other customers—to change the world.

See, I like LAMO, and would love to help the company learn from my experience with one of their products. As of today, there are four choices for that:

1. Do nothing (that’s the default).
2. Send them an email.
3. Go on some website and talk about it. (A perfect Leighton cartoon in the New Yorker shows a couple registering at a hotel while the person behind the counter says, “If there’s anything we can do to make your stay more pleasant, just rant about it on the Internet.”)
4. Get “social” by tweeting to @LAMOfootwear or posting to LAMO’s Facebook page. (For wisdom on “social” relations between brands and presumed fans, see Bob Hoffman‘s talk on the topic.)

So here is a fifth choice: give these moccasins their own virtual cloud, where LAMO and I can share intelligence about whatever we like, starting (on my side) with reports on my own experience, requests for service, or whatever. Phil Windley calls these clouds picos, for persistent compute objects. Picos are breeds of what Bruce Sterling calls spime: persistent intelligence for things. Picos have their own operating system (e.g., Wrangler, which Phil most recently posted about here), and don’t need intelligence on board. Just scan a QR code, and you’ll get to the pico. Here’s the QR code on one of my LAMO moccasins:

Go ahead and scan the code with your phone. You’ll get to a page that says it’s my moccasin.

That’s just one view of a potential relationship between me and LAMO — one in which I can put a message that says “If found, call or text _______.” Another view is on my own dashboard of things in my OWN cycle, and direct connections to every one of those companies. That relationship can rest on friendly terms in which I’m the first party and the company is the second party. (For more on that, see here and here.)

So look at the relationship between me and LAMO as a conduit (the blue cylinder below) that lives in the pico for my moccasin. That conduit goes from my VRM (vendor relationship management) dashboard to LAMO’s CRM (customer relationship management) system. There is no limit to the goodness that can pass back and forth between us, including intelligence about how I use my moccasins.

Let’s look at what can happen at either or both ends of that conduit.

A pico for a product is a CRM dream come true: a standard way for every copy of every product to have its own unique identity and virtual cloud (in which any data can live), and standard way any customer can report usage and other intelligence about any product they own—without any smarts needing to live on the thing itself.

If I scan that QR code, I can see whatever notes I’ve taken. I can also see whatever LAMO has put in there, with my permission. Also in that cloud is whatever programming has been done on it. Here is one example of simple relationship logic at work:

IF this QR code is scanned, THEN send LAMO a note that Doc has a new entry in our common journal.

Likewise, LAMO can send me a note saying that there is new information in the same journal. Maybe that information is a note telling me that the company has changed sole manufacturers, and that the newest Mens Mocs will be far more durable. Or maybe they’ll send a discount on a new pair. The correct answer for what goes in the common journal (a term I just made up) is: whatever.
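To make that rule concrete, here is a minimal sketch of how it might look as an event handler. Everything here is hypothetical: the event names, the journal structure, and the notify function; picos' actual rule language (e.g., in Wrangler) differs.

```typescript
// Hypothetical sketch of the pico rule above: names and APIs are
// illustrative only, not an actual pico/Wrangler interface.

type PicoEvent = { domain: string; type: string; attributes: Record<string, string> };

// The shared journal lives in the moccasin's pico; both parties may read and write.
const journal: { author: string; note: string; at: Date }[] = [];

function notifyVendor(vendor: string, message: string): void {
  // In a real deployment this would send an event over the
  // customer-vendor conduit; here we just log it.
  console.log(`to ${vendor}: ${message}`);
}

function onEvent(event: PicoEvent): void {
  // IF this QR code is scanned, THEN send LAMO a note that
  // there is a new entry in our common journal.
  if (event.domain === "moccasin" && event.type === "qr_scanned") {
    journal.push({ author: "Doc", note: event.attributes.note ?? "", at: new Date() });
    notifyVendor("LAMO", "Doc has a new entry in our common journal");
  }
}

onEvent({ domain: "moccasin", type: "qr_scanned", attributes: { note: "sole separating" } });
```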

Now let’s say LAMO puts a different QR code, or other identifier, in every moccasin it sells. Or has a CRM system that is alert to notifications from customers who have turned their LAMO moccasins into picos, making all those moccasins smart. LAMO can then not only keep up with its customers through CRM-VRM conduits, but tie interactions through those conduits to the dashboards of their accounting systems (from Xero or other companies that provide enriched views of how the company is interacting with the world).

This is one huge potential key to the future of customer service, customer relationship management (CRM), call centers, loyalty programs, continuous improvement, customer experience (CX), customer engagement (CE), and other complicated ways businesses today try to solve all relationship problems from the maker’s or the seller’s side alone.

Follow the links in the last paragraph (all to Wikipedia), and you’ll find each of them has “multiple issues.” The reason for that is simple: the customer is not involved with any of them. All those entries make the sound of industries talking to themselves — or one hand slapping.

This is an old problem that can only be fixed on the customer’s side. Before the Internet, solving things from the customer’s side — by making the customer the point of integration for her own data, and the decider about what gets done with that data — was impossible. Now that we have the Internet, it’s very possible, but only if we get our heads out of business-as-usual and back into our own lives. This will be good for business as well.

A while back I had meetings with two call center companies, and reviewed this scenario:

1. A customer scans the QR code on her cable modem, activating its pico.
2. By the logic described above, a message to the call center says “This customer has scanned the QR code on her cable modem.”
3. The call center checks to see if there is an outage in the customer’s area, and, if there is, finds out how soon it will be fixed.
4. The call center sends a message back saying there’s an outage and that it will be fixed within X hours.

In both cases, the call center company said, “We want that!” Because they really do want to be fully useful. And — get this — they are programmable.

Unfortunately, in too many cases, they are programmed to avoid customers or to treat them as templates rather than as individual human beings who might actually be able to provide useful information. This is old-fashioned mass-marketing thinking at work, and it sucks for everybody. It’s especially bad at delivering (literal) on-the-ground market intelligence from customers to companies.

Call centers would rather be sources of real solutions than customer avoidance machines for companies and anger sinks for unhappy customers. The solution I’m talking about here takes care of that. And much more.
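Here is how the call-center side of the cable-modem scenario above might be sketched; every name and type below is hypothetical, purely for illustration:

```typescript
// Hypothetical call-center handler for the cable-modem scenario above.

interface OutageInfo { active: boolean; hoursToFix?: number }

// Stand-in for the call center's outage database lookup.
function outageLookup(area: string): OutageInfo {
  return area === "02138" ? { active: true, hoursToFix: 3 } : { active: false };
}

function onModemScan(customerArea: string, reply: (msg: string) => void): void {
  const outage = outageLookup(customerArea);
  if (outage.active) {
    reply(`There is an outage in your area; it should be fixed within ${outage.hoursToFix} hours.`);
  } else {
    reply("No outage reported in your area; let's troubleshoot your modem.");
  }
}

onModemScan("02138", (msg) => console.log(msg));
```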

Now let’s go back to shoes.

I’m not a hugely brand-loyal kind of guy. I use Canon cameras because I like the long-standing 5D user interface more than the competing Nikon ones, and Canon’s lens prices tend to be lower. I use Apple computers because they’re easy to get fixed and I can open a command line shell and get geeky when I need to. I drive a 2017 VW wagon because I got it at a good price. And I buy Rockport shoes because, on the whole, they’re pretty good.

Used to be they were great. That was in the ’70s and early ’80s when Saul and Bruce Katz, the founders, were still in charge. That legacy is still there, under Reebok ownership; but it’s clear that the company is much more of a mass marketing operation than it was back in the early days. Still, in my experience, they’re better than the competition. That’s why I buy their shoes. Rockports are the only shoes I’ve ever loved. And I’ve had many.

So here is a photo I took of wear-and-tear on two pairs of Rockport casual shoes I still use, because they’re damned comfortable:

Shots 1 and 2 are shoes I bought in June 2012, and are no longer sold, near as I can tell. (Wish they were.) Shots 3 and 4 are of shoes called Off The Coast 2 Eye. I bought mine in late 2013, but didn’t start wearing them a lot until early this year. I bought both at the Rockport store in Burlington Mall, near Boston. I like that store too.

The first pair has developed a hole in the heel and loose eyelet grommets for the laces around the side of the shoe. The hole isn’t a big deal, except that it lets in water. The loose eyelets are only a bother when I cross my feet sitting down: they bite into the other ankle. The separating outer sole of the second pair is a bigger concern, because these shoes are still essentially new, and look new except for that one flaw. A design issue is the leather laces, which need to be double-knotted to keep from coming undone, and even the double-knots come undone as well. That’s a quibble, but perhaps useful for Rockport to know.

I’d like to share these experiences privately with Rockport, and for that process to be easy. Same with my experiences with LAMO moccasins.

It could be private if Rockport and LAMO footwear came with QR codes for every pair’s pico — its own cloud. Or if Rockport’s CRM or call center system was programmed to hear pings from my picos.

Ideally, customers would get the pico along with the shoe. Then they would have their own shared journal and message space — the conduit shown above — as well as a programmable system for creating and improving the whole customer-company relationship. They could also get social about their dialogs in their own ways, rather than only within Facebook and Twitter, which are the least private and personal places imaginable.

This kind of intelligence exchange can only become a standard way for companies and customers to learn from each other if the code for picos is open source. If Rockport or LAMO try to “own the customer” by locking her into a closed company-controlled system — the current default for customer service — the Internet of Things will be what Phil calls “the Compuserve of things”. In other words, divided into the same kind of closed and incompatible systems we had before the Net came along.

One big thing that made the Internet succeed was substitutability of services. Cars, banks, and countless other product categories you can name are large and vital because open and well-understood standards and practices at their base have made substitutability possible. Phil says we can’t have a true Internet of Things without it, and I agree.

The smartest people working for companies are their customers. And the best way to activate customer smarts is by giving them scale. That’s what picos do.

As a bonus, they also give companies scale. If we can standardize picos, we’ll have common and standard ways for any customer and any company to relate to each other through any VRM + CRM system. Think about how much more, and better, intelligence a company can get from its customers this way, rather than through the ones barely succeeding now, where the company does all the work, and fails to know an infinitude of useful stuff customers could be telling them. Think about how much more products can be improved, and iterated over time. Think about how much more genuine loyalty can be created and sustained with this kind of two-way system.

Then think how much companies can save by not constantly spying on customers, guessing about what they might want, spamming them with unwanted and unnecessary sales messages, maintaining systems that try to relate but actually can’t, and herding customers into imaginary funnels that customers would loathe if they could see what’s going on.

It’s a lot.

So let’s start working on growing a sane world of business that’s based on market intelligence that flows both ways, instead of the surveillance-based guesswork and delusional imaginings of marketing that smokes its own exhaust. We can do it, privately, and at scale.

The first ancestor of this post appeared at ProjectVRM on 19 April 2014. It was updated a bit on 8 June 2017. The second one was posted here on Medium in 2016. With IEEE P7012, aka MyTerms, nearing completion, there is a good chance we can make this dream come true in 2026.

Thursday, 10. July 2025

The Engine Room

Help Us Map Responses to Tech-Facilitated Intimate Partner Violence Affecting Young Women and LGBTIQ+ Activists


We are excited to share that we are starting a new project in partnership with the Numun Fund to map organizations and community responses addressing technology-facilitated gender-based violence (TFGBV), specifically intimate partner violence (IPV) affecting girls, young women and LGBTIQ+ activists in the Majority World.

The post Help Us Map Responses to Tech-Facilitated Intimate Partner Violence Affecting Young Women and LGBTIQ+ Activists appeared first on The Engine Room.


Weaving Sound Memories: Exploration and Care of the Oír Más Archive


We reconstruct memory through listening. Over the past three months, as we shared in April, we immersed ourselves with our ears wide open and a shared desire to explore a digital sound archive that began taking shape over 20 years ago.

The post Weaving Sound Memories: Exploration and Care of the Oír Más Archive appeared first on The Engine Room.

Wednesday, 09. July 2025

Next Level Supply Chain Podcast with GS1

Why Smart Brands Are Rethinking Supply Chain Resilience


How are brands thinking about their supply chains five years after the COVID-19 crisis?

Companies are now leaning more heavily into innovation, from making their operations resilient to market changes to launching sustainability initiatives.

In this episode, Stephanie Mehta, CEO and Chief Content Officer at Mansueto Ventures, joins hosts Reid Jackson and Liz Sertl following her keynote at GS1 Connect. They discuss why the supply chain is now at the center of innovation and how companies can stay ahead of changes in the economy and evolving customer demands.

Drawing on her experience leading Fast Company and Inc., Stephanie shares how resilience, sustainability, and data-driven thinking are transforming the business landscape for companies of every size.

 

In this episode, you'll learn:

How companies are using the supply chain to drive innovation and acceleration

Why executives are rethinking product packaging, automation, and logistics

The impact of data and social media on a company's operations

 

Jump into the conversation:

(00:17) Introducing Next Level Supply Chain

(02:32) Stephanie Mehta's journey from business journalist to CEO

(03:27) How GS1 US collaborates with media brands

(04:13) Reaching small businesses with supply chain storytelling

(05:41) What most people miss about barcodes

(07:35) Why innovation should not stop post-crisis

(08:23) How sustainability aligns with consumer demand

(11:16) Solving forecasting and inventory with better data

(12:05) The logistics of HP's sustainability initiative

(15:56) Social media's growing impact on supply chains

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Stephanie Mehta on LinkedIn

Check out Mansueto Ventures

Tuesday, 08. July 2025

The Engine Room

WHEN SAFETY IS POLITICAL: DIGITAL SOLUTIONS FOR TRANS YOUTH IN UGANDA


Through our Matchbox Program, we partner with civil society organizations to strengthen their work by harnessing the power of data and technology. Our recent partnership with Trans Youth Initiative-Uganda (TYI-Uganda) offers a lens into how collaborative platform development can respond to pressing social challenges while also building long-term organizational capacity.

The post WHEN SAFETY IS POLITICAL: DIGITAL SOLUTIONS FOR TRANS YOUTH IN UGANDA appeared first on The Engine Room.

Monday, 07. July 2025

FIDO Alliance

The Hacker News: Microsoft Removes Password Management from Authenticator App Starting August 2025


Microsoft has said that it’s ending support for passwords in its Authenticator app starting August 1, 2025.

Microsoft’s move is part of a much larger shift away from traditional password-based logins. The company said the changes are also meant to streamline autofill within its two-factor authentication (2FA) app, making the experience simpler and more secure.

Over the past few years, Microsoft has been pushing for a passwordless future using technologies like passkeys, Windows Hello, and FIDO2-based authentication. These methods offer better protection against phishing and password reuse, which are still major attack vectors. While it may feel like a hassle at first, this change is actually aimed at reducing your risk in the long run.


Oasis Open

Invitation to comment on OpenDocument Version 1.4 before call for consent as OASIS Standard – ends September 7th


OASIS Members and other interested parties,

OASIS and the Open Document TC [1] are pleased to announce that Open Document Version 1.4 CS01 is now available for public review and comment.

The OpenDocument Format (ODF) is an open XML-based document file format for office applications, to be used for documents containing text, spreadsheets, charts, and graphical elements. The file format makes transformations to other formats simple by leveraging and reusing existing standards wherever possible. As an open standard under the stewardship of OASIS, OpenDocument also creates the possibility for new types of applications and solutions to be developed other than traditional office productivity applications.

The TC received three Statements of Use from Microsoft, Allotropia Software GmbH, and The Document Foundation [3].

The candidate specification and related files are available here:

OpenDocument Version 1.4 
Part 1: Introduction
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part1-introduction/OpenDocument-v1.4-cs01-part1-introduction.pdf

OpenDocument Version 1.4 
Part 2: Packages
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part2-packages/OpenDocument-v1.4-cs01-part2-packages.pdf

OpenDocument Version 1.4
Part 3: OpenDocument Schema
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part3-schema/OpenDocument-v1.4-cs01-part3-schema.pdf

OpenDocument Version 1.4
Part 4: Recalculated Formula (OpenFormula) Format
Committee Specification 01
2 August 2024
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.html
https://docs.oasis-open.org/office/OpenDocument/v1.4/cs01/part4-formula/OpenDocument-v1.4-cs01-part4-formula.pdf
Schema files are located here.

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at: OpenDocument-v1.4-cs01.zip

Members of the Open Document TC approved this specification by Special Majority Vote [2]. The specification had been released for public review as required by the TC Process.

Public Review Period

The 60-day public review is now open and ends 7 September 2025 at 23:59 UTC.

This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [4], applicable especially [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

========== Additional references:

[1] OASIS Open Document TC

[2] Approval ballot

[3] Links to Statements of Use
Microsoft:
https://groups.oasis-open.org/discussion/statement-of-use-from-microsoft
Allotropia Software GmbH:
https://groups.oasis-open.org/discussion/statement-of-use-for-open-document-format-for-office-applications-opendocument-version-14-cs01
Document Foundation:
https://groups.oasis-open.org/discussion/fwd-statement-of-use-odf-14

[4] https://www.oasis-open.org/policies-guidelines/ipr/
https://www.oasis-open.org/committees/office/ipr.php

Intellectual Property Rights (IPR) Policy

The post Invitation to comment on OpenDocument Version 1.4 before call for consent as OASIS Standard – ends September 7th appeared first on OASIS Open.

Friday, 04. July 2025

Human Colossus Foundation

DKMS Alliance Kick-off in Geneva: Building Now the Future of Digital Trust

On June 30th, Geneva became the stage for an informal event: the kick-off meeting of the DKMS Alliance. This gathering marks the official launch of a collaborative initiative poised to reshape digital trust infrastructure for the next generation.


At a time when Geneva’s international traditions have led the city to host the Global Digital Conference 2025, the DKMS Alliance aims to bring the open-source primitives enabling the digital authentication of tomorrow to enterprises and organisations, with the support, maintenance and stability they need to build interoperable digital solutions at scale.

The DKMS Alliance is a joint endeavor kicked off by Human Colossus Foundation, Vereign AG and argonAUTHs. Anchored in the architectural roots of KERI (Key Event Receipt Infrastructure), the Alliance unites deep technical expertise and strong operational capabilities to deliver to the world a new paradigm for secure, verifiable, and sovereign digital interactions.

A Shared Vision for Open and Secure Digital Infrastructure

The Alliance’s goal is to drive adoption, sustainability, and market readiness for digital infrastructure components that are open, secure, and industrial-grade. By combining the power of DKMS, a Rust implementation of KERI, and the Overlays Capture Architecture (OCA) into production-ready resources, the Alliance gives its members a first-mover advantage in developing enterprise applications that scale. The DKMS Alliance aims to become a community acting as a beacon of dynamic innovation and reliability, deploying resilient solutions built on top of a reliable digital trust infrastructure.

The DKMS Alliance represents today a powerful fusion of complementary strengths. From the Human Colossus Foundation’s pioneering work on the KERI protocol and in data semantics and governance to Vereign’s production deployments in secure communications and ArgonAUTHs’ cryptographic agility and verifiability at scale, the founding organizations bring together decades of experience in building trust frameworks, decentralized identity, secure communication, and verifiable data ecosystems.

Why Now?

In a world increasingly challenged by fragmented approaches to digital trust, vendor lock-in, and fragile ecosystems, the DKMS Alliance provides a unified, community-driven foundation. By offering stable APIs, modular architectures, and predictable release cycles, the Alliance ensures that the core infrastructure remains robust and future-proof.

The Alliance also emphasises transparent governance, which will be built with Alliance members. This commitment to openness and quality positions it to become the go-to stack for governments and enterprise innovators looking to implement decentralised trust layers.

The Kick-off: Setting the Stage for Collective Action

The Geneva meeting brought together a mix of technical leaders, strategists, funders and early adopters to align on shared goals, identify immediate priorities, and announce a roadmap for 2025 and beyond. It marked the transition from vision to execution. Focused action points emerged to accelerate engineering efforts, develop governance structures, and expand outreach to early members.

As an early member of the DKMS Alliance, organizations have the unique opportunity to influence the evolution of specifications and implementations, gain early access to deliverables, and secure premium support from the visionary leaders shaping tomorrow’s digital trust landscape.

Join Us on This Journey

The DKMS Alliance is more than just a technical project — it’s a call to collective stewardship of the global digital commons. By joining, organizations not only protect critical dependencies but also demonstrate leadership in privacy, security, and data sovereignty. As a renegade from the current mainstream, you become, within your domain, the reputable authority on tomorrow’s digital architecture.

We invite you to become part of the transformative movement. Check our Prima Vista document on how you can join this journey. Together, we can build the resilient, trustworthy, and open digital infrastructure the world urgently needs.

Stay tuned — more information, updates, and opportunities to engage with the DKMS Alliance will be coming soon!


MyData

Intermediaries in Network-based Ecosystems

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. Every day, billions of […]

Thursday, 03. July 2025

Trust over IP

ToIP EGWG 2025-06-26: Aakash Guglani, Enhancing Trust using Digital Public Infrastructure (DPI)


Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/ToIP-EGWG-2025-06-26_-Aakash-Guglani-Enhancing-Trust-using-Digital-Public-Infrastructure-DPI.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Briefing Document: India’s Digital Public Infrastructure (DPI)

Date: June 26, 2025
Source: Excerpts from Trust over IP Ecosystems and Governance Working Group Meeting
Presenter: Aakash Guglani from the Digital India Foundation.
Topic: Enhancing Trust using Digital Public Infrastructure (DPI)
Excerpt: India’s Digital Public Infrastructure (DPI) and Unified Payment Interface (UPI) revolutionize financial inclusion, bringing over 500 million people into the digital economy. This open-source, mobile-first system builds trust and breaks traditional payment monopolies.

Executive Summary

This briefing document provides an overview of India’s Digital Public Infrastructure (DPI), focusing on its development, impact, and future direction as presented by Aakash Guglani from the Digital India Foundation. The DPI, particularly the Unified Payments Interface (UPI) and Aadhaar, has been instrumental in formalizing India’s economy, empowering its vast and diverse population, and challenging traditional payment monopolies. It represents a unique, policy-driven approach that blends public and private sector participation, aiming for broad inclusivity rather than just serving the affluent.

I. Core Philosophy and Context

Addressing India’s Unique Challenges: India’s DPI was developed to address the needs of its “100 million plus” households, representing “500 million people” who were largely deprived and excluded from the burgeoning digital economy. Traditional private platforms primarily targeted the “top 1%,” while public approaches lacked state capacity and risked centralization. (00:07:39, 00:10:50)

A “Digital Public Infrastructure” Approach: India adopted a digital public infrastructure model, which emphasizes diversity, choice, openness, and sovereignty. It’s a collaborative effort involving both public and private sectors. (00:11:08)

Contrast with Other Models: Unlike purely private platforms or a “totally state-led model” like China’s, India’s DPI is designed to be a democratic solution. “We could not go with private platforms, and we never had the state capacity like China to go for a totally state-led model.” (00:09:30, 00:16:40)

II. Key Components and Their Impact

A. Aadhaar: Unique Digital Identity

Foundation of DPI: Over the last 10 to 15 years, India has established “more than 1.3 billion Aadhaar,” a unique digital identity based on 10 biometrics (fingerprints and iris scan) for “99% of Indians.” (00:11:37, 00:11:50)

Enabled EKYC: Aadhaar facilitated “EKYC (Know Your Customer),” drastically reducing the cost of customer acquisition for banks; the cost of KYC dropped from “$25 to $1.” (00:12:02, 00:21:16, 00:21:21) This cost reduction made it economically viable for banks, previously reluctant due to high offline verification costs, to open “zero-cost accounts” for the rural and unbanked population. (00:21:21, 00:21:29)

Broad Financial Inclusion: This initiative, known as “JanDhan Yojna,” led to “more than 500 million people [having] bank accounts, which is totally zero cost.” (00:19:53, 00:21:10)

B. Unified Payments Interface (UPI)

Solution to Payment Friction: UPI addressed significant problems in India’s financial landscape, including limited digital transactions, high costs for credit card usage, reliance on paper receipts, and lack of “information collateral” for poor individuals seeking credit. (00:15:08, 00:16:56)

Massive Adoption and Interoperability: As of March 2025, UPI recorded “more than 18 billion transactions.” (00:18:30) Over 200 banks are integrated, and it’s “totally interoperable”: a payment between a customer from Bank A and a merchant from Bank B takes “less than 10 seconds” via a QR code scan. (00:18:37, 00:18:42, 00:18:54) There is “no MDR (Merchant Discount Rate)” for most transactions, which incentivizes small vendors like vegetable sellers to adopt digital payments. (00:19:04, 00:19:15) “Every part of the country is using UPI,” demonstrating “deeper usage” even in poorer regions. (00:31:47, 00:31:52)

Economic Empowerment: For retailers, UPI reduces the “cost of business” by allowing them to “service 3-4 people at a time” without handling cash and change. (00:29:21) It empowered women by ensuring that earnings from their stores went directly to their bank accounts, preventing misuse of physical cash. (00:30:13, 00:30:24)

Challenging Global Monopolies: UPI has fundamentally reshaped the global payment landscape. It involves a diverse range of participants including “Visa, Mastercard, Google Pay, Phone Pay, American Express.” (00:33:45, 00:33:50) “It is not dominant by two major players,” effectively “break[ing] monopoly of private players.” (00:33:50, 00:33:58) UPI has “crossed the transaction volumes of Visa and Mastercard combined.” (00:34:14) “46% of real-time payments across the world happen out of India.” (00:34:21)

Beyond the “Bottom of the Pyramid”: DPI, including UPI and EKYC, has also benefited the middle and upper-middle classes. The reduced KYC cost facilitated the opening of “more than 190 million DMAT accounts” (online stockbroking accounts) between 2016 and 2024, many for “first-time users.” (00:34:43, 00:35:08, 00:35:14, 00:35:52) This formalizes the economy and encourages savings.

III. Enablers and Policy Framework

Government Subsidies: An ongoing annual government subsidy of “$0.2 billion for MDR” (Merchant Discount Rate) for UPI usage is in place. This is viewed as an investment in a “public railroad” or “public highway” infrastructure due to its immense “inclusion value.” (00:40:41, 00:40:54, 00:41:09)

Agile Policy: The success of DPI is underpinned by “agile policies.” Examples include the central bank’s allowance for EKYC (2012-2013) (00:45:04); an IT Act making “digital documents equivalent to offline documents” for digital locker services (00:45:21); the RBI’s Digital Payments Committee recommending UPI (00:45:49); and government procurement shifting to “totally digitally” platforms using UPI (00:46:39).

Leadership and Trust: The Prime Minister’s public use of UPI and foreign leaders using it in India “enhances trust in people,” creating a “virtuous cycle.” (00:47:01)

Mobile-First, Low-Tech, Multilingual: The platforms are designed to be mobile-first, multilingual, and low-tech, supporting feature phones and even “offline availability.” (00:39:31, 00:39:38)

Internet Penetration: Government efforts to provide “free internet” increased coverage from 25% to 48%, a “major contributor for UPI.” (00:33:11)

Indigenous Development: All platforms are “indigenously made,” addressing specific Indian problems. (00:39:38)

IV. Future Directions and Challenges

Next Stage: Credit Enablement: With 500 million bank accounts and 1.2 billion digital identities, the focus shifts to empowering users by providing credit. (00:36:00, 00:36:26)

Account Aggregator: A framework allowing individuals to consent to share their financial data between banks, mutual funds, insurance providers, etc., using the EKYC framework, to facilitate consumer and business loans. (00:36:41, 00:36:53)

Open Credit Enablement Network (OCEN): This leverages the transaction data of informal vendors (e.g., vegetable sellers) who now have “information collateral” on their transaction frequency and average ticket size, making it “easier for banks to also financialize them.” (00:37:29, 00:37:35, 00:37:49) This moves DPI beyond just a base to “formalize the economy.” (00:37:49)

Global Export of DPI: India aims to share its open-source DPI frameworks, including UPI APIs on GitHub, “freely to anyone around the world.” This includes cross-border payment solutions and sharing its vaccination system. (00:38:16, 00:38:34, 00:38:42)

Future Open Data Platforms: The “India Stack was the beginning,” with future initiatives including digital commerce, online financial information sharing, online credit availability, digitization of health records, AI-based voice, and digital skills. (00:38:54, 00:39:04)

Tech Sovereignty: India advocates for “tech sovereignty” to avoid reliance on foreign private entities that could “cut off my access” due to geopolitical tensions, as seen in the Russia-Ukraine war. The aim is to prevent private companies from engaging in “geopolitical positioning.” (00:48:14, 00:49:06, 00:49:32, 00:49:45)

Fraud and Cybersecurity: At massive scale, the system faces “more than a million cyberattacks” annually, including from state and non-state actors. (00:51:38, 00:51:49, 00:51:53) Common frauds include “digital arrest” scams using AI-based voice to demand UPI transfers. (00:53:00, 00:53:07) RBI is implementing measures like mandating brokers to use “authenticated UPI IDs,” displaying the recipient’s name before transfer, and nudging users not to pay while on a call. (00:55:54, 00:56:03, 00:56:10) The direct-debit nature of UPI makes reversing fraudulent transactions difficult once money is transferred. (00:54:47) Ongoing efforts include public advisories and potential future escrow services. (00:53:23, 00:57:00)

V. Conclusion

India’s DPI journey, marked by the success of Aadhaar and UPI, demonstrates a powerful model for digital inclusion and economic formalization in a democratic context. By leveraging agile policy-making, strategic government investment, and open-source technology, India has built a robust digital public good that empowers its citizens and offers significant lessons for the Global South. The ongoing challenge lies in mitigating fraud and expanding the system to foster greater financial empowerment through credit, while maintaining technological sovereignty.

For more details, including the slides,  meeting recording and transcript, please see our wiki 2025-06-26 Aakash Guglani & Enhancing Trust using Digital public Infrastructure (DPI) – Home – Confluence

https://www.linkedin.com/in/aakashguglani/ https://digitalindiafoundation.org/

The post ToIP EGWG 2025-06-26: Aakash Guglani, Enhancing Trust using Digital Public Infrastructure (DPI) appeared first on Trust Over IP.

Monday, 30. June 2025

Human Colossus Foundation

HCF to Participate in GC25: Advancing Global Digital Collaboration on Wallets & Credentials


We’re excited to announce that Human Colossus Foundation (HCF) will participate in GC25: Global Digital Collaboration on Wallets & Credentials, taking place on July 2, 2025. This important event, hosted by the Swiss Confederation, brings together leading organizations and innovators to shape the future of digital wallets, credentials, and interoperable identity systems.

At HCF, we believe that empowering individuals to control their own digital space is foundational to building a sustainable and trustworthy digital society. This vision is captured in our ongoing work around the concept of the Digital Self, as discussed in our blog post Self Actioning System, a preferred systematic embodiment of “Digital Self”. Central to this idea is the belief that global digital collaboration can only emerge when individuals have sovereignty over their credentials and interactions.

HCF on Governance in Digital Trade

As part of GC25, HCF will also participate in the panel discussion "Governance in Digital Trade – Decentralization as Response to Challenges in a Multi-Polar World." Moderated by Stephan Wolf from the Verifiable.Trade Foundation and including the Digital Governance Institute, FIWARE, and the Asia PKI Consortium, the panel will bring forward the necessity of a digital governance able to consider the sovereignty of peer-to-peer connections in a multi-polar world.

The world is changing rapidly, placing supply chains and financial systems under increasing pressure. In a multi-polar landscape shaped by shifting tariffs and transformative AI, flexibility and speed have become essential. Digitalisation and open networks provide a clear path to inclusive global market participation. However, governance remains a largely overlooked but crucial topic.

This session will explore current initiatives but also deal with challenging questions. By examining these questions from different perspectives, the panel aims to bridge public and private demand to address the complexities of modern global trade.

HCF will contribute its perspective on how decentralized governance models, such as those enabled by the Dynamic Data Economy, can empower organizations and individuals alike.

Digital Self: Enabling Individual Agency

The Digital Self represents a shift from centralized data control to individual empowerment, allowing each person to define, manage, and protect their digital identity and data assets. Events like GC25 are critical because they convene diverse stakeholders to co-create frameworks and standards that make this shift possible — from verifiable credentials to interoperable wallets.

Introducing the Dynamic Data Economy

HCF is contributing to this discussion through the introduction of the Dynamic Data Economy (DDE), a groundbreaking approach outlined in our launch announcement. The DDE offers an infrastructure where data is not merely a static asset but a dynamic, contextual element that individuals can control and share on their terms.

This model supports privacy, promotes innovation, and opens the door to new economic models centered around consent and transparency — all essential ingredients for a human-centric digital future.

Looking Ahead

Our participation in GC25 aligns with our mission to advance data governance standards that prioritize individual autonomy. By collaborating with global leaders at this event, we aim to further the conversation on building infrastructures that respect and reinforce the Digital Self.

We invite all who share this vision to join us in exploring how we can collectively build a more equitable, dynamic, and user-controlled digital ecosystem.

Stay tuned for more updates from GC25 and beyond!

Friday, 27. June 2025

FIDO Alliance

PCmag: This Password Manager Now Lets You Create an Account Without a Password

Dashlane lets you open an account with a FIDO2-spec USB security key as your authentication. One of the better-known password managers is now inviting people to try it without having to create […]

Dashlane lets you open an account with a FIDO2-spec USB security key as your authentication.

One of the better-known password managers is now inviting people to try it without having to create yet another password. Instead, Dashlane is now inviting people to try opening a new account secured only by a USB security key compliant with the “FIDO2” authentication standard; FIDO being short for Fast Identity Online.

Emphasize “try.” The company’s support page for this “early access” program notes that it supports only Google Chrome and Microsoft Edge, not Dashlane’s mobile apps. For now, it doesn’t let you create an account secured only by a passkey, the form of FIDO2 security more people use. 

The page also highlights a warning that this is an early-days exercise: “Important: Accounts created as part of the early access program are for testing purposes only. We recommend using your primary Dashlane account to store and manage your data.”


ZDNET: Facebook’s new passkey support could soon let you ditch your password forever

For all of us who hate passwords, passkeys represent a simpler and safer way of authenticating online accounts. But adoption has been slow, with many companies and websites still relying on passwords. […]

For all of us who hate passwords, passkeys represent a simpler and safer way of authenticating online accounts. But adoption has been slow, with many companies and websites still relying on passwords. Now the world’s biggest social media platform is jumping on the bandwagon.

On Wednesday, Facebook announced that it’s now rolling out support for passkeys on mobile devices. This means you’ll be able to use one to sign in to Facebook on an iPhone or Android device. But the passkey won’t be limited to your actual Facebook account.


Expert Insights Podcast: #64 – Passwordless Authentication and the Rise of Passkeys

Andrew Shikiar, Executive Director and CEO of the FIDO Alliance, joins us to discuss the shift from passwords to passkeys and the role of FIDO in driving secure, passwordless authentication. […]

Andrew Shikiar, Executive Director and CEO of the FIDO Alliance, joins us to discuss the shift from passwords to passkeys and the role of FIDO in driving secure, passwordless authentication. He explores the challenges of adoption, the importance of identity verification, and how cross-platform interoperability is accelerating passkey use. The conversation also touches on the impact of generative AI on cybersecurity and what the future holds for passkeys in building long-term resilience.


Ars Technica: Coming to Apple OSes: A seamless, secure way to import and export passkeys

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to […]

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to credential phishing and other attacks targeting passwords.

The import/export feature, which Apple demonstrated at this week’s Worldwide Developers Conference, will be available in the next major releases of iOS, macOS, iPadOS, and visionOS. It aims to solve one of the biggest shortcomings of passkeys as they have existed to date. Passkeys created on one operating system or credential manager are largely bound to those environments. A passkey created on a Mac, for instance, can sync easily enough with other Apple devices connected to the same iCloud account. Transferring them to a Windows device or even a dedicated credential manager installed on the same Apple device has been impossible.

Thursday, 26. June 2025

FIDO Alliance

Blog: 2025 FIDO India Working Group Member Meetup and Workshop

The FIDO Alliance hosted its annual India Working Group (FIWG) Member Meetup & Workshop on June 6, 2025, at Google’s Ananta campus in Bengaluru. With over 100 attendees representing leading […]

The FIDO Alliance hosted its annual India Working Group (FIWG) Member Meetup & Workshop on June 6, 2025, at Google’s Ananta campus in Bengaluru. With over 100 attendees representing leading technology companies, financial service providers, telecom operators, government agencies, retailers, and platform providers, the event served as an important forum for advancing phishing-resistant, passwordless authentication efforts in India.

The program began with welcome remarks from FIWG Chair Niharika Arora (Google) and Vice Chair Tapesh Bhatnagar (G+D), followed by a keynote address from FIDO Alliance President Sam Srinivas. Sam’s session offered a forward-looking overview of the global FIDO roadmap, covering recent progress in passkey adoption, certification, and platform interoperability, highlighted by many attendees as one of the most valuable sessions of the day.

Throughout the morning, FIWG member organizations shared implementation case studies drawn from real-world deployments. Christopher Clement Soris of Zoho Corporation presented lessons from integrating passkeys into enterprise workflows, emphasizing developer enablement and user trust. Vishu Gupta, Piyush Ranjan, and Deepak Singal of Times Internet shared insights into their journey deploying passkeys across consumer media platforms, with a focus on UX challenges and account recovery design. Shantanu Shirke of Mastercard showcased its Global Financial Framework (GFF) through a live demo of its FIDO-enabled authentication solution, and Simon Trac Do of VinCSS introduced applications of the FIDO Device Onboard (FDO) protocol for secure and scalable onboarding of IoT devices.

The Google Android and Chrome teams, including Niharika Arora, Eiji Kitamura, and Neelansh Sahai, provided updates on platform support for passkeys, highlighting recent enhancements to Android APIs, Chrome UX flows, and best practices for relying parties. These updates offered implementers concrete guidance on leveraging native OS features to enable seamless, secure sign-ins.

The event concluded with a panel discussion titled “Modern Authentication Meets Legacy Systems,” moderated by Niharika Arora and featuring Amit Mathur (Ensurity), Rooparsh Kalia (Mercari), Rahul Dani (Yubico), Tom Sheffield (Target), and Sam Srinivas (Google). The discussion addressed practical challenges in deploying FIDO-based authentication in environments with legacy infrastructure, including backward compatibility, account recovery, and risk trade-offs. Panelists shared candid reflections and emphasized the importance of phased integration strategies and cross-industry collaboration.

Compared to the 2024 edition, this year’s workshop reflected a clear evolution, from awareness-building to implementation maturity. While last year’s focus was largely on introducing the promise of passkeys and FIDO standards, the 2025 program emphasized operational insights, technical execution, and collaborative solutions.

[Watch the Highlight Video]

Through post-event surveys, participants expressed strong appreciation for the event’s practical focus, noting the value of detailed case studies, direct access to platform teams, and the opportunity to connect with peers tackling similar challenges. Many described the in-person format as especially effective for fostering shared understanding and building momentum.

As passkey adoption continues to accelerate across India, the India Working Group remains a vital platform for aligning implementation efforts, exchanging knowledge, and enabling long-term deployment success. Sincere thanks to all speakers, panelists, and attendees for their contributions, and to Google for hosting us in Bengaluru. We look forward to continuing this important work together throughout 2025 and beyond.
Read last year’s recap: 2024 FIDO Alliance India Working Group Meetup and Workshop


Velocity Network

South Florida Leaders Gather to Explore the Future of Learning, Hiring, and Innovation

The post South Florida Leaders Gather to Explore the Future of Learning, Hiring, and Innovation appeared first on Velocity.

Wednesday, 25. June 2025

FIDO Alliance

MobileIDWorld: Apple Introduces Cross-Platform Passkey Import/Export Features Across Operating Systems

Apple has announced significant enhancements to its operating systems that will implement secure import and export capabilities for passkeys, building on the company’s ongoing efforts to eliminate traditional passwords. The new […]

Apple has announced significant enhancements to its operating systems that will implement secure import and export capabilities for passkeys, building on the company’s ongoing efforts to eliminate traditional passwords. The new features match standards developed by the FIDO Alliance for cross-platform credential management, joining similar initiatives from Microsoft and Google in the push toward passwordless authentication.

The new implementation will enable seamless and secure transfer of passkeys across platforms, addressing previous limitations in transferring credentials between devices and applications. The system uses a standardized data schema developed by the FIDO Alliance to ensure compatibility between different credential manager apps across iOS, iPadOS, macOS, and visionOS 26. The standardization is particularly significant as password-based attacks continue to rise, pushing the industry toward more secure authentication methods.


Techopedia: FIDO2 & Passkeys: The Future of Passwordless Authentication

Passwordless authentication has picked up in recent years. But the method drawing the most interest in security circles is physical security keys based on the FIDO2 standard. These USB or NFC […]

Passwordless authentication has picked up in recent years. But the method drawing the most interest in security circles is physical security keys based on the FIDO2 standard.

These USB or NFC keys offer something beyond the usual passwordless methods, like synced device passkeys or biometric logins. Here, you’re not relying on cloud-stored credentials or browser memory. Instead, everything depends on holding the key and verifying it with something only you know, like a PIN or fingerprint.

This shift to hardware security keys is gaining momentum across industries. Dashlane, for instance, has just rolled out an update that enables users to make a FIDO2 key their main passwordless login for unlocking credential vaults.

In this article, we explore where passwordless authentication stands today, what makes physical keys different, and how platforms are handling the hard parts like recovery, usability, and long-term security.
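To make the challenge-response idea behind FIDO2 concrete, here is a minimal sketch in Python using the widely available cryptography package. It is illustrative only, not the WebAuthn protocol itself: real authenticators add origin binding, attestation, signature counters, and CBOR encoding, and all names below are our own.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair on-device;
# the website stores only the public key.
authenticator_key = ec.generate_private_key(ec.SECP256R1())
stored_public_key = authenticator_key.public_key()

# Sign-in: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the key, unlocked by PIN or fingerprint, signs it...
signature = authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the stored public key. verify()
# raises InvalidSignature on tampering; no phishable secret is shared.
stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified")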


Next Level Supply Chain Podcast with GS1

From Red Tape to Resilience: How Regulation Shapes Supply Chains with Maggie Lyons

Confused by the new regulations and a patchwork of state-level policies? With a new administration setting fresh policy priorities, supply chains are facing shifting rules and growing pressure to adapt. Maggie Lyons, Vice President of Government and Regulatory Affairs at GS1 US, joins hosts Reid Jackson and Liz Sertl to decode the changes affecting how products are made, moved, and sold, and what businesses can do to stay ahead. […]

Confused by the new regulations and a patchwork of state-level policies?

With a new administration setting fresh policy priorities, supply chains are facing shifting rules and growing pressure to adapt.

Maggie Lyons, Vice President of Government and Regulatory Affairs at GS1 US, joins hosts Reid Jackson and Liz Sertl to decode the changes affecting how products are made, moved, and sold, and what businesses can do to stay ahead. From SNAP waivers and red dye bans to extended producer responsibility (EPR) laws and 2D barcodes, this episode breaks down how government decisions are impacting daily operations across food, retail, and consumer packaged goods (CPG).

Maggie's team works with policymakers and industry leaders to align mandates with existing systems, helping avoid duplication and enabling efficient, standards-based implementation.

In this episode, you'll learn:

How state-level regulation is influencing national supply chain strategies

Why new ingredient bans could create a ripple effect across CPG brands

What you can do to stay ahead of policy changes impacting your industry

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:07) Why GS1 built a policy team

(04:02) From Capitol Hill to CPG strategy

(06:34) Staying focused amid constant regulatory shifts

(08:48) Government agencies shaping supply chain standards

(10:38) Customs, tariffs, and food assistance priorities

(14:59) How SNAP waivers complicate retail operations

(17:57) What red dye bans mean next

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Maggie Lyons on LinkedIn

Tuesday, 24. June 2025

FIDO Alliance

asmag Security & IoT: Passwordless authentication: From trend to ‘strategic imperative’

For modern IT or Internet users, logging in to a website or app using a password is all too familiar. Increasingly, however, passwords create security concerns as they can be […]

For modern IT or Internet users, logging in to a website or app using a password is all too familiar. Increasingly, however, passwords create security concerns as they can be easily cracked or stolen. This is where passwordless authentication provides a more secure and convenient alternative.

“Password fatigue is real. Users demand faster, frictionless ways to authenticate without remembering complex strings,” said Edwardcher Monreal, Principal Solutions Architect for IAM Consumer Authentication Solutions at HID.


Digital Identity NZ

Ready…Reset…Go!

Matariki is a time for remembrance, celebration, and looking toward the future. Traditionally, Matariki was a time to honour ancestors, celebrate the harvest, and acknowledge the changing seasons. The post Ready…Reset…Go! appeared first on Digital Identity New Zealand.

Kia ora,

Matariki is a time for remembrance, celebration, and looking toward the future. Traditionally, Matariki was a time to honour ancestors, celebrate the harvest, and acknowledge the changing seasons.

Today, we stand at a pivotal moment where two transformative technologies – digital identity and artificial intelligence – are reshaping the fabric of society.  As the marginal cost of knowledge approaches zero, decentralised systems offer abundance creation through co-operation, as opposed to sticking with a more traditional scarcity-based approach. 

For DINZ members, the adoption playbook demands conscious choices: about focus, trade-offs, standards, and sovereignty. As local momentum builds around the world’s first decentralised credential identity ecosystem, the question is no longer whether disruption will occur – but how we collectively shape it, and who it serves. Perhaps Moby put it best in his 2002 song “We are all made of stars.”

It is hard to believe we are still using multi-password sign-on and facsimiles of identity proofs in an era where nearly everything we do is online. For those frustrated by the pace of change, it is important to remember we’re building a system for 30 years, of which the past five years has been very much design and build. The next five years will be about driving adoption.

With a tsunami of digital credentials hitting global markets, the race to be “first to issue” DISTF-accredited credentials is on!  At the starting line are the DIA, NZTA, Air NZ, and Hospitality NZ, alongside other go-to-market models such as Apple and Google’s online proxies.

Techweek Highlights

Those fortunate enough to attend Digital Public Infrastructure: The Invisible Foundation for NZ’s Digital Future in the General Assembly room in Parliament would have heard Minister Collins deliver the powerful message that “digital identity is the key to unlocking productivity in New Zealand.” 

The Minister’s call to action was compelling, though the question of whether centralised apps are the answer for a citizen-centric experience at this particular technology inflection point remains open for debate.

It is encouraging to see a shared understanding that the implementation of digital public infrastructure (DPI) will provide New Zealanders with an inclusive, future-ready trust layer that enhances privacy, security, provenance and fraud prevention, while preserving Aotearoa’s economic sovereignty. 

What’s more, an interoperable decentralised identity ecosystem will support the profound productivity improvements presented by hyperscale AI, without diminishing human agency by moving our data (and our souls) into someone else’s “cloud”.

Digital Trust Hui Taumata – Update

We have secured the world’s foremost native digital credential architect and builder James Monaghan to deliver the international keynote and host roundtable discussions at the Digital Trust Hui at Te Papa on 12 August, 2025.  

This must-attend identity conference promises to be more “Doey” than Hui. Our valued sponsors will showcase the most exciting identity projects in this space, and show how accelerating adoption of Trust Technology can help mitigate concerns around AI-related risks.

Roundtable discussions are currently being shaped around four key focus areas:

Regulatory barriers to change (public policy enablers such as omnibus bills to allow digital identity implementation)
Identity for natural persons (early adopters, privacy and taking control, proof of age, everyday applications, online verification, ID assurance hierarchy)
Identity for legal entities (identity as a service, AML, license to operate, compliance, monitoring, directory enabled and real world asset marketplaces)
Identity for machines (delegation, reputation, agents, bots, agentic commerce)

Assuming sufficient interest from government and industry sponsors, we plan to arrange an ecosystem design workshop with James Monaghan while he is in Wellington.

The Census Goes Digital: A Shift Towards Data-Driven Public Infrastructure

Stats NZ has officially retired the traditional census, opting instead for a new approach that leverages integrated administrative data. This marks a significant shift in how the government collects and uses trusted digital information – reinforcing the need for secure, privacy-preserving identity systems to ensure accuracy, inclusion, and transparency. It’s a timely reminder of the critical role digital identity plays in building smarter, citizen-centric public services. Read the full RNZ article.

Welcome to Our New Chair – Maria Robertson

Please join me in welcoming Maria Robertson as the new Chair of DINZ. You can read her introductory statement to the DINZ community on our website.

As we reset the DINZ playbook and hone our focus to accelerate adoption, Maria has already started to elevate our thinking thanks to her extensive experience across the public service, infrastructure and secondary industries.

Suffice to say, the first few weeks have been a baptism by fire, but I’m thoroughly enjoying being back in the identity services world at such a pivotal time. Your feedback is encouraged as we strive to fan the flames of adoption by issuers, holders and relying parties for the empowerment of all New Zealanders.

Mānawatia a Matariki,

Andy Higgs
Executive Director, Digital Identity NZ

Read full news here: Ready…Reset…Go!

SUBSCRIBE FOR MORE

The post Ready…Reset…Go! appeared first on Digital Identity New Zealand.


DIF Blog

DIF Labs Beta Cohort 2 Kick Off! 🚀

DIF Labs Beta Cohort 2 officially kicks off tomorrow, June 24th at 8 AM PST! This cohort brings together three projects that will advance privacy, legal frameworks, and governance in verifiable credentials. Meet Our Beta Cohort 2 Projects Legally-Binding Proof of Personhood for Verifiable Credentials via QES Led by Jon

DIF Labs Beta Cohort 2 officially kicks off tomorrow, June 24th at 8 AM PST! This cohort brings together three projects that will advance privacy, legal frameworks, and governance in verifiable credentials.

Meet Our Beta Cohort 2 Projects Legally-Binding Proof of Personhood for Verifiable Credentials via QES

Led by Jon Bauer and Roberto Carvajal

This project creates a standardized method to anchor Verifiable Credentials to legally recognized, high-assurance proof of an individual's identity through Qualified Electronic Signatures (QES). By leveraging eIDAS-recognized QES technology, this work will enable any W3C Verifiable Credential to carry the same legal weight as a handwritten signature.

Proposal: labs/proposals/beta-cohort-2-2025/legallybinding-vcs/legallybinding-vcs.md (decentralized-identity/labs on GitHub)

Privacy-Preserving Revocation Mechanism

Led by Kai Otsuki and Ken Watanabe

This research project delivers an analysis of privacy-preserving revocation mechanisms for W3C Verifiable Credentials. The team will catalog real-world revocation scenarios, benchmark cryptographic mechanisms including status lists, dynamic accumulators, zk-SNARK proofs, and short-term credentials, and provide an open-source prototype evaluating computational costs for Issuers, Holders, and Verifiers.
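Of the mechanisms named above, status lists are the easiest to picture. Below is a minimal hand-rolled sketch of the idea, our own simplification rather than the team’s code: real W3C bitstring status lists add compression, base64 encoding, and issuer-hosted retrieval, and the privacy question under study is precisely what a verifier learns by fetching such a list.

# One bit per issued credential; the issuer flips a bit to revoke.
revocation_bits = bytearray(16384 // 8)

def revoke(index: int) -> None:
    revocation_bits[index // 8] |= 1 << (index % 8)

def is_revoked(index: int) -> bool:
    # A verifier checks the single bit at the credential's assigned index.
    return bool(revocation_bits[index // 8] & (1 << (index % 8)))

revoke(42)
print(is_revoked(42), is_revoked(43))  # True False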

Proposal: labs/proposals/beta-cohort-2-2025/pp-revocation-mechanism/001_proposal.md (decentralized-identity/labs on GitHub)

Anonymous Multi-Signature Verifiable Credentials

Led by Seohee Park and Lukas Han

This protocol enables Verifiable Credential issuance that requires m-of-n multi-signature approval while maintaining anonymity of individual signers. Using Semaphore, it enables decentralized governance for VC issuance in organizations such as DAOs or government agencies, cryptographically proving sufficient participation without revealing participating member identities.

Proposal: labs/proposals/beta-cohort-2-2025/anon-multi-sig-vc/anon_multi_sig_vc_proposal.md (decentralized-identity/labs on GitHub)

Gratitude to Our Leadership Team

We extend our thanks to our project leads who will be driving these initiatives forward. Their expertise is essential to advancing the state of verifiable credentials technology.

We're grateful to our mentors who share their knowledge and experience with our cohort participants. You can learn more about our mentor network in our directory.

Recognition goes to our chairs who provide strategic guidance and oversight:

Andor Kesselman
Ankur Banerjee
Daniel Thompson-Yvetot

What's Next?

Tomorrow's kick-off session will bring together all participants to align on project goals, establish collaboration frameworks, and set the stage for three months of research and development. These projects address challenges in legal compliance, privacy preservation, and decentralized governance.

Stay tuned for updates as Beta Cohort 2 progresses.

DIF Labs continues to foster innovation in decentralized identity through collaborative research projects. Learn more about DIF Labs' work at labs.identity.foundation.

Monday, 23. June 2025

Digital Identity NZ

Introductory Statement – Chair of the Executive Council, Digital Identity NZ

As the new Chair of the Executive Council of Digital Identity NZ, I am looking forward to leading a passionate and future-focused community working to build a trusted, inclusive, and interoperable digital identity ecosystem for New Zealand, building on the impressive work the Executive Council has done over the past few years. The post Introductory Statement – Chair of the Executive

As the new Chair of the Executive Council of Digital Identity NZ, I am looking forward to leading a passionate and future-focused community working to build a trusted, inclusive, and interoperable digital identity ecosystem for New Zealand, building on the impressive work the Executive Council has done over the past few years.

With a background in executive leadership across public service and advisory roles, I have advocated for, and helped deliver, the transformative potential of digital identity as public infrastructure. Whether enabling seamless access to services, supporting mobility and consent, or underpinning trust in our digital economy, identity – in all of its forms – is foundational to a modern, resilient society. 

DINZ plays a crucial convening role across government, iwi, industry, and civil society. As Chair, my focus is to champion practical progress: supporting policy and technical frameworks that uphold te ao Māori perspectives on identity, ensuring identity solutions reflect the needs of all New Zealanders, and advocating for interoperability that positions us as globally connected and locally grounded.

Our mission is urgent and clear: to enable every person in Aotearoa to participate safely and confidently in the digital world. I look forward to working with all our members to realise that vision.

Maria Robertson.

The post Introductory Statement – Chair of the Executive Council, Digital Identity NZ appeared first on Digital Identity New Zealand.

Thursday, 19. June 2025

Trust over IP

How Can We Trust What We See Online? Here’s One Way Forward

In a world where AI can create photos, videos, and even voices that look and sound real, how do we know what to trust? Every day, more content we see online is generated or altered by AI. That’s not always a bad thing. AI can help us create amazing art, get work done faster, or imagine new possibilities. But it also opens the door to misinformation, impersonation, and confusion. When anyone

Source: Original LF Decentralized Trust post

Wenjing Chu, Chair of the AI and Human Trust Working Group at Trust over IP, an LF Decentralized Trust project | Jun 12, 2025

In a world where AI can create photos, videos, and even voices that look and sound real, how do we know what to trust?

Every day, more content we see online is generated or altered by AI. That’s not always a bad thing. AI can help us create amazing art, get work done faster, or imagine new possibilities. But it also opens the door to misinformation, impersonation, and confusion. When anyone can create content that looks authentic, how do we tell what’s actually real?

To enhance human trust in AI systems and explore how AI itself can be used to address complex trust challenges in digital ecosystems, Trust over IP (ToIP), a project of LF Decentralized Trust, has launched a new AI and Human Trust (AIM) Working Group. It builds on the work done over the past three years by ToIP’s AIM task force.

The recently released white paper from the working group, ToIP Trust Spanning Protocol (TSP): Strengthening Trust in Human and AI Interactions, offers a way forward for building, maintaining and verifying interactions involving AI technologies. It brings together three powerful tools, the Trust Spanning Protocol (TSP) [1], the C2PA Specification [2], and the work of the Creator Assertion Working Group (CAWG) [3], to build a system of authenticity for the digital world.

The key components include:

TSP (Trust Spanning Protocol) provides a strong foundation for online trust between people, platforms, and tools—making sure that when something claims to come from someone, it actually does. (The “Connector”)

The C2PA Specification is a growing standard that helps attach a digital “nutrition label” to content—showing when it was made, how it was edited, and by what capture devices or software. (The “How” and the “What”)

CAWG (Creator Assertion Working Group at DIF) focuses on making sure that individual and organizational content creators can identify themselves with their content and provide additional information for their audience to understand their content. (The “Who”)

Why do we need all three? Because content authenticity isn’t just about how something is created. It’s also about who made it, and how it gets communicated through public networks while retaining the integrity of actions made to it. C2PA gives us technical metadata about tools and edits. CAWG ensures the human creator is identified and attributed. And TSP makes the entire chain, from camera or AI tool to multiple individual human collaborators to final distribution platform, trustworthy at every step. Together, they provide a complete system covering creation, collaboration, and distribution.

All put together, these can help us answer the most important question about this digital artifact: Can I trust this?

This isn’t just a technical fix. It’s a new way to think about digital truth. And the paper lays out a path toward a future where users can more confidently trust the source and actions made to digital content in a way that’s accountable, verifiable, and respectful of creators.

Read the full white paper here.

We invite technologists, developers, artists, policy makers, and everyday internet users to take a look. It’s about restoring trust in a world where AI has blurred the lines of what is real and what is artificially generated.

[1] Trust Spanning Protocol (TSP) is an ongoing work by Trust over IP (ToIP), a project of LFDT: https://trustoverip.github.io/tswg-tsp-specification
[2] The C2PA Specification is an ongoing work by The Coalition for Content Provenance and Authenticity (C2PA): https://c2pa.org/specifications/specifications/2.2/index.html
[3] The Creator Assertions Working Group (CAWG) is a joint effort by the Decentralized Identity Foundation (DIF) and ToIP. See https://cawg.io

__

Want to dive deeper into ToIP’s work on verifying authenticity? Check out this LF Decentralized Trust Webinar: Verifiable Authenticity—Answering the Threat of AI Deep Fakes

The post How Can We Trust What We See Online? Here’s One Way Forward appeared first on Trust Over IP.

Wednesday, 18. June 2025

Project VRM

Toward a Personal AI Roadmap for VRM

On the ProjectVRM list, John Wunderlich shared a find that makes clear how advanced and widespread AI-based shopping recommendation has become (and not just with ChatGPT and Amazon). Here it is: Envisioning Recommendations on an LLM-Based Agent Platform: Can LLM-based agents take recommender systems to the next level? It’s by Jizhi Zhang, Keqin Bao, Wenjie […]

On the ProjectVRM list, John Wunderlich shared a find that makes clear how advanced and widespread AI-based shopping recommendation has become (and not just with ChatGPT and Amazon). Here it is: Envisioning Recommendations on an LLM-Based Agent Platform: Can LLM-based agents take recommender systems to the next level?

It’s by Jizhi Zhang, Keqin Bao, Wenjie Wang, Yang Zhang, Wentao Shi, Wanhong Xu, Fuli Feng, and Tat-Seng Chua* and is published in the Artificial Intelligence and Machine Learning section of Research and Advances in Communications of the ACM. So it’s serious stuff.

Here’s one graphic from the piece, with its caption:

Illustration of the Rec4Agentverse. The left side depicts three roles in the RecAgentverse: the user, the Agent Recommender, and Item Agents, along with their interconnected relationships. In contrast to traditional recommender systems, the Rec4Agentverse has more intimate relationships among the three roles. For instance, there are multi-round interactions between 1) users and Item Agents and 2) the Agent Recommender and Item Agents. The right side demonstrates how the Agent Recommender can collaborate with Item Agents to affect the information flow of users and offer personalized information services.

With VRM, however, we need our own personal AI agents, and useful data that’s not limited to what’s gathered by surveillance (which we can assume is most if not all of the Information layer). So I started a conversation with ChatGPT. I’ll divide it into sections:

1. Who wrote the CACM piece
2. The state of recommendation science
3. Who is working on personal AI
4. How one can get started
5. Forms of personal data to manage

There’s a lot here. The meat of it, for ProjectVRM purposes, starts in section 3. (The first two are more about what we’re up against.) Please read the whole thing and help us think through where to go with the challenge facing us. As we see in section 3, we do have some stuff on our side.

1. Who wrote the CACM piece

Me: Can you tell me more about the authors here: https://cacm.acm.org/research/envisioning-recommendations-on-an-llm-based-agent-platform/?

ChatGPT:

Jizhi Zhang

Affiliation: University of Science and Technology of China (USTC), Data Science Lab (inferred from USTC page).
Research Interests: Recommendation systems, LLM-based agent platforms, and interactive intelligent systems—his lead authorship and continued work on Prospect Personalized Recommendation on LLM-Based Agent Platform (arXiv Feb 2024) reinforce this.

 Keqin Bao, Wenjie Wang, Yang Zhang, Wentao Shi, Fuli Feng, Tat‑Seng Chua

Affiliations: While CACM doesn’t list this, Jizhi Zhang and Fuli Feng are part of USTC’s Data Science Lab; Tat‑Seng Chua is a distinguished researcher from the National University of Singapore known for multimedia and information retrieval.
Expertise: A mix of recommender system design, LLM-agent evaluation, multimedia analysis, and human-AI interaction—reflecting a blend of USTC’s strengths and CACM relevance.

 Wanhong Xu

Affiliation: USTC Data Science Lab (USTC LDS), contributor to the same article.
Role: Emerging researcher whose latest publication is this Rec4Agentverse article, indicating an early-career focus in LLM-based recommender systems.

Broader Context & Contributions

The group operates at the intersection of Large Language Models, agent-oriented systems, and recommender systems, pioneering a Rec4Agentverse paradigm that treats LLM-powered agents as “recommendable” items collaborating with one another—an evolution of traditional recommendation frameworks.
Their work continues earlier research (e.g., Prospect Personalized Recommendation… in Feb 2024), solidifying their ongoing commitment to expanding intelligent, conversational recommendation platforms.

Why This Matters

Academic Thought Leadership: Jizhi Zhang and colleagues are pushing forward thinking about how recommendations can evolve when agents—not passive items—are central.
Institutional Backing: All deeply connected to USTC’s Data Science Lab, underscoring strong Chinese academic investment in AI-driven recommender innovation.
Collaborative Reach: Including scholars like Tat‑Seng Chua broadens the research to include international perspectives in multimedia and retrieval systems.

In summary: This team is a multidisciplinary cohort—centered at USTC with international collaboration—who are pioneering the idea that LLM agents can become interactive, dynamic items in recommender systems. Their work positions them at the forefront of intelligent agent systems and could reshape how platforms personalize experiences in the future.

If you’re curious about any individual’s past publications, citation impact, or related work (e.g., on Agent4Rec), I’d be happy to dive deeper!

 

2. The state of recommendation science

Me:

How large is the emerging science, within the AI field, of recommendations?

ChatGPT:

The science of recommendations, particularly within the AI field, is not just emerging—it’s a massive, rapidly evolving subfield that is both academically rich and commercially dominant. That said, it’s also entering a new phase, driven by the rise of large language models (LLMs), interactive agents, and multimodal platforms.

Academic Scope

1. Established Field: Recommender Systems

Recommender systems (RecSys) are a well-developed domain within AI and machine learning, with roots in the 1990s.
Annual conferences like:
ACM RecSys (since 2007)
SIGIR, KDD, WSDM, and NeurIPS regularly feature recommendation research.
Thousands of papers annually on topics like:
Collaborative filtering
Matrix/tensor factorization
Deep learning for recommendation
Fairness, transparency, and explainability in RecSys
Reinforcement learning and bandit algorithms for personalization

As of 2024, over 60,000 papers in academic databases (e.g., Google Scholar) are tagged or indexed under “recommender systems”.

Current Trends & Expanding Frontiers

2. LLMs + Recommendation = New Subfield

A major shift is underway: integrating Large Language Models (LLMs) into recommendation systems.
These go beyond “you may also like” to:
Use natural language context and dialogue
Generate explanations, not just rankings
Support conversational recommendation
Coordinate agent ecosystems, e.g., multi-agent reasoning environments

Example paradigms:

RecMind – reasoning-aware recommendation agents
Rec4Agentverse – treating agents as recommendable entities
ChatRecs – integrating LLMs into live user conversations

3. Multimodal & Multitask Integration

Systems increasingly blend text, image, video, behavior, and user feedback.
LLMs allow richer personalization using complex user intent derived from unstructured data.

4. Industry Investment

FAANG companies (especially YouTube, Netflix, Amazon, TikTok, and Spotify) base their core user experience on AI-driven recommendation.
Billions of dollars are spent optimizing:
Ranking quality
Engagement prediction
Real-time inference
Federated and privacy-preserving personalization

Estimate of Scope

Academic papers (RecSys + AI): 60,000+ overall; thousands per year
Top conferences with recommender tracks: 20+ (ACM RecSys, SIGIR, KDD, NeurIPS, etc.)
Active researchers: tens of thousands globally
Emerging subfields: LLM-based Rec, conversational Rec, agent-based Rec, explainable Rec
Industry scale: $10B+ annually invested in personalization tech

In Summary

The science of recommendation is:

Not new, but rapidly evolving
Core to AI applications, especially with LLMs and agents
One of the most commercially and socially impactful domains in modern AI
Growing into new subfields (e.g. Rec4Agentverse, LLM Rec, multi-agent Rec)

Let me know if you’d like a visual map of how the recommendation science ecosystem is structured—or who the top authors and institutions are in this space.

3. Who is working on personal AI

Me: And who, if anyone, is working to provide customers with their own personal AI tools: ones that work for them, to empower their own agency in the marketplace? By which I mean truly personal AI, and not the kind personalized by some external LLM (or similar) system.

ChatGPT:

That’s an essential and forward-looking question—especially as most “AI for you” today really means “AI about you, for someone else.”

While Big Tech firms dominate personalized AI, a growing ecosystem is focused on truly personal AI—that is, AI you own, that works for you (not advertisers or platforms), and that enhances your agency in the marketplace and your life.

Here are the most notable people, projects, and initiatives advancing truly personal AI today:

1. Kwaai and pAI-OS (South Africa / Global)

Kwaai is building an open-source, user-sovereign personal AI OS called pAI-OS.
Designed to run locally or at user discretion, with full data ownership, modular agent systems, and plugin capability.
Aligns with IEEE P7012 (“MyTerms”) privacy standards (see below).
Actively exploring intentcasting, customer signaling, and VRM (Vendor Relationship Management).
Site: https://kwaai.ai

2. Indie AI and Personal Agent projects

Several smaller open-source initiatives are trying to build personal agent toolkits that you can host and configure:

Examples:

AutoGen Studio (Microsoft) – allows building agent workflows, some of which can be user-defined. But it’s not yet fully personal or private.
Langroid – open-source framework for local agent-based LLM systems. Designed for extensibility, can be made personal with effort.
OpenDevin, Superagent, Auto-GPT – can be self-hosted, though most are still tool-centric rather than user-centric.
Personal.ai – A commercial startup that says it is personal, but uses centralized storage and fine-tuning, so actual user control is limited.

 Reality check: Most of these still rely on external LLMs (like OpenAI or Anthropic), even if they claim to be “your AI”.

3. Solid Project (by Tim Berners-Lee)

While not AI per se, Solid is a decentralized identity and data storage architecture that allows individuals to store their own data in personal data pods.
AI tools can be layered on top to access only the data you choose to share.
Foundation for personal AI assistants that can work with your data, not on others’ clouds.
https://solidproject.org

4. IEEE P7012 – MyTerms

A standard being developed to flip the notice-and-consent regime—so individuals, not companies, dictate the terms of data use.
Backed by Customer Commons and designed to support intentcasting and personal marketplace signaling.
This enables a personal AI to serve as your agent, enforcing your terms and engaging the marketplace on your behalf.

5. Project VRM (Vendor Relationship Management)

Initiated by Doc Searls to empower customers with tools to manage vendors, not just the reverse.
Envisions personal tools (including AI) to:
Signal intent
Manage service contracts
Retain records of offers, purchases, warranties
Avoid manipulation, surveillance, and lock-in

Project VRM is not a software project per se, but a framework guiding developers to build personal tools that support individual autonomy.

6. Open-source LLM stacks for local use

Several local LLM projects can be combined with personal data layers to build truly personal AI:

LM Studio – runs local models on desktop
PrivateGPT – local inference + local documents
Ollama – macOS-native interface for fast-running models like LLaMA, Mistral
LangChain / LlamaIndex – frameworks for building RAG systems with local documents

These aren’t personal AI systems yet—but they are key ingredients.
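To show how these ingredients snap together, here is a minimal retrieval-plus-local-LLM sketch against Ollama's documented local REST API. It assumes Ollama is running on its default port with a model already pulled (e.g. via `ollama pull llama3`) and a notes/ folder of .txt files; the naive keyword scoring below stands in for what LlamaIndex or LangChain would do with embeddings and a vector store.

import json
import pathlib
import urllib.request

def retrieve(question: str, folder: str = "notes", k: int = 3) -> list[str]:
    # Naive keyword scoring over local text files (illustrative only).
    terms = set(question.lower().split())
    scored = []
    for path in pathlib.Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        scored.append((sum(text.lower().count(t) for t in terms), text[:1000]))
    return [text for score, text in sorted(scored, reverse=True)[:k] if score > 0]

def ask(question: str) -> str:
    # Ollama's local generate endpoint; model name is an assumption.
    context = "\n---\n".join(retrieve(question))
    payload = {"model": "llama3",
               "prompt": f"Answer from this context:\n{context}\n\nQuestion: {question}",
               "stream": False}
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("What did I write about project X last month?"))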

Summary

Kwaai / pAI-OS: open, personal AI OS and agent platform. Status: early but active.
Solid: personal data control layer. Status: stable; in use.
IEEE P7012: standards for user-enforced data terms. Status: in development.
Project VRM: framework for tools that empower buyers. Status: thought leadership; toolkits emerging.
Local AI frameworks: tech foundations for personal AI. Status: available, DIY-friendly.

The Gap

Despite this promising landscape, no major platform today delivers a fully integrated, easy-to-use personal AI that you own, configure, and direct—yet. But:

A personal AI revolution is underway, led by privacy advocates, open-source communities, and a few visionary developers—not Big Tech.

Would you like help putting together a DIY stack for your own personal AI toolkit based on this landscape?

Me: Can you name some of the people in the personal AI revolution, and tell me what that DIY stack would contain?

ChatGPT:

Absolutely. Here’s a breakdown of key people involved in the personal AI movement, followed by a practical DIY stack you can start building today.

People Leading or Advancing the Personal AI Revolution

These individuals aren’t all working on the same projects, but they’re aligned in trying to build AI that’s owned, run, and controlled by the individual—not centralized platforms.

 Tim Berners-Lee

Project: Solid (stewarded by ODI, commercially implemented by companies including TBL’s Inrupt)
Focus: Decentralized web and personal data ownership via data pods
Impact: Infrastructure for privacy-respecting personal AI agents
https://solidproject.org

 Doc Searls

Project: Project VRM, Customer Commons, IEEE P7012
Focus: Customer empowerment, intentcasting, reversing surveillance capitalism
Impact: Philosophy and standards for customer-directed personal AI
https://projectvrm.org

 Tawanda Kembo & Kwaai team

Project: pAI-OS (personal AI operating system)
Focus: Agent-centric open-source operating system for personal AI
Impact: One of the few efforts aiming to create a complete personal AI stack
https://kwaai.ai

 Simon Willison

Project: Datasette, open tools for data management
Focus: Personal data control, lightweight databases, LLM + local tooling
Impact: Key figure in the open data + local AI tooling space
https://simonwillison.net

 Brian Roemmele

Project: Multiplex (personal voice AI)
Focus: Voice-first personal assistants you run locally
Impact: Vocal advocate for true personal AI, often referencing “software you own”

A DIY Personal AI Stack (2025 Edition)

This stack assumes local-first, modular, privacy-respecting tools. You can mix and match based on your needs.

Core Components

1. OS Layer: Linux, pAI-OS (in dev). Linux gives flexibility; pAI-OS aims to be a full personal AI OS.
2. Local LLM Inference: Ollama, LM Studio, PrivateGPT, LocalAI. Run models like LLaMA 3, Mistral, Phi-3 entirely offline.
3. Vector Database: Chroma, Weaviate, Milvus, or SQLite with FTS5. Store and retrieve embedded text data.
4. Personal Indexing: LlamaIndex, LangChain, AutoGen, GPTPilot. Feed personal documents and data to your LLM.
5. Personal Data Storage: Solid POD, Datasette, Obsidian. Your life’s data, structured and queryable.
6. Personal Agent Framework: Superagent, CrewAI, Langroid, AutoGen. Design agents that act on your behalf, modular and scriptable.
7. Front-end Interface: Gradio, Streamlit, Tauri app, Electron. Local GUI/UX to interact with your agent.
8. Voice Interface (optional): OpenVoice, Vosk, Whisper.cpp, Coqui TTS. Build voice-in/voice-out agent on your machine.
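Of the layer-3 options above, SQLite's FTS5 is the lightest to try, since it ships inside Python's standard sqlite3 module on most builds. A minimal sketch, with table name and sample rows invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(title, body)")
conn.executemany("INSERT INTO notes VALUES (?, ?)", [
    ("June goals", "Finish the personal data folder structure."),
    ("Backups", "Photos last backed up to the NAS in May."),
])
# Full-text MATCH query over both indexed columns.
for (title,) in conn.execute("SELECT title FROM notes WHERE notes MATCH 'backed'"):
    print(title)  # -> Backups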

Optional Add-ons

Offline search: Chat over your files, PDFs, photos, and emails with LlamaIndex + PrivateGPT
Calendar/notes: Sync with Obsidian or local markdown-based knowledge base
Browser agent: Auto-control browser via Puppeteer/Playwright for marketplace navigation

What’s Still Missing?

True personal OS integration (like what pAI-OS is trying to do)
Seamless data portability and contract enforcement (IEEE P7012 is working on it)
Trusted personal AI app ecosystem (some apps, like Beeper or Obsidian, are getting close)

Would you like a tailored recommendation for building a personal AI stack based on your technical skill level or use cases (e.g., journaling, research, negotiation, shopping, data querying)?

4. How can I get started?

Me: Yes. My technical skill level is modest at best. I can write HTML and do very basic stuff in the command line. I have an extra laptop I can devote to solo work, and Ethernet to a 2.1GB/s symmetrical fiber connection to the Internet with no data limits.

ChatGPT:

Great — with your setup (modest technical skills, spare laptop, blazing-fast fiber), you’re in an excellent position to begin building a practical personal AI system without diving too deep into complex dev work. Here’s a recommended stack tailored to your current skills and goals:

YOUR PERSONAL AI STARTER STACK (2025)

1. Platform Base

Use a Linux distribution designed for ease of use and AI tinkering:

Recommended: Pop!_OS or Ubuntu 24.04
Easy GUI, good hardware compatibility, lots of online support
Optional: Use your laptop in “air-gapped” mode for privacy and full local control

2. Run a Local LLM with LM Studio

LM Studio (macOS/Windows/Linux) lets you run LLaMA 3, Mistral, Phi-3, and others without coding.
Built-in UI — chat with models locally, no internet required after download.
You can download 7B or 13B models based on your hardware.
Best for: private journaling, summarizing files, answering questions.

 Why it’s ideal for you:

GUI-based
Just click and download models
No API keys or command-line needed once installed

3. Chat with Your Files (PrivateGPT / LlamaIndex)

PrivateGPT or GPTPilot can let you ask questions across PDFs, emails, and notes stored on your machine.
Start simple: drop files into a folder, let the AI index them.
Optionally install via Docker (one command), or use their prebuilt apps.

Use case:

“Summarize this research PDF”
“What did I write about project X last month?”
“Find the email where someone mentioned a refund”

4. Knowledge Base + Journaling with Obsidian

Obsidian = Markdown-based knowledge base.
Everything you write stays local.
You can point LM Studio to your notes to query your own memory.

Easy way to:

Keep private journals
Index your life
Ask your AI things like “What are my goals for June?”

5. Personal Data Management with Datasette

Datasette by Simon Willison lets you browse and query your own data with simple SQL and CSV files.
Feeds AI with structured personal info (purchases, inventory, plans, etc.)
You can install and launch it from the terminal:

pip install datasette
datasette serve your-data.db

Combine with LM Studio to get responses like:

“What subscriptions do I need to cancel?”
“When did I last back up my photos?”

6. Optional: Browser Helper Agent (Local, Simple)

Use AutoBrowser or a simpler Puppeteer script to automate browsing tasks (like checking prices, news, account activity).
Start with prebuilt scripts. No coding needed if you copy/paste working examples.

7. Privacy Boosters

Run Tailscale or WireGuard to access your AI remotely if needed.
Use LocalAI or Ollama if you later want to run everything without GUI (more advanced).

Putting It All Together

Chat privately with an AI: LM Studio
Summarize your own notes or PDFs: PrivateGPT or GPTPilot
Store and reflect on your life data: Obsidian, Datasette
Manage subscriptions and tasks: Datasette + Chat
Ask about your files or history: LlamaIndex or GPT over Obsidian notes
Use voice later (optional): Whisper.cpp, OpenVoice (easy tools)

Next Steps (in plain order)

1. Install Pop!_OS or Ubuntu on your spare laptop
2. Install LM Studio and download a 7B model like Mistral or LLaMA 3
3. Start using it to answer personal questions and summarize documents
4. Add Obsidian to start journaling and reflecting
5. Try PrivateGPT or Datasette to search your past or track your life

Would you like a checklist or an install script to get started with this stack? I can also help you set up a file structure to begin tracking and querying your own data.

5. Forms of personal data to manage

Me: The personal data I need to manage are health records (from 20-plus providers), financial records (fewer than health), travel records (including the kind kept by apps that watch my movements), subscriptions and other recurring payments, shows I have watched and on what channels or services (data gathered by my Samsung and Roku TVs and presumably shared with others but to which I don’t yet have access), continuing work and work history with various clients and employers, contacts, archival and current calendars, real estate, personal items bought or acquired from various sources and where they are kept or stored, and business dealings. What would be the file structure for all that, and how would I manage it? Note that I am not talking about shopping yet; that’s a separate concern.

ChatGPT:

Thanks for the detailed breakdown — you’re describing the foundation of a true personal data architecture: structured, queryable, and under your control. Let’s build you a file structure and management strategy that matches your needs without overcomplicating the tech.

Recommended Top-Level Folder Structure

This folder structure is designed to be future-proof, machine-readable, and human-usable. Use plain folders, CSV/JSON/Markdown files, and optional SQLite databases to store structured data. You can later layer tools (like LLMs or Datasette) on top.

/PersonalAI_Data

├── Health/
│ ├── Providers/
│ ├── Visits/
│ ├── Prescriptions/
│ ├── Labs_Imaging/
│ └── Insurance_Claims/

├── Finance/
│ ├── Bank_Accounts/
│ ├── Credit_Cards/
│ ├── Investments/
│ └── Taxes/

├── Travel/
│ ├── Trips/
│ ├── Locations_History/
│ └── Maps_GPX_KML/

├── Subscriptions/
│ ├── Streaming/
│ ├── News_Publications/
│ ├── Software_Tools/
│ └── Utilities_Bills/

├── Media_Viewed/
│ ├── TV_Shows/
│ ├── Movies/
│ ├── Channels_Services/
│ └── Watch_History.csv

├── Work/
│ ├── Employers/
│ ├── Clients/
│ ├── Projects/
│ └── Timesheets_Invoices/

├── Contacts/
│ └── contacts.csv

├── Calendar/
│ ├── Archive_ICS/
│ ├── Current_ICS/
│ └── Events_Tags.csv

├── Real_Estate/
│ ├── Properties_Owned/
│ ├── Rentals/
│ └── Maintenance_Records/

├── Personal_Items/
│ ├── Furniture/
│ ├── Electronics/
│ ├── Art_Collectibles/
│ └── Inventory.csv

└── Business/
├── Deals/
├── Agreements/
├── Partners/
└── Notes/
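If you'd rather not click out dozens of folders by hand, a short standard-library script can materialize this skeleton. The sketch below covers a subset of the tree above; extend the dict with the remaining top-level areas:

from pathlib import Path

TREE = {
    "Health": ["Providers", "Visits", "Prescriptions", "Labs_Imaging", "Insurance_Claims"],
    "Finance": ["Bank_Accounts", "Credit_Cards", "Investments", "Taxes"],
    "Travel": ["Trips", "Locations_History", "Maps_GPX_KML"],
    "Subscriptions": ["Streaming", "News_Publications", "Software_Tools", "Utilities_Bills"],
}

root = Path("PersonalAI_Data")
for area, subfolders in TREE.items():
    for sub in subfolders:
        # parents=True builds the whole path; exist_ok makes reruns safe.
        (root / area / sub).mkdir(parents=True, exist_ok=True)
print("created", sum(len(v) for v in TREE.values()), "folders under", root)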
Strategy for Managing It All

1. Use Consistent File Types

Structured entries: .csv, .json, .sqlite
Notes or logs: .md (markdown)
Documents: .pdf, .docx, .txt
Images/attachments: store in an attachments/ subfolder per topic

2. Naming Conventions (Machine + Human Friendly)

Use ISO-like formats to make things sortable and parseable:

2024-06-16_labresults_providerX.csv
2023_Q4_taxes_federal.pdf
clientA_2022-11_contract.pdf
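A tiny helper can keep these names consistent; the function name and its parts are our own illustration of the convention, not a required tool:

from datetime import date

def data_filename(topic: str, source: str, ext: str) -> str:
    # ISO date first so directory listings sort chronologically.
    return f"{date.today().isoformat()}_{topic}_{source}.{ext}"

print(data_filename("labresults", "providerX", "csv"))
# e.g. 2025-06-16_labresults_providerX.csv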
3. Data Capture Tools

You don’t need to build it all manually:

Use Obsidian for ongoing notes (e.g. travel journal, work logs)

Use Paperless-ngx (optional) to OCR and tag your scanned PDFs (health, finance)

Use GPS History Export from Google and Apple for travel/locations

Use email filters or Zapier to auto-download bills, statements, confirmations

4. Querying and Reviewing

Add one or more of these later:

Datasette → browse and search CSVs/SQLite locally

LM Studio + PrivateGPT → ask, “When did I last visit Dr. Chen?” or “What devices are in storage?”

LlamaIndex → index your file tree and let AI search/summarize it

Summary of Benefits

Total local control: you decide access, storage, retention
Future AI-readiness: structured enough for agents to use
Easy to back up: plain files, portable databases
Modular growth: start simple, layer in AI when ready

Would you like me to generate a starter template with folder structure and some CSV/Markdown file formats filled in with mock data for each domain? That way you can load it directly into your system and start customizing.

Me again:

That’s it for now.

What should my next request be, or what other direction would you like me (or us) to take this? Or, what are the learnings we have so far in this field that barely exists? Oh, and excuse whatever inaccuracies ChatGPT may have floated above. Or give me fixes. Thanks.

*The links are mine and based on searches for those names. I might have some of them wrong, however, since searches for those names bring up many scholars with the same names. And, while each name had a link, they all went to an author archive page that was (with one exception) empty of other contributions to the publication.

FIDO Alliance

Meta Newsroom: Introducing Passkeys on Facebook for an Easier Sign-In

We’re introducing passkeys on Facebook for mobile devices, offering another tool to safeguard your privacy and security. Passkeys are a new way to verify your identity and log in to […]

We’re introducing passkeys on Facebook for mobile devices, offering another tool to safeguard your privacy and security. Passkeys are a new way to verify your identity and log in to your account that’s easier and more secure than traditional passwords. 

Passkeys will soon be available on iOS and Android mobile devices for Facebook, and we will begin rolling out passkeys to Messenger in the coming months. The same passkey you set up for Facebook will also work on Messenger once this capability launches. 


FIDO Alliance Releases Authenticate 2025 Agenda

Carlsbad, Calif., June 18, 2025 – The FIDO Alliance has announced the agenda for Authenticate 2025, the only industry conference dedicated to digital identity and authentication with a focus on phishing-resistant […]
FIDO Alliance’s flagship event features an expanded agenda to deliver practical strategies for implementing usable, phishing-resistant security across the entire account lifecycle.  Super Early Bird discounts are available through June 20.

Carlsbad, Calif., June 18, 2025 – The FIDO Alliance has announced the agenda for Authenticate 2025, the only industry conference dedicated to digital identity and authentication with a focus on phishing-resistant sign-ins with passkeys. The event will take place October 13–15, 2025 at the Omni La Costa Resort and Spa in Carlsbad, Calif., with options for virtual participation available.

The focus of the program for the Authenticate 2025 conference is achieving phishing-resistant authentication with passkeys and the adjacent considerations required to achieve end-to-end account security with usability in mind.

Visit https://authenticatecon.com/event/authenticate-2025/ to view the full session guide and register ahead of the June 20th Super Early Bird deadline.

Authenticate is built for CISOs, security strategists, enterprise architects, product leaders, UX professionals, and anyone engaged in the identity lifecycle from strategy to implementation. Attendees will gain practical knowledge around deploying phishing-resistant authentication at scale, designing secure user experiences, understanding complementary technologies, and navigating policy and compliance requirements. 

This year’s event will showcase keynotes and sessions led by top executives and industry leaders at the forefront of the passwordless movement. The agenda for 2025 has been revamped to include: longer track sessions for more in-depth presentations, an increased focus on masterclasses for actionable synced and device-bound passkey implementation best practices, and a new solutions theater track to showcase live demonstrations of the latest identity and authentication solutions. This year’s agenda also features more opportunities for networking and exploration of the interactive expo hall to foster collaboration and idea sharing.

With four dynamic stages across four curated content tracks, Authenticate 2025 will offer sessions on:

Account onboarding
Remote identity verification and proofing
Authorization
Biometrics
Session security
Device onboarding and authentication
Cybersecurity/fraud threats and detection
Digital identity/digital wallets
The future of digital identity and authentication

Sponsorship Opportunities Available
Authenticate 2025 offers unique sponsorship opportunities for companies to showcase solutions to an engaged, decision-making audience. With limited availability remaining, prospective sponsors can learn more and apply at https://authenticatecon.com/sponsors/ or contact authenticate@fidoalliance.org

About Authenticate 

Authenticate is the premier conference dedicated to advancing digital identity and authentication, with an emphasis on phishing-resistant sign-ins using passkeys. Hosted by the FIDO Alliance, this event brings together CISOs, security strategists, product managers and identity architects to explore best practices, technical insights and real-world case studies in modern authentication. The 2025 conference will take place from October 13-15 at the Omni La Costa Resort & Spa in Carlsbad, California, and will be co-located with the FIDO Alliance member plenary sessions, which run through October 16. 

Authenticate is hosted by the FIDO Alliance, the cross-industry consortium providing standards, certifications and market adoption programs to accelerate utilization of simpler, stronger authentication with innovations, like passkeys. Signature sponsors for Authenticate 2025 are Google, Microsoft, Visa, and Yubico.

To learn more and register, visit https://authenticatecon.com/event/authenticate-2025/, and follow @AuthenticateCon on X. Register now and get the super early bird discount through June 20, 2025.

Authenticate Contact
authenticate@fidoalliance.org

PR Contact
press@fidoalliance.org

Tuesday, 17. June 2025

DIF Blog

Progress Report from the Creator Assertions WG

v1.1 specification integrates the novel Identity Aggregator mechanism

v1.1 of the 3 specifications includes one powerful new moving part

As CAWG activity picks up steam at DIF, there are a few details the rest of DIF and the broader community of decentralizers might want to be tracking:

The specification includes a powerful new indirection called an Identity Aggregator, which designates an external authority to translate a long-lived identifier embedded in signed credentials (at time of publication) to one or more identifiers with local significance anywhere an asset is used (at time of republication or consumption).

Industry-specific identifier schemes are being researched by a distinct task force within the group, for prototyping and getting adoption in media verticals.

Registering/organizing external metadata standards and DID interop are ongoing discussions.

Wait, what Working Group is this?

If you're following from a distance, you have a vague sense that CAWG is a DIF working group doing something-something C2PA. If that distance is a long distance, you might know C2PA is a big-name, media-authenticity initiative with many mega-corporations signed on. The reality is actually more decentralized than meets the eye: CAWG is specifying open-world extension points that use Verifiable Credentials to let all kinds of claims and all kinds of actors embed metadata and rights declarations in C2PA documents, not just the big boys.

As the name would imply, "creators" is a capacious term which includes influencers, independents, rights-holders unions, creative agencies, freelancers, and even anonymous social media microcelebrities. The extension points and interoperability mechanisms this group is working on bring verifiability at various scales at once, and to various kinds of ecosystems and markets. The "Assertions" that these creators insert into the signed C2PA manifests embedded in professional media assets are the open-world extension point that lets C2PA manifests contain arbitrary metadata (treated in a separate specification being iterated by the group), arbitrary trust signals, and arbitrary attached credentials.

Enter the Aggregator

The Identity Claims Aggregator (often referred to simply as the "aggregator" in the group) names a piece of software (which can be internal or external to an authoring tool) that tracks multiple identifiers so that verifiable credentials issued against them can be inserted meaningfully. It also witnesses proofs of (and later attests to) control of many kinds of external identifiers, and generally organizes the chaos of the world's many overlapping identifier schemes and attestation formats. To the outside observer, this might seem a very complicated translation mechanism, but to the decentralized identity veteran, it is a familiar necessity. Every verifiable credential scheme eventually needs this kind of translator/aggregator role if it is to be an open system, or even if it is "only" going to federate across the tech stacks of multiple existing systems.

Taken from section 8.1.1.2 of the v1.1 Identity Assertion specification
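As a purely illustrative sketch of this role (not the CAWG data model or API, which the specification itself defines; all names and identifiers below are hypothetical), an aggregator can be thought of as a witness-then-translate mapping:

```python
# Illustrative only: an aggregator witnesses proofs of control over external
# identifiers, then translates a long-lived identifier (embedded at publication
# time) into identifiers with local significance at consumption time.
from dataclasses import dataclass, field

@dataclass
class IdentityClaimsAggregator:
    # long-lived identifier -> {context: locally significant identifier}
    mappings: dict = field(default_factory=dict)

    def witness(self, long_lived: str, context: str, local_id: str) -> None:
        """Record that the holder proved control of local_id in this context."""
        self.mappings.setdefault(long_lived, {})[context] = local_id

    def resolve(self, long_lived: str, context: str):
        """At republication/consumption time, translate to a local identifier."""
        return self.mappings.get(long_lived, {}).get(context)

agg = IdentityClaimsAggregator()
agg.witness("did:example:creator123", "socialsite.example", "@creator")
agg.witness("did:example:creator123", "stock-agency.example", "contributor-991")
print(agg.resolve("did:example:creator123", "socialsite.example"))  # @creator
```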

The conversation so far in the working group has been working its way from the general to the concrete: do aggregators only aggregate identifiers and information sources known at the time of authoring/inserting, or can an aggregator add new attestations at a later time? Must aggregators limit themselves to public identifiers, or can they use internal/aggregator-specific ones? Can an aggregator host a live API for post-facto information to be passed out-of-band, like additional or updated credentials? And how tightly, if at all, should this group specify such an API? These are the high-level questions being debated on the back-burner of CAWG meetings this summer.

The Aggregator-Indirection Question

Zooming in a little more, there are further questions being tackled. Aggregators model a great happy-path solution for embedding declarations and strong identifications into each asset, but what about the many unhappy paths? For example, what if a creator's assertions are scattered across many identifiers and credential types, and embedding those assertions requires all kinds of translations and metadata to be legible? What if identifiers change, or new assertions become relevant after publishing? Can "placeholder" or indirection identifiers be used to query data sources that continue receiving assertions after publication? Can an indirection or service be used to display more or fewer assertions depending on audience, or on consent policies that change over time? Can assertions and identities be "self-custodied"? What is the "account recovery" story for these increasingly complex use-cases?

While adding the aggregator was the biggest change in v1.1, it will be a long while until the exact scope and limits of this role are decided. It may well be that some advanced features get postponed to a later stage in the roadmap, because of the sheer complexity they entail, but it will definitely be an ongoing topic simmering in the background whenever smaller debates come up.

Separate Work Stream: Industry-Specific Identifiers

In parallel, a subgroup is meeting separately to research and sanity-check the integration of major media-industry identifier schemes and metadata schemes, looking for interop corner-cases and relevant prior art.

Interested parties are encouraged to pop into the subgroup's github issues and meetings if they are working on (or just curious about, or experienced with) industry associations and media archiving best practices. The usual IP caveats apply: if joining a live call as a non-DIF member or commenting on github issues, refrain from going into concrete detail on anything "patentable" like implementation details or solutions.

Other Big Questions between v1.1 and v2.0

These advanced features of the aggregator aren’t the only big questions that we can expect to simmer and percolate across the next few "minor versions". Additionally, the interop issues around existing metadata standards (not just major W3C standards, but real-world ones from industry and library sciences) are potentially inexhaustible, as the group's Metadata specification gives scaffolding for inserting any structured metadata into assertions.

A slightly less vast but still very open-ended interoperability question is which DID methods to recommend or even require of all implementers, and how to manage or triage the remainder of the list of possible current (and future!) DID methods. Intersecting with the ongoing work of the DID Method Standardization Working Group, and older efforts like DID Traits to define equivalence or translatability between DID methods with similar architectures, semantics and guarantees, there is something of a simmering backlog-debate about which forms of DID make how much sense for the CAWG use-cases.

Of course, the evaluation of DID methods for these tiered-accreditation and decentralized reputation use-cases necessarily includes more than just technical analysis; legal and business readiness factor in as well, including competitiveness and market health/structure considerations to keep media authenticity from being a perk in closed ecosystems. Luckily, the new co-chair Scott Perry brings much experience and open lines of dialogue with multiple working groups at the Trust-over-IP Foundation which work on exactly these aspects of DID technology and business processes. In particular, agentic identity is a topic that ToIP generally, and Scott specifically, are bringing into the scope of the WG, so keep an eye out for issues and PRs along those lines in the coming months as well.


FIDO Alliance

MSN: Google Pushes 2 Billion Gmail Users to Adopt Passkeys Over Passwords


Google is making its biggest security push yet. The company strongly urges its 2 billion Gmail users to switch passwords to passkeys. While not mandating immediate changes, Google has made passkeys the default authentication method. They’ve also set a hard deadline for third-party apps. The FBI reported cyber attacks jumped 33% last year. Those attacks cost over $16 billion in damages. Google’s response shows how seriously Big Tech is taking the password problem affecting every internet user.

Monday, 16. June 2025

We Are Open co-op

Gaps in AI Literacy provision

A values-based list of barriers faced by 14–19 year olds in the UK

In this post, we're continuing to share outputs from a project we're working on with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. We've already published:

What does AI Literacy look like for young people aged 14–19?
What makes for a good AI Literacy framework?
Core Values for AI Literacy
AI Literacy or AI Literacies?

Image CC BY-ND Visual Thinkery for WAO

This project has involved both desk research and input from experts in the form of a survey, individual interviews, and a roundtable which we hosted a couple of weeks ago. One area we wanted to ensure we understood was the gaps in existing provision around AI Literacies for young people.

The gaps we identified focus on the 14–19 age range in the UK and span a long list of areas and themes. We have organised and summarised them around the core values identified in a previous post.

The gaps reflect a pattern seen across education, media, and wider society: provision is uneven. It is often shaped by short-term thinking and competing interests. Overall, it is limited by a lack of clear leadership or coordination.

Unfortunately, many interventions around AI Literacies are focused on technical skills or compliance. These do not connect with young people’s real interests or lived experiences, nor do they address the deeper ethical, social, and cultural questions raised by AI.

As a result of this, many learners — especially those already facing disadvantage — are left with fragmented support and few opportunities to develop genuine agency or critical judgement.

Human Agency and Informed Participation

Lack of systemic, rights-based frameworks: There is little structured provision to help young people shape, question, or influence AI, with most education focused on adapting to technology rather than encouraging agency or clarifying institutional responsibilities.
Dominance of industry narratives: Commercial interests and tech industry funding often drive the agenda, narrowing the conversation and limiting opportunities for young people to challenge prevailing narratives or understand the political dimensions of AI.
Insufficient progression and curriculum integration: There is no standardised, dynamic curriculum or progression framework for AI Literacies, especially for post-16 learners, and limited integration across subjects beyond computing or digital studies.
Teacher confidence and support gaps: Many teachers lack confidence, training, and adaptable resources to support the development of AI Literacies, resulting in inconsistent, sometimes contradictory, messaging and limited support for critical engagement.
Disconnect between knowledge and action: Awareness of AI bias, manipulation, or power structures does not reliably translate into agency or behavioural change, with motivation and broader social context often overlooked.

Equity, Diversity, and Inclusion

Persistent digital and social divides: Access to tools and resources to develop AI Literacies is highly unequal, shaped by school policies, family resources, and broader digital divides, with privileged students often able to bypass restrictions.
Lack of cultural and global adaptation: Most resources are developed in the global north and do not reflect the needs or realities of diverse cultural, socioeconomic, or linguistic backgrounds, including those from the global south.
Barriers for marginalised groups: AI tools and resources can disadvantage non-native English speakers, students with disabilities, and those with limited digital access, reinforcing existing inequalities.
Neglect of visual and multimodal literacy: There is insufficient focus on images, deepfakes, and multimodal content, despite their growing importance for misinformation and manipulation.
Resource design and authenticity: Overly polished, anthropomorphised, or inaccessible resources can alienate young people; there is a need to co-design, with young people from a range of backgrounds, authentic, relatable, and context-driven materials that reflect lived experiences.

Creativity, Participation, and Lifelong Learning

Short-termism and lack of sustainability: Funding and interventions are often short-lived, with little focus on long-term, joined-up strategies or progression frameworks.
Imbalance between creativity and consumption: Most young people are consumers, not creators, of AI content; there is insufficient emphasis on participatory, creative, and hands-on engagement with AI.
Restrictive and risk-averse policies: Overly strict barriers on access to AI tools in schools can limit meaningful learning opportunities and create anxiety or underground use.
Missed opportunities for experiential and peer learning: There is underuse of hands-on, constructionist, and peer-led approaches, which are effective for this age group and for a rapidly evolving field like AI.
Failure to address entrenched digital habits: Many interventions come too late to shift established digital habits; young people may have high digital skill but lack guidance on purposeful, critical, or participatory use.

Critical Thinking and Responsible Use

Overemphasis on technical skills: Current provision is skewed towards prompt engineering and functional tool use, with insufficient attention to understanding different kinds of AI, ethical reasoning, systemic impacts, and critical engagement.
Insufficient ethical, environmental, and societal focus: Real-world harms, environmental costs, and the broader impact of AI are rarely discussed, leaving gaps in understanding responsible use.
Media, information, algorithmic, and data literacy gaps: Young people struggle to understand how data shapes AI outputs, how to assess real versus fake (including deepfakes), and how to evaluate, challenge, or seek redress for algorithmic decisions or AI-generated content.
Anthropomorphism and mental models: Many young people, particularly younger teens, misattribute human-like qualities to AI, affecting their critical judgement and ability to interrogate outputs.
Lack of robust assessment and evidence: There is a shortage of baseline data on AI literacy levels and limited frameworks for evaluating the effectiveness and impact of interventions, especially in terms of behavioural change.

Upholding Human Rights and Wellbeing

Disconnection from youth interests and lived experience: AI Literacy resources often fail to connect to young people's real interests (creativity, sports, mental health), focusing instead on employability or compliance.
Socio-emotional and privacy risks: Young people may use AI for companionship or advice, sharing sensitive information without understanding privacy or data risks; frameworks rarely address identity, trust, or changing markers of adulthood.
Confusion and inconsistency in terminology: There is no consensus on what "AI literacy" means, and inconsistent definitions can intimidate learners or place excessive responsibility on individuals.
Unclear responsibility and leadership: It remains unclear who should lead on the development of AI Literacies. Schools, parents, government, industry, and third sector bodies all have a role to play, but the current situation leads to fragmented provision and a lack of accountability.
Neglect of digital relationships and boundaries: The role of AI as an "invisible third party" in relationships, and the shifting boundaries of privacy and identity, are rarely addressed in current resources.

Next up

We’re still finalising our framework for AI Literacies and will be sharing it soon. Meanwhile, you can follow our work on this topic so far at https://ailiteracy.fyi.

Please do get in touch if you have projects and programmes that can benefit from our experience and expertise in education and technology!

Acknowledgements

The following people have willingly given up their time to provide invaluable input to this project:

Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox​, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager


DIDAS

GC25 – Global Digital Collaboration – July 1–2, 2025


At Global Digital Collaboration on July 2nd, a full day of sessions co-curated by DIDAS and partners will address how privacy-enhancing technologies (PETs) and trustworthy governance models can become core enablers of digital trust across sectors and jurisdictions.

The day begins with a high-level update session featuring SPRIND, Google, EPFL, Johannes Kepler University, and others. It will explore the current maturity, post-quantum readiness, and practical deployment of PETs such as BBS+, SD-JWT, and ZK-mDoc. The session aims to establish shared terminology and frameworks for unlinkability and selective disclosure across global credential ecosystems.

In parallel, the e-democracy workshop series (Part 1 & 2), led by the Center for Digital Trust (C4DT) at EPFL, DIDAS, the Human Colossus Foundation, and other civil society actors, will explore how digital services like e-ID, e-collecting, and e-voting must be redesigned for resilience to protect public trust, prevent fraud, and ensure accountability. The sessions aim to define foundational principles for a trustworthy digital democracy, co-created by experts in law, governance, cryptography, and policy.

Running alongside, a collaborative mapping session convened by Johannes Kepler University, Orange, Ethereum researchers, and DIDAS invites EUDI and other global ecosystem and pilot teams to identify and classify global use cases where PETs, particularly zero-knowledge proofs, are essential. The session will help align performance and privacy requirements across deployment contexts, feeding into implementation roadmaps and standards discussions.

In the afternoon, a deep dive on unlinkability will be led by experts from Google, SPRIND, EPFL and the Linux Foundation’s decentralized trust initiatives. This session will focus on the risks of issuer–relying party collusion in credential ecosystems, and why unlinkability is non-negotiable for use cases like transport and location-sensitive infrastructure.

Later, a technically grounded session titled “ZKProofs: From Crypto Potential to Regulatory Acceptance” will bring together Google, ETSI, and NIST to map out viable ZKP schemes, their mobile-readiness, and interoperability features. The goal is to bridge the gap between cryptographic innovation and institutional trust, and to align stakeholders around a roadmap for responsible, cross-border adoption and acceptance.

The day concludes with a multi-stakeholder roundtable moderated by DIDAS, with invitees from the ITU, the OpenWallet Foundation, LF Decentralized Trust, the OECD, UNHCR, the Swiss Confederation, the EU Commission, and other country delegates and potential funding partners to explore long-term collaboration structures. This final session will address how to sustain PET development through ongoing working groups, interoperable governance, and shared funding models.


Public Sector & Multilateral Institutions

Swiss Confederation
European Commission
ITU (International Telecommunication Union)
OECD
UNHCR
SPRIND (Federal Agency for Disruptive Innovation, Germany)
EUDI Pilot Teams (various EU member states)


Research & Academia

EPFL – École Polytechnique Fédérale de Lausanne
C4DT – Center for Digital Trust (EPFL)
Johannes Kepler University Linz
Ethereum Research Community


Civil Society & Ecosystem Actors

DIDAS – Digital Identity and Data Sovereignty Association
Digital Society Association (Switzerland)
Human Colossus Foundation
Other invited civil society contributors

Private Sector & Standards Bodies

Google
Orange
Linux Foundation – Decentralized Trust Initiative
OpenWallet Foundation
ETSI – European Telecommunications Standards Institute
NIST – U.S. National Institute of Standards and Technology
LF Decentralized Trust

Core Themes

Privacy-enhancing technologies (PETs), ZKPs, unlinkability
Verifiable credentials, digital identity, selective disclosure
Trust infrastructure governance, interoperability, post-quantum security
E-democracy, civic trust, institutional resilience
Multi-stakeholder collaboration, sustainable funding, global alignment

This collaborative agenda reflects a global commitment to building privacy-preserving, interoperable, and inclusive digital ecosystems with shared responsibility across sectors.

Friday, 13. June 2025

FIDO Alliance

MSSP Alert: authID Integrates with Ping to Spread Passwordless Authentication


authID’s decision this month to integrate its biometric identity verification technology with Ping Identity’s PingOne DaVinci service is a necessary step at a time when humans continue to be the weakest security link for organizations and bad actors increasingly target passwords to gain access to corporate networks, according to Jeff Scheidel, vice president of operations for the Denver-based company.


Ars Technica: Coming to Apple OSes: A seamless, secure way to import and export passkeys


Apple OSes will soon transfer passkeys seamlessly and securely across platforms.

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn’t susceptible to credential phishing and other attacks targeting passwords.


FIDO Seminar: Authentication, Identity and the Road Ahead

Overview

The FIDO Alliance and host sponsor Thales recently held a one day seminar on authentication, identity and the road ahead.

Seminar sessions provided an exploration of the current state of authentication for workforce and consumer sign-ins – with a focus on FIDO and passkeys including adoption status and case studies. The seminar also featured discussions on other relevant topics for IAM professionals, such as the latest in attacks and threats, identity verification technology advances, and post-quantum cryptography. Attendees had the opportunity to engage directly with authentication and identity experts through open Q&A, networking and demos.

View the presentations below:

Thursday, 12. June 2025

FIDO Alliance

Passwordless Authentication and the Rise of Passkeys: Expert Insights Podcast with Andrew Shikiar


Andrew Shikiar, Executive Director and CEO of the FIDO Alliance, joins us to discuss the shift from passwords to passkeys and the role of FIDO in driving secure, passwordless authentication. He explores the challenges of adoption, the importance of identity verification, and how cross-platform interoperability is accelerating passkey use. The conversation also touches on the impact of generative AI on cybersecurity and what the future holds for passkeys in building long-term resilience.

About Expert Insights:

Expert Insights saves you time and hassle by rigorously analyzing cybersecurity solutions and cutting through the hype to deliver clear, actionable shortlists. We specialize in cybersecurity. So, our focus is sharper, our knowledge is deeper, and our insights are better. What’s more, our advice is completely impartial.

In a world saturated with information, we exist to arm experts with the insights they need to protect their organization. That is why over 1 million businesses have used us to inform their cybersecurity research.

Listen to the podcast.


We Are Open co-op

Exploring AI's Role in Education: A Balanced Perspective from Six Think Pieces

Collaborating to respond to UNESCO’s call for think pieces

A couple of months ago, after seeing a UNESCO call for contributions, Doug began wrangling a group of thinkers and educators to respond to the call in a collaborative and open way. Naturally, I got involved, and thus we thought this story would be a good one for the WAO blog :)

There were six of us on that stormy night…

Bryan Alexander – an internationally known futurist, researcher, and writer, with the popular blog and newsletter AI, academia, and the Future.
Helen Beetham – a researcher and consultant in digital education who has edited several standard texts including Rethinking Pedagogy for a Digital Age. Her articles on AI, education, and society can be found at imperfect offerings.
Doug Belshaw – co-founder of We Are Open Co-op, working at the intersection of systems thinking, digital literacies, and Open Recognition. Doug's writings can be accessed via his website.
Laura Hilliger – concept architect, open strategist, and co-founder of We Are Open Co-op. Her website contains links to her blog and newsletter.
Ian O'Byrne – Associate Professor of Literacy Education at the College of Charleston. He maintains an active presence through his website and weekly newsletter Digitally Literate.
Karen Louise Smith – Associate Professor in the Department of Communication, Popular Culture and Film at Brock University. She teaches courses related to social media, surveillance, and new media policy, and her writing can be accessed via her website.

The six of us met up online to chat about how we might collaborate to respond to UNESCO’s call, and decided that we would each write our own think pieces. We then met up regularly to chat about where we were, what ideas were floating around and get inspiration from one another. Once we had drafts, we each read each other's pieces offering comments and suggestions. 

Our finished pieces can be found at this linktree: https://linktr.ee/ai_future_education

After we submitted our think pieces to UNESCO, we decided to do a roundtable event hosted by Doug. Over 100 people signed up, with 50 more on the waitlist. Participants gathered to discuss the transformative potential of Artificial Intelligence (AI) in education. The event brought together educators, tech experts, and policymakers to dissect our six pieces on AI's future role in learning environments.

This blog post summarises our very nuanced discussion surrounding AI's role in education. If you’re interested in this type of thing, we’d recommend watching the full session.

1. Personalized Learning

One of the themes discussed was personalized learning facilitated by AI. While AI can functionally analyze student data to tailor curricula and pacing, we talked about the emotional and social dimensions of learning. The lack of empathy in AI systems hinders holistic education, which requires human interaction.

2. Equity and Access

We talked about the theory that AI could democratize access to quality education, but we highlighted significant challenges. Unequal access exacerbates existing disparities in educational opportunities, and it is not just infrastructure that is inequitable in our education system.

3. The Educator's Role

AI’s impact on educators’ roles was another theme in our conversation. While AI can handle routine tasks like grading, potentially reducing administrative burdens, there are concerns about over-reliance on technology displacing human interaction. We argued for understanding teachers as mentors, facilitators, guardians and creatives to try and make clear that “efficiency” is not a goal in education.

4. Privacy and Bias

We talked a lot about the ethical implications surrounding various aspects of AI. From data privacy concerns, algorithmic bias, and transparency in AI decision-making, we stressed the importance of ethical guidelines and accountability measures.

5. Collaboration

We talked a bit about the potential for collaboration between educators and AI. We tried to think about what a complementary relationship with AI might look like for education. While AI might enhance some of our teaching tools, it cannot be allowed to overshadow the irreplaceable human element.

6. The Future of Education

In conclusion, we know and spoke about the fact that any technology and its implementation in education must be approached with caution and balance. Our conversation underscored that AI is not a panacea but a technology that exists within a particular context, and that like any technology, it’s how we use it that matters.

Go deeper:

Watch the video recording of the full session. We also made an AI-generated chat summary. All of our think pieces are worth a close read. Jump in individually:

Bryan Alexander – Several futures for AI and education
Helen Beetham – The implications of 'artificial intelligence' for the right to education in equality and dignity
Doug Belshaw – Marching Backwards into the Future: AI's Role in the Future of Education
Laura Hilliger – It is not the tool, it is the artist who sparks the revolution: The Importance of Art Education with or without AI
Ian O'Byrne – Amplifying Human Cognition: Artificial Intelligence as Mirror and Magnifier
Karen Louise Smith – Building warm expert expertise to mitigate against data harms in AI-powered edtech

Many thanks to Bryan Mathers of Visual Thinkery, who provided the illustrations included in this post. To see all of those he drew based on the session, visit his website.

Wednesday, 11. June 2025

The Rubric

Exploring the SOL-ar System (did:sol, Part 2)

did:sol is the Solana DID method. Solana is an application-centric blockchain praised for its high throughput and composability. Today on the show we talk with Martin Riedel and Daniel Kelleher, co-editors and implementers of the did:sol specification.

References:
Civic: https://www.civic.com/ (x: @civickey)
Cryptid: https://github.com/identity-com/cryptid
DID Directory: https://diddirectory.com/
did:sol spec: https://g.identity.com/sol-did/
did:sol on diddirectory.com: https://diddirectory.com/sol
did:sol on...

Exploring the SOL-ar System (did:sol, Part 1)

did:sol is the Solana DID method. Solana is an application-centric blockchain praised for its high throughput and composability. Today on the show we talk with Martin Riedel and Daniel Kelleher, co-editors and implementers of the did:sol specification.

References:
Civic: https://www.civic.com/ (x: @civickey)
Cryptid: https://github.com/identity-com/cryptid
DID Directory: https://diddirectory.com/
did:sol spec: https://g.identity.com/sol-did/
did:sol on diddirectory.com: https://diddirectory.com/sol
did:sol on...

We Are Open co-op

AI Literacy or AI Literacies?

Exploring the plural, context-dependent, and socially-negotiated nature of new literacies

Over the past couple of months, we’ve been working on an ‘AI Literacy’ project with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. We’ve already published:

What does AI Literacy look like for young people aged 14–19?
What makes for a good AI Literacy framework?
Core Values for AI Literacy

In this post, we want to explore the tension we’ve felt between referring to ‘AI Literacy’ in the singular, versus referring to a plurality of ‘AI Literacies’. Ultimately, although our original brief used the singular form (as do many of our peers) we have decided to take the latter, plural, approach — for reasons we will explain below.

One very practical reason to emphasise ‘AI Literacies’ is that it is difficult to talk about ‘delivering’ a literacy. “Literacy” always begs the question of context: What does literacy mean to this particular person in this particular setting at this particular moment? What it means to be ‘AI literate’ is going to look very different to someone working in a corporate office job, compared to a teenager using AI for a creative project. Additionally, there are multiple literate behaviours when we think about AI — for example, understanding the socio-economic reality of the AI landscape versus knowing how to prompt an LLM to get the kind of information or answer you are looking for.

AI Literacies are therefore both plural and context-specific. They are also socially-negotiated. Literate behaviours depend on the community with which an individual is interacting. This becomes evident through a few examples.

Image CC BY-ND Visual Thinkery for WAO

If you are a parent of teenagers, you will have experienced a time when they respond in a way which makes sense to them and their friends, but not to you. You are likely to have to ask them what they mean or use a resource like the Urban Dictionary. Other behaviours such as using a particular emoji might be hilarious for reasons you cannot quite comprehend.

These “rhetorics of communication” are an important part of literacy practices, especially in the digital realm. They constitute ways of interacting with other people within a techno-social system which itself privileges and foregrounds certain kinds of behaviours, while either explicitly or implicitly discouraging others. For example, contemporary chat apps allow you to see not only that a message has been delivered, but whether it has been read. The act of not reading a particular message may be seen in multiple lights: Is the person ignoring me? Are they mad at me? Are they offline in the forest?

Any time we are communicating in ways which are mediated by technologies, part of literate behaviour involves understanding the “affordances” that the technology provides as well as understanding how that technology might be shaping our behaviour.

If you were, for example, quickly texting your teenager while at work, you might send your text and then open a workplace chat window which looks and feels very much like the social one which you have just been using. However, because the context is different — as well as perhaps both the demographic makeup and number of people in the chat — you act differently. Your literate behaviours are thus socially-negotiated, meaning that you vary your behaviour in different situations.

As we start to understand AI Literacies, we need to think about the most common way in which people experience generative AI — by prompting a Large Language Model (LLM) through a chat window. The chat window is a familiar technology, but the fact that there isn’t a human on the other side of it is not. Part of AI Literacies therefore involves exhibiting and modifying our behaviours based on our knowledge and experience of factors that surround this particular chat window.

Image CC BY-ND Visual Thinkery for WAO

With personal or workplace chats, what is outside of the frame informs literate behaviours inside the frame. Similarly, when we are interacting with AI, the more we know about what is outside the frame, the more we can develop appropriate literate behaviours inside the frame. Again, these literate behaviours are plural: will others be able to tell that you are using the outputs of an LLM? (will they mind?) They are based on context: should you trust the company behind the technology you are using? And they are socially-negotiated: are there environmental concerns of which you should be aware?

Angela Gunder’s Dimensions of AI Literacies provides a helpful way to think about these issues. Building on my work on the Essential Elements of Digital Literacies, Gunder’s framework sets out a series of overlapping dimensions that shape how people interact with AI. This approach supports the idea that AI Literacies are not a fixed set of skills, but a collection of practices negotiated within communities and shaped by context.

AI Literacies, like Critical Media Literacies, Digital Literacies, Information Literacies, Data Literacies, and a whole host of “new” literacies, should be considered to be fundamentally plural. What counts as “literate behaviours” are socially-negotiated based on context. Words and phrases, however, are important to describe what we mean. And that is why we will be referring to AI Literacies in the project we’re doing with the RIC for the BBC.

Coming soon

We are working on a public version of our landscape setting and framework for AI Literacies. We’ll be sharing that soon. In the meantime, follow our contributions to this space through https://ailiteracy.fyi/ and get in touch if you have projects and programmes that can benefit from our experience and expertise in education and technology.


Next Level Supply Chain Podcast with GS1

Why Your Amazon Ads Aren't Working (And What the Best Brands Do Differently)


What if you could see what your customers are searching before they even hit 'buy'?

In today's hyper-competitive marketplace, that's exactly what the best brands are doing, and it's giving them a serious edge.

In this episode, Chris Barnes, General Manager of Retail & Alternative Channels at Jungle Scout, joins hosts Reid Jackson and Liz Sertl to break down how brands are using real-time search, product, and shopper data to respond faster to market signals on Amazon and beyond.

Chris shares how companies are identifying market opportunities, protecting category share, and improving performance across pricing, inventory, and advertising, all by knowing what to look for in the data.

In this episode, you'll learn:

How to use search and shopper behavior to guide product strategy

Why category management looks different in the age of marketplaces

The data brands overlook and how it impacts sales performance

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:07) Chris Barnes on his career journey

(04:38) Transforming strategy with data and intelligence

(05:55) Category management in-store vs. online

(09:23) AI's impact on search and consumer behavior

(13:04) Dude Wipes' growth and success

(15:35) Leveraging data to understand consumer needs

(19:17) Power of data analytics in product development

(20:48) Top strategies for maximizing growth

(27:37) The future of agencies and AI in business


Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn


Connect with the guest:

Chris Barnes on LinkedIn
Check out Jungle Scout

Monday, 09. June 2025

Velocity Network

Reinventing How Career Records Are Shared

The post Reinventing How Career Records Are Shared appeared first on Velocity.

Friday, 06. June 2025

FIDO Alliance

MobileIDWorld: Mastercard Launches Passkey Authentication in Europe, Achieves 50% E-commerce Adoption


Mastercard has launched advanced payment passkeys across Europe as part of its initiative to enhance online transaction security and replace traditional passwords. The company reports that its tokenization and passkey implementation has achieved nearly 50 percent adoption in European e-commerce transactions, building on its successful passkey deployment in Latin America earlier this year.

The payment technology company’s new security measures arrive at a critical time, as data shows that one in four business owners in Europe face targeting by scammers. A quarter of these businesses express concern about their ability to recover from potential cyber attacks. The expansion follows the broader industry trend toward passwordless authentication, with the FIDO Alliance reporting significant growth in enterprise passkey adoption.


PYMNTS: OneSpan Acquires Passwordless Authentication Specialist Nok Nok Labs


OneSpan announced Thursday (June 5) its acquisition of Nok Nok Labs, a provider of FIDO passwordless software authentication solutions.

OneSpan said joining forces with Nok Nok enables the company to provide customers worldwide with a comprehensive authentication portfolio, available on-premises or in the cloud. This combined offering now includes support for OTP, FIDO, software, and hardware solutions, such as Digipass, FIDO2 protocols, and Cronto solutions for transaction signing.

Victor Limongelli, CEO at OneSpan, described the acquisition as a “bold step toward providing customers with maximum choice in authentication.” He added that the company is evolving its entire authentication platform to include FIDO standards, believing that passwordless authentication is an important part of the future. With Nok Nok’s technology and FIDO expertise, OneSpan aims to offer a comprehensive and versatile customer authentication solution.

Phillip Dunkelberger, president and CEO at Nok Nok, noted that joining OneSpan allows them to bring their vision, rooted in open standards like FIDO, to a broader audience via OneSpan’s global reach. Andrew Shikiar, executive director and CEO of the FIDO Alliance, said Nok Nok has been a “trailblazer” in the FIDO ecosystem.


PCMag: This Password Manager Now Lets You Create an Account Without a Password


Dashlane lets you open an account with a FIDO2-spec USB security key as your authentication.

One of the better-known password managers is now inviting people to try it without having to create yet another password. Instead, Dashlane is now inviting people to try opening a new account secured only by a USB security key compliant with the “FIDO2” authentication standard; FIDO being short for Fast Identity Online.

Emphasize “try.” The company’s support page for this “early access” program notes that it supports only Google Chrome and Microsoft Edge, not Dashlane’s mobile apps. For now, it doesn’t let you create an account secured only by a passkey, the form of FIDO2 security more people use. 

Wednesday, 04. June 2025

Blockchain Commons

Blockchain Commons Signs No Phone Home Initiative


At Blockchain Commons, we design open infrastructure that prioritizes privacy, autonomy, and human dignity. That’s why I support and personally signed the No Phone Home Initiative. It is not just a position, it’s a call to preserve a foundational principle of decentralized identity: Credentials must be verifiable without enabling surveillance!

Why “No Phone Home” Matters

The problem is simple: when verifying a digital credential such as a mobile driver’s license or a diploma, many systems require contacting the original issuer. This creates a digital trail of who is verifying what, when, and why. That trail can be used to profile and surveil, allowing issuers to track credential holders without their knowledge.

Manu Sporny, in an email to the W3C Credentials Community Group (June 3, 2025), clarified the stakes:

“Retrieving a status list could be misinterpreted as ‘phoning home’ … but it’s not anywhere near the same level of “phoning home” that contacting the issuer and telling them ‘I’ve got Steve here at booze-hut.com, is he over 21?’ achieves.”

But the threat of direct identity leak is just the tip of the iceberg. That’s because credential presentation isn’t a one-off event. It’s recurrent. Even when identifiers or interactions are pseudonymous, repeated verifications leak sensitive metadata, allowing issuers or third parties to correlate time, location, and use-pattern metadata into behavioral profiles.

The Decentralized Identity Foundation talks about some of this in “Nearly 100 Experts Are Saying ‘No Phone Home’”:

“The risks multiply when applied across domains. Federated protocols developed for use within organizations become surveillance systems when used between different sectors or jurisdictions. Phone home capabilities that seem innocuous within a single domain can become tools for tracking and control when applied broadly without aggressive oversight and fine-tuning.”

Problematically, this is how these systems are designed to work! As Kim Hamilton Duffy says in the first of a series of articles on mDL privacy that she’s currently working on:

“This isn’t an unintended consequence—it’s an architectural feature that can trivially enable persistent record-keeping of when and where you use your credentials, creating patterns that can be analyzed long after the original transaction by unknown third parties.”

This also reveals yet another danger: how “normal” it feels to let credential issuers remain silently in the loop.

Revocation Without Surveillance

Some argue that checking for the revocation of a credential requires phoning home. But that’s a false dilemma. In the same W3C thread, Sporny noted:

“There are a few ways to retrieve a status list without directly contacting the issuer that use commonly deployed web technology.”

Technical mitigations discussed and developed by the community include:

Large, pseudonymous status lists (e.g., Bitstring Status List)
Use of CDNs or file mirrors to avoid direct issuer contact
Oblivious HTTP (OHTTP) for unlinkable status fetching
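To make the first mitigation concrete, here is a minimal sketch (Python, standard library only) in the style of the W3C Bitstring Status List: the verifier downloads the whole compressed bitstring once (ideally via a CDN, mirror, or OHTTP relay, so the issuer never learns which credential is being checked) and tests a single bit locally. The URL and index are hypothetical, and for brevity the endpoint is assumed to serve the encodedList value directly rather than a full status list credential.

```python
# Check one credential's revocation bit without telling the issuer which one.
import base64
import gzip
import urllib.request

def fetch_status_bits(url: str) -> bytes:
    # encodedList is a base64url-encoded, GZIP-compressed bitstring.
    with urllib.request.urlopen(url) as resp:
        encoded = resp.read().decode().strip()
    if encoded.startswith("u"):      # multibase prefix for base64url-no-pad
        encoded = encoded[1:]
    padded = encoded + "=" * (-len(encoded) % 4)
    return gzip.decompress(base64.urlsafe_b64decode(padded))

def is_revoked(bits: bytes, index: int) -> bool:
    # Bits run left to right: bit 0 is the most significant bit of byte 0.
    byte_pos, offset = divmod(index, 8)
    return bool((bits[byte_pos] >> (7 - offset)) & 1)

bits = fetch_status_bits("https://cdn.example.org/status/3")  # hypothetical mirror
print("revoked?", is_revoked(bits, 94567))  # index from the credential's status entry
```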

There are still issues with these potential mitigations. For example, Kyle Den Hartog (Pryvit NZ) raised concerns about status list misuse:

“An issuer creates a bitstring list of sufficiently large size, but only includes 1 active credential index per bitstring list. All other indexes are spoofed. The rest of the list would look active to the holder/verifiers but could still be a tracking mechanism by the issuer.”

But, these edge-case attacks reinforce why the core architecture must be surveillance-resistant by default.

Privacy isn’t the only issue with revocation checking: it’s also a structural risk. If we tie credential validity to live status checks, we quietly shift power from holders to issuers. It becomes a form of dependency injection, one that contradicts the goal of self-sovereign identity.

Not Just Technical: It’s Ethical

This isn’t just a technical issue. It goes to the ethical heart of self-sovereign identity design.

Daniel Hardman offered this framing in a related thread on edge cases:

“Verifiable credentials verify without issuer coordination; that is what the ‘verifiable’ in ‘verifiable credential’ means.”

Joe Andrieu argued in the same May 2025 thread:

“The identity system that wins is going to be the one we can use in any circumstance by anyone. … It’s my wallet. I expect it to serve me, as a user agent. I do not accept that it might also serve the state as a surveillance platform.”

At Blockchain Commons, we agree. The ability to verify credentials offline, without depending on a central service, is essential for resilience and civil liberties.

A report prepared for the American Civil Liberties Union (ACLU) put it clearly by requiring “No Issuer ability to track via phone home mechanism” and saying:

“One way a digital ID can differ from physical ID is that it can enable the issuers of the digital ID to track where, when, and to whom one shows their ID. This tracking can reveal very private and sensitive information about the digital ID holder — namely, when and where, online or off, they present their ID. Standards and technologies should be designed so that the issuer (or any of its agents or contractors) cannot engage in any of these forms of tracking.”

Emergencies Are Not an Excuse

Some use cases such as disaster response or first responder tracking have prompted discussions around consent-based “check-ins.” These are complex and worthy of consideration. But the VC Data Model 2.0 spec is clear:

“Credential status specifications MUST NOT enable tracking of individuals, such as an issuer being notified (either directly or indirectly) when a verifier is interested in a specific holder or subject. Unacceptable approaches include “phoning home …”

But compliance doesn’t equal safety. Even digital credentials that conform to existing standards—such as ISO 18013-5—can still include implementation choices that enable surveillance. Privacy must be baked into system design, not retrofitted through policy disclaimers.

As Alexis Hancock of the Electronic Frontier Foundation (EFF) warned (via State Scoop):

“We have to act now because governments are enthusiastic about digital ID, but if we don’t pin down these basic principles now, it’s going to be a problem later.”

We can build systems that are opt-in, purpose-specific, and out-of-band, without compromising the privacy baseline for everyone else.

Our Commitment

At Blockchain Commons, we believe decentralized identity should empower the individual, not quietly report on them. We are actively designing open standards such as Gordian Envelope and dCBOR to support truly private, verifiable, interoperable credentials.

We support “No Phone Home” because surveillance should never be the default. And we invite others to join us in making sure the future of identity remains decentralized, private, and just.

Tuesday, 03. June 2025

Digital Identity NZ

Introducing Our New Executive Director | May Newsletter


Kia ora,

Welcome to the May edition of the Digital Identity NZ newsletter. This month, we’re excited to introduce our new Executive Director, share insights from Techweek25’s Foundations for Tomorrow Event, and highlight the launch of Air New Zealand’s new passport capture service.

Andy Higgs Appointed as New Executive Director

We’re pleased to announce that Andy Higgs has joined Digital Identity NZ as our new Executive Director.

Andy brings over 20 years of experience across digital identity, AI strategy, and innovation in both public and private sectors. His background includes leadership roles at Futureverse and Centrality, where he focused on self-sovereign identity solutions and ecosystem partnerships.

His experience extends to policy development with the Department of Internal Affairs and Ministry of Business, Innovation and Employment, including work on the Digital Identity Services Trust Framework and the Consumer Data Right.

Andy’s collaborative approach will be valuable as DINZ continues to work alongside members to build a trusted digital identity ecosystem for everyone in Aotearoa.

Digital Public Infrastructure: Foundations for Tomorrow Event

During Techweek25, government, industry, and public sector leaders gathered at Parliament’s Legislative Chamber to discuss how digital public infrastructure (DPI) could transform service delivery across New Zealand.

Key takeaways for the digital identity community:

Ministerial vision: Hon Judith Collins KC announced plans for an all-of-government app allowing citizens to store digital credentials, receive notifications, and access services in one secure digital space.
  Economic benefits: Pete Herlihy from AWS highlighted that digital identity is one of four core components of DPI that can deliver significant economic growth—between 1-2% of GDP in developed nations.
  Human-centered approach: Deloitte’s Adithi Pandit emphasised how unified digital infrastructure could enable more joined-up social services and reduce fragmentation.
  Implementation plans: Public Service Commissioner Sir Brian Roche indicated a move toward greater centralisation with prescribed platforms and standards to make digital infrastructure a low-cost utility.
  Industry perspective: Xero founder Rod Drury called for greater urgency in digital identity implementation, suggesting New Zealand could leverage its small size to move quickly and “solve digital identity by Christmas.”

Read the full event recap here.

Member News

Our DINZ community continues to grow! We’re delighted to welcome POLipay as a member and look forward to featuring and engaging them in our ecosystem.

See all organisation members here.

Stay Connected

Thank you for being part of our community. We look forward to sharing more updates next month. 

Ngā mihi nui,

The team at Digital Identity NZ

Read full news here: Introducing Our New Executive Director | May Newsletter

SUBSCRIBE FOR MORE

The post Introducing Our New Executive Director | May Newsletter appeared first on Digital Identity New Zealand.


Blockchain Commons

Musings of a Trust Architect: Fair Witnessing in a Decentralized World

ABSTRACT: “Fair Witnessing” is a new approach for asserting and interpreting digital claims in a way that mirrors real-world human trust: through personal observation, contextual disclosure, and progressive validation. It can be implemented with the decentralized architecture of Gordian Envelopes to allow individuals to make verifiable statements while balancing privacy, accountability, and interpretability. At its core, fair witnessing is not about declaring truth, it’s about showing your work.

In the early days of decentralized identity, we referred to what we were working on as “Verifiable Claims.” The idea was simple: let people make cryptographically signed statements and allow others to verify them. But something unexpected happened. People assumed these claims would settle arguments or stop disinformation. They saw the term “verifiable” and equated it with “truth.”

The reality was more modest: we could verify the source of a claim but not its accuracy. We could assert that a claim came from a specific person or organization (or even a camera or other object) but not whether that claim was unbiased, well-observed, or contextually complete.

This misunderstanding revealed a deeper problem: how do we represent what someone actually saw and how they saw it, in a way that honors the complexity of human trust?

A Heinleinian Inspiration

In Stranger in a Strange Land, Robert Heinlein described a special profession: the Fair Witness. A Fair Witness would be trained to observe carefully, report precisely, make no assumptions, and avoid bias. If asked what color a house was, a Fair Witness would respond, “It appears to be white on this side.”

It is this spirit we want to capture to fulfill the promise of the original verifiable claims.

A Fair Witness in our digital era is someone who not only asserts a claim but also shares the conditions under which it was made, including context, methodology, limitations, and bias:

What were the physical conditions of the observation?
Was the observer physically present?
Did they act independently?
What interests or leanings might have shaped their perception?
How did they minimize those biases?

These are not just nice-to-haves. They are necessary components of evaluating a claim’s credibility.

Beyond Binary Trust

Fair witnessing challenges binary notions of trust. Traditional systems ask a “yes” or “no” question: do you trust this certificate? This issuer?

But trust is rarely binary like this in the real world. It is layered, contextual, and progressive. The claim made by a pseudonymous environmental scientist might start out with low trust but could grow in credibility as:

They reveal their professional history.
Others endorse their work.
They disclose how they mitigated their potential biases.

Trust builds over time, not in a single transaction. That’s progressive trust.

Trust as a Nested Statement

To marry a fair witness claim to the notion of progressive trust requires the nesting of information. As shown in the example of the environmental scientist, the witnessing of an observation gains weight as the context is added: turning the scientist’s claims into a fair-witness statement required collecting together information about who the scientist is, what their training is, and what their peers think of them.

But as noted, progressive trust isn’t something that occurs in a single transaction: it’s revealed over time. We don’t want it to all be revealed at once, because that could result in information overload for someone consulting a claim and could have privacy implications for the witness.

A progressive trust model of fair witnessing requires that you show what you must and that you withhold what’s not needed—until it is.

Privacy and Accountability, Together

This model strikes a crucial balance. On one hand, it empowers individuals (fair witnesses) to speak from experience without needing permission from a centralized authority. On the other hand, it allows others to verify the integrity of the claim without requiring total exposure.

There are numerous use cases:

You can prove you were trained without revealing your name.
You can demonstrate personal observation without revealing your exact location.
You can commit to a fact today and prove you knew it later.

Fair Witnessing with Gordian Envelope

The demands of Fair Witnessing go beyond the capabilities of traditional verifiable credentials (VCs), primarily because VCs can’t remove signed information while maintaining its validation—and the ability to do so is critically important if you want to nest information for revelation over time.

Fortunately, a technology already exists that provides this precise capability: Blockchain Commons’ Gordian Envelope, which allows for: the organized storage of information; the validation of that information through signatures; the elision of that information; the continued validation of the information after elision; and the provable restoration of that information.

Any subject, predicate, or object in Gordian Envelope can itself be a claim, optionally encrypted or elided. This enables a deeply contextual, inspectable form of expression.

For example:

Alice could make a fair-witness observation, which would be an envelope.
Information on the context of Alice’s assertion can be a sub-envelope.
A credential for fair witness training can be a sub-envelope.
Endorsements of Alice’s work as a fair witness can be sub-envelopes.
Endorsements, credentials, and even the entire envelope can be signed by the appropriate parties.
Any envelope or sub-envelope can be elided, without affecting these signatures and without impacting the ability to provably restore the data later.
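As a minimal sketch of why elision preserves verifiability, here is a toy Python model. The API is invented for illustration (it is not the real Gordian Envelope implementation or wire format); the point is that a signature covers a Merkle-style digest tree, so an elided branch can stand in as its digest:

```python
# Toy model of elision-with-verifiability, in the spirit of Gordian Envelope.
# NOT the real bc-envelope API or wire format. The trick: the root digest is
# computed over a tree of digests, so replacing a subtree with its digest
# leaves the root (and any signature over it) unchanged.
import hashlib
import json

def digest(obj) -> str:
    """Deterministic digest of a JSON-serializable structure."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class Envelope:
    def __init__(self, subject, assertions=None, elided_digest=None):
        self.subject = subject              # a claim, or a nested Envelope
        self.assertions = assertions or []  # list of (predicate, Envelope)
        self._elided = elided_digest        # set only on an elided stub

    def digest(self) -> str:
        if self._elided:                    # a stub stands in for its subtree
            return self._elided
        subj = (self.subject.digest() if isinstance(self.subject, Envelope)
                else digest(self.subject))
        parts = [subj] + sorted(digest((p, e.digest())) for p, e in self.assertions)
        return digest(parts)

    def elide(self, predicate):
        """Return a copy with one assertion's object reduced to its digest."""
        new = [(p, Envelope(None, elided_digest=e.digest()) if p == predicate else e)
               for p, e in self.assertions]
        return Envelope(self.subject, new)

# Alice's fair-witness observation, with context as sub-envelopes:
training = Envelope("Fair-witness training credential")
endorsement = Envelope("Endorsement of Alice's prior observations")
observation = Envelope("The river gauge read 4.2 m at 08:00",
                       [("training", training), ("endorsedBy", endorsement)])

root = observation.digest()                 # a real system signs this digest
redacted = observation.elide("endorsedBy")  # withhold the endorsement for now

# Same root digest, so a signature over `root` still verifies, and the
# endorsement can later be provably restored (its digest must match).
assert redacted.digest() == root
```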

It’s progressive trust appropriate for use with fair witnessing in an existing form!

Toward a New Epistemology

Being a Fair Witness isn’t about declaring truth. It’s about saying what’s known, with context, so others can assess what’s truth. Truth, in this model, is interpreted, not imposed. A verifier—or a jury—decides if a claim is credible, not because a central authority says so, but because the Fair Witness has provided information with sufficient context and endorsements.

In other words, fair witnessing is not about what is true, but about how we responsibly say what we believe to be true—and what others can do with that.

This is epistemology (the theory of knowledge) that’s structured as a graph. It’s cryptographically sound, privacy-respecting, and human-auditable. It reflects real-world trust: messy, contextual, and layered. By modeling that complexity rather than flattening it, we gain both rigor and realism.

Conclusion

In a world of machine-generated misinformation, ideological polarization, and institutional distrust, we must return to the foundations: observation, context, and human responsibility.

Fair witnessing offers a new path forward—one that is verifiable, privacy-respecting, and grounded in how humans actually trust.

Learn more: Progressive Trust | Gordian Envelope

Monday, 02. June 2025

DIF Blog

Nearly 100 Experts Are Saying "No Phone Home"

After decades of cautiously watching from the sidelines, governments around the world have started investing in, rolling out, and regulating digital identity systems on aggressive timelines. These foundational changes to government infrastructure and the economy are happening largely outside public awareness, despite their generational consequences for privacy.

Digital identity systems implemented by governments today will shape privacy for decades. Whatever ecosystems and technical architectures are established in the coming years could ossify quickly, and it would take enormous political will to make changes at such a foundational level if society develops buyer's remorse once the ripple effects become clear.

That's why nearly 100 experts across technology, policy, and civil liberties have united around one principle: digital identity systems must be built without latent tracking capabilities that could enable ubiquitous surveillance. Thus, the nophonehome.com petition.

Who's Behind This

Civil society groups working on legal advocacy and industry oversight (ACLU, EFF), cybersecurity experts (including Bruce Schneier), privacy-by-design software companies of various sizes (Brave, many DIF members), and experts from university faculties (Brown, Columbia, Imperial College London) all signed on. The list includes authors of collaborative open standards, chief executives, state privacy officers, and other public servants. This is not a coalition of "activists" so much as a broad coalition of experts and policy-watchers sounding an alarm about consequential decisions passing largely unnoticed by the average citizen and end-user.

The breadth of this coalition reflects widespread concern about the technical and policy implications of embedded tracking capabilities.

What "Phone Home" Means

As a general rule, "phone-home" is a shorthand for architectural principles of tracking enablement (just as "no phone-home" refers to tracking mitigation, broadly speaking). When a verifier of credentials interacts directly with the credential's issuer—even if just to check validity or revocation status—they are "phoning" the credential's "home." This opens the subject and/or the holder of that credential to privacy risks, no matter how well the request is anonymized or handled. These API connections create data that can be combined, correlated, and abused, especially when verifiers share information or when issuers abuse their role.
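To make this concrete, here is a hypothetical sketch in Python of a phone-home validity check from the verifier's side. The endpoint and field names are invented for illustration; the point is the metadata the issuer necessarily receives, however the request is framed:

```python
# Hypothetical sketch of a "phone home" verification flow. The endpoint and
# field names are invented; no real wallet stack is being quoted here.
# Assumes the third-party `requests` package is installed.
import requests

def verify_with_phone_home(credential: dict) -> bool:
    # The verifier asks the issuer whether this specific credential is valid.
    resp = requests.post(
        credential["issuer_status_endpoint"],      # issuer-controlled URL
        json={"credential_id": credential["id"]},  # maps back to the holder
        timeout=5,
    )
    return resp.json().get("valid", False)

# Even if the payload were anonymized, the issuer's server log still gains:
#   - WHO asked (verifier IP / TLS connection),
#   - ABOUT WHOM (the credential ID identifies the holder),
#   - WHEN (the timestamp),
# which is enough to reconstruct where and when the holder presented ID.
```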

The risks multiply when applied across domains. Federated protocols developed for use within organizations become surveillance systems when used between different sectors or jurisdictions. Phone home capabilities that seem innocuous within a single domain can become tools for tracking and control when applied broadly without aggressive oversight and fine-tuning. Over time, little mismatches and slippages in how these protocols work get exploited and stretched, amplifying glitches.

In the worst-case scenario, some systems enable real-time revocation decisions, giving issuers—potentially governments—immediate control over citizens' ability to access services, travel, or participate in society. A natural tendency to "over-request" foundational documents in situations where such strong identification is unjustified is amplified by familiarity, lack of friction, and other UX catnip; all the SHOULDs in the world won't stop verifiers from doing it. And verifiers over-asking without also providing a fallback or "slow lane" can make a sudden or temporary unavailability of foundational credentials painful or even exclusionary. The side-effects and externalities pile up dangerously in this industry!

Technologists see these kinds of capabilities (phone-home of any kind, remote revocation, low-friction foundational identity requests) like loaded guns in Act 1 of a Chekhov play: "If this capability exists within a digital identity system, even inactively, it will eventually be misused."

The Scale and Timing Problem

Most foundational identity systems being implemented for national-scale deployment include system-wide phone home tracking capabilities, either actively or latently. Many policymakers involved in these rollouts are not even aware of the tracking potential built into the standards they are adopting.

Four factors make this moment critical:

Scale of deployment: These systems will serve billions of users across developed nations, effectively replacing physical credentials.
Precedent-setting effects: When one jurisdiction adopts tracking-enabled systems, it influences global practices and standards.
Infrastructure persistence: Technical decisions made today will persist for decades, becoming prohibitively expensive to change once embedded.
Mission creep inevitability: Capabilities developed for legitimate purposes like fraud prevention naturally accrue new private-sector and/or public-sector use-cases over time due to natural market pressures. Today's private-sector usage is tomorrow's public-sector secondary data market.

The Fallacy of "Privacy by Policy"

The fundamental problem with latent tracking capabilities is that policies change, but technical architecture persists. If a system has surveillance capability—even if unused—it will eventually be activated. Emergencies, changing administrations, or shifting political priorities can quickly justify "pressing the button" to enable widespread tracking.

The solution is simple: they cannot press a button they do not have.

Consider AAMVA's recent decision to prohibit the "server retrieval" capability throughout the U.S.—a positive step that we welcome. However, most low-level implementations (e.g. core libraries) will likely implement the entire specification and leave it to the last-mile implementers to honor (or not) this policy. As an incubator of new specifications and prototypes, DIF feels strongly that jurisdiction-by-jurisdiction policies are just "turning off" what the specification still instructs software to implement, leaving it for later policies to turn back on at the flick of a switch. We believe the underlying ISO specification needs to remove "server retrieval" completely, lest every authority in the U.S. remain one emergency away from activating broad, identity-based surveillance of all citizens.

Privacy-Preserving Alternatives Exist

The choice between security and privacy is false. Offline-first verification operates without server communication—the credential contains cryptographic proofs that can be validated independently. The ISO 18013-5 standard itself includes "device retrieval" mode, a privacy-preserving alternative that functions entirely offline.
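As a minimal sketch (assuming an Ed25519-signed credential and an issuer public key cached out of band; real mdoc or VC verification adds canonicalization, validity windows, and trust-chain checks), offline verification needs no network at all:

```python
# Minimal sketch of offline-first verification: the verifier needs only the
# credential bytes, its signature, and a locally cached issuer public key.
# No network call happens, so there is nothing for the issuer to observe.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_offline(credential_bytes: bytes, signature: bytes,
                   cached_issuer_key: bytes) -> bool:
    """cached_issuer_key: 32-byte raw Ed25519 key distributed out of band."""
    issuer_key = Ed25519PublicKey.from_public_bytes(cached_issuer_key)
    try:
        issuer_key.verify(signature, credential_bytes)  # raises if invalid
        return True
    except InvalidSignature:
        return False
```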

Even credential revocation can be implemented without phone home capabilities. Privacy-preserving revocation systems are in production today, proving that security and privacy can coexist.
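One way to do this, in the spirit of the W3C Bitstring Status List, is a shared status resource: every verifier fetches the same compressed bitstring and checks a single bit locally, so the issuer cannot tell which holder was checked. A simplified sketch, with the encoding details assumed:

```python
# Sketch of herd-privacy revocation in the spirit of the W3C Bitstring
# Status List. The issuer publishes ONE bitstring covering all credentials;
# each credential carries an index into it. Because every verifier fetches
# the same (cacheable) resource, the fetch reveals nothing about which
# holder is being checked. Encoding details below are simplified/assumed.
import base64
import gzip

def bit_is_set(encoded_list: str, index: int) -> bool:
    """encoded_list: base64url, gzip-compressed bitstring from the issuer."""
    padding = "=" * (-len(encoded_list) % 4)
    raw = gzip.decompress(base64.urlsafe_b64decode(encoded_list + padding))
    byte, offset = divmod(index, 8)
    return bool(raw[byte] & (0x80 >> offset))  # MSB-first bit order (assumed)

def is_revoked(credential: dict, cached_status_list: str) -> bool:
    # The index comes from the credential itself; the check is purely local.
    return bit_is_set(cached_status_list, credential["statusListIndex"])
```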

The technology exists. The standards exist. What has been missing is commitment to prioritize privacy over the operational convenience of centralized tracking.

Moving Forward

Awareness is growing. We welcome developments like AAMVA's prohibition of server retrieval, but more work is needed across the broader digital identity ecosystem to eliminate latent surveillance capabilities entirely.

The Decentralized Identity Foundation develops standards that prioritize privacy, supports implementations that respect user autonomy, and advocates for technical architectures that prevent tracking and add friction to data misuse. Our membership includes many technologists and vendors designing tracking-free alternatives for these and other use cases.

We encourage you to read the full No Phone Home statement at https://nophonehome.com. Whether you are building, deploying, or using these systems, your voice matters at this critical juncture.

The question is not whether we can build privacy-preserving digital identity—it is whether we will choose to do so. Let's build it right.

The Decentralized Identity Foundation (DIF) is an engineering-driven organization focused on developing the foundational elements necessary to establish an open ecosystem for decentralized identity and ensure interoperability between all participants. Learn more at identity.foundation.

Tuesday, 27. May 2025

FIDO Alliance

Biometric Update: 10 million passkeys registered for Mercari market app amid phishing crisis

Mercari, the Japanese e-commerce company behind the Mercari marketplace, has surpassed 10 million registered users of passkeys for authentication.


Biometric Update: Yubico simplifies passwordless

Yubico, a provider of hardware authentication security keys, has announced the expanded availability of YubiKey as a Service to all countries in the European Union.

This builds upon the company’s existing reach in markets such as the UK, U.S., India, Japan, Singapore, Australia and Canada. In addition, Yubico has expanded the availability of YubiEnterprise Delivery across 117 new locations around the world.

This now brings the total to 199 locations (175 countries and 24 territories) and more than doubles existing delivery coverage of YubiKeys to both office and remote users in a fast and turnkey way. “Enterprises today are facing evolving cyber threats like AI-driven phishing attacks,” said Jeff Wallace, senior vice president of product at Yubico.


PYMNTS: Entersekt and Stanchion Team to Enhance Payment Integration

Authentication software company Entersekt has launched a partnership with South Africa-based PayTech solution provider Stanchion.

The partnership is aimed at “enhancing payment integration capabilities and delivering cutting-edge solutions to financial institutions worldwide,” the companies said in a Wednesday (May 21) news release.

The collaboration combines Stanchion’s tools for “modernizing, transforming, and accelerating innovation within payment systems” with Entersekt’s 3-D Secure payment authentication solution, which provides transaction authentication across all three domains: the merchant acquirer domain, the card issuer domain and the interoperability domain.


Info Security Buzz: From Passwords to Passkeys: The Future of Digital Identity Protection

Passwords have been used as the first line of defense in protecting one’s digital identity, but they are fast becoming obsolete due to rampant identity theft. With security breaches multiplying across platforms, passwords alone offer little protection, which calls for an easier and stronger way to prevent account theft.

It is equally important to recognize the rise of passkeys as they help a great deal in bolstering digital identity protection.

Thursday, 22. May 2025

We Are Open co-op

Core Values for AI Literacy

Embedding Ethics for Audiences aged 14–19

WAO is currently working with the Responsible Innovation Centre for Public Media Futures (RIC), which is hosted by the BBC. The project, which you can read about in our kick-off post, is focused on research and analysis to help the BBC create policies and content to help improve the AI Literacy skills of young people aged 14–19.

We’re now at the stage where we’ve reviewed academic articles and resources, scrutinised frameworks, and reviewed input from over 40 experts in the field. They are thanked in the acknowledgements section at the end of this post.

One of the things that has come up time and again is the need for an ethical basis for this kind of work. As a result, in this post we want to share the core values that inform the development of our (upcoming) gap analysis, framework, and recommendations.

Public Service Media Values

Public Service Media (PSM) organisations such as the BBC have a mission to “inform, educate, and entertain” the public. The Public Media Alliance lists seven PSM values underpinning organisations’ work as being:

Accountability: to the public who fund it and hold power to account
Accessibility: to the breadth of a national population across multiple platforms
Impartiality: in news and quality journalism and content that informs, educates, and entertains
Independence: both in terms of ownership and editorial values
Pluralism: PSM should exist as part of a diverse media landscape
Reliability: especially during crises and emergencies and tackling disinformation
Universalism: in their availability and representation of diversity

These values are helpful to frame core values for the development of AI Literacy in young people aged 14–19.

AI Literacy Core Values

Using the PSM values as a starting point, along with our input from experts and our desk research, we have identified the following core values. These are also summarised in the graphic at the top of this post.

1. Human Agency and Empowerment

AI Literacy should empower young people to make informed, independent choices about how, when, and whether to use AI. This means helping develop not just technical ability, but also confidence, curiosity, and a sense of agency in shaping technology, rather than being shaped by it (UNESCO, 2024a; Opened Culture, n.d.). Learners should be encouraged to question, critique, adapt, and even resist AI systems, supporting both individual and collective agency.

2. Equity, Diversity, and Inclusion

All young people, regardless of background, ability, or circumstance should have meaningful access to AI Literacy education (Digital Promise, 2024; Good Things Foundation, 2024). Ensuring this in practice means addressing the digital divide, designing for accessibility, and valuing diverse perspectives and experiences. Resources and opportunities must be distributed fairly, with particular attention to those who are digitally disadvantaged or underrepresented.

3. Critical Thinking and Responsible Use

Young people should be equipped to think critically about AI, which means evaluating outputs, questioning claims, and understanding both the opportunities and risks presented by AI systems. In addition, young people should be encouraged to understand the importance of responsible use, including understanding bias, misinformation, and the ethical implications of AI in society (European Commission, 2022; Ng et al., 2021).

4. Upholding Human Rights and Wellbeing

Using a rights-based approach — including privacy, freedom of expression, and the right to participate fully in society — helps young people understand their rights, navigate issues of consent and data privacy, and recognise the broader impacts of AI on wellbeing, safety, and social justice (OECD, 2022; UNESCO, 2024a).

5. Creativity, Participation, and Lifelong Learning

AI should be presented as a tool for creativity, collaboration, and self-expression, not just as a subject to be learned for its own sake. PSM organisations should value and promote participatory approaches, encouraging young people to contribute to and shape the conversation about AI. This core value also recognises that AI Literacy is a lifelong process, requiring adaptability and a willingness to keep learning as technology evolves (UNESCO, 2024b).

Next Steps

We will be running a roundtable for invited experts and representatives of the BBC in early June to give feedback on the gap analysis and emerging framework. We will share a version of this after acting on their feedback.

If you are working in the area of AI Literacy and have comments on these values, please add them to this post, or get in touch: hello@weareopen.coop

Acknowledgements

The following people have willingly given up their time to provide invaluable input to this project:

Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox​, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager

References

Digital Promise (2024). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218
European Commission (2022). DigComp 2.2, The Digital Competence Framework for Citizens. Luxembourg: Publications Office of the European Union. https://doi.org/10.2760/115376
Good Things Foundation (2024). Developing AI Literacy With People Who Have Low Or No Digital Skills. Available at: https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/ai-literacy
Jia, X., Wang, Y., Lin, L., & Yang, X. (2025). Developing a Holistic AI Literacy Framework for Children. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM. https://doi.org/10.1145/3727986
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2(100041), 100041. https://doi.org/10.1016/j.caeai.2021.100041
OECD (2022). OECD Framework for Classifying AI Systems. Paris: OECD Publishing. https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html
Opened Culture (n.d.). Dimensions of AI Literacies. Available at: https://openedculture.org/projects/dimensions-of-ai-literacies
Open University (2025). A Framework for the Learning and Teaching of Critical AI Literacy Skills. Available at: https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2025/01/OU-Critical-AI-Literacy-framework-2025-external-sharing.pdf
UNESCO (2024a). UNESCO AI Competency Framework for Students. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391105
UNESCO (2024b). UNESCO AI Competency Framework for Teachers. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391104

Wednesday, 21. May 2025

Internet Safety Labs (Me2B)

IEEE’s ConPro ’25: Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels

ISL presented at ConPro 2025’s 9th Workshop on Technology and Consumer Protection. The conference was a perfect opportunity to showcase our presentation on Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels.

Open PDF

The post IEEE’s ConPro ’25: Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels appeared first on Internet Safety Labs.


DIF Blog

Global Digital Collaboration (GDC): Your Questions Answered

Many in the DIF community are asking about the upcoming Global Digital Collaboration conference in Geneva. As the date is quickly arriving, we wanted to give a sneak preview of what's ahead.

Table of Contents:

About the GDC Conference
The Agenda
Learn more and participate

About the GDC Conference

Key Details

When: July 1-2, 2025
Where: Centre International de Conférences Genève (CICG), Switzerland
Cost: Free (registration required)
Register: https://lu.ma/gc25 (registration is available through any co-organizing partner)

What is the GDC?

Global Digital Collaboration is a landmark gathering bringing together 30+ global organizations to advance digital identity, wallets, and credentials - hosted by the Swiss Confederation.

What makes this conference truly unique is that, from the beginning, it's been co-organized by the participating organizations, who have worked with their communities, and with each other, to form an agenda that will help advance the most critical topics in digital identity.

Rather than being driven by a single organization's vision, the GDC represents a collaborative effort where international organizations, standards bodies, open-source foundations, and industry consortia have jointly defined priorities and sessions that address the most pressing challenges in digital trust infrastructure. This multi-stakeholder approach ensures broader perspectives are represented and creates unprecedented opportunities for alignment across traditionally separate communities.

Why Attend?

Unprecedented collaboration: This conference's collaborative nature bridges organizations that rarely coordinate at this scale.
Connect: Connect with peers from government and private sectors to advance standards coordination, cross-border interoperability, and robust digital public infrastructure.
Network with experts: Engage directly with technical leaders, government officials, and industry pioneers shaping the future of digital trust.

Who Is Organizing?

The current list of co-organizers can be seen in the header image, with more to be added later this week. As a brief preview, this includes:

International & Government Organizations

European Commission (DG-CNECT)
International Telecommunication Union (ITU)
United Nations Economic Commission for Europe (UNECE)
World Health Organization (WHO)

Standards Development Organizations & Open Source Foundations

Decentralized Identity Foundation (DIF)
Eclipse Foundation
European Telecommunications Standards Institute (ETSI)
FIDO Alliance
International Electrotechnical Commission (IEC)
International Organization for Standardization (ISO)
Linux Foundation Decentralized Trust (LFDT)
OpenWallet Foundation (OWF)
Trust Over IP (TOIP)
World Wide Web Consortium (W3C)

Industry Consortia

Cloud Signature Consortium (CSC)
Digital Credentials Consortium (DCC)
Global Legal Entity Identifier Foundation (GLEIF)

Next, we'll look at the exciting conference agenda and highlight key sessions for the DIF community.

The Agenda

The conference is structured across two distinct days, each with a specific purpose. Day 1 features plenary sessions designed to provide comprehensive overviews of global initiatives and sector-specific developments in digital identity. This agenda is nearly finalized and a draft has been published.

Day 2 offers a more interactive format with parallel presentations, technical deep dives, workshops, and collaborative sessions. The preliminary Day 2 schedule will be published next week, but we can share an early preview of the key themes and sessions that should be of particular interest to the DIF community.

Day 1: Global Landscape & Sector Scan

Morning sessions feature updates from government and industry stakeholders worldwide
Afternoon sessions explore major use cases across sectors including travel, health, education, and finance

Morning: Opening & Global Landscape

Opening addresses by leaders from ITU, ISO, WHO, and more
Regional updates from: European Commission, Switzerland, United States, China/Singapore, Japan, India, Korea, Australia, Global South

Afternoon: Sector Updates

🚘 Driving Licenses
🧳 Travel Credentials
⚕️ Health Credentials
📚 Educational Credentials
📦 Trade
💸 Payments
🏢 Organizational Credentials
🪙 Digital Assets
🪪 Standards for ID and Wallets
🔏 Digital Signatures
🔑 Car Keys

Day 2: Technical Deep Dives and Working Sessions

Day 2 features parallel sessions where participants will be encouraged to follow their interests plus share their experience and expertise.

Parallel sessions across multiple tracks including:

Privacy & Security: Zero-knowledge proofs, unlinkability
Industry and Organizational Focus: Industry 4.0, Digital Product Passports, European Business Wallet
Implementation & Deployment: Real-world wallet applications
Standards & Interoperability: Cross-border credential exchange
Policy & Regulation: Governance frameworks
Emerging Technology: Emerging needs around AI and digital identity
Demo Hour: See wallet applications and more

Learn More and Participate

Get Updates

There will soon be a GDC web site to more easily access event information and schedule. For now, we recommend:

Follow Global Digital Collaboration on LinkedIn
And of course, subscribe to the DIF blog for additional updates focused on the DIF community

Ready to Register?

You can also register through any co-organizer available at https://lu.ma/gc25

👉 DIF community members are encouraged to use DIF's dedicated registration link: https://lu.ma/gc25-dif

Tickets are free of charge and grant full access to the entire conference, regardless of the organization used during registration.

Hotels & Discounts

The upcoming GDC web site will be updated with the latest information. For now, feel free to use the discount codes in this Google document.

Looking Forward

The Global Digital Collaboration conference represents a unique opportunity for advancing digital identity solutions that can work across borders while putting users in control. DIF is committed to ensuring privacy and agency remain front and center in these conversations.

For those in the DIF community and beyond, this is an unparalleled opportunity to shape the future of digital identity in collaboration with global decision-makers and implementers.


Next Level Supply Chain Podcast with GS1

Turning Wellness Rituals Into Retail Wins With Aisha Chottani

How do you build a beverage brand from scratch and land in over a thousand stores?

For Aisha Chottani, it started with stress and a few homemade "potions".

In this episode, Aisha, Founder and CEO of Moment, joins hosts  Reid Jackson and Liz Sertl to talk through what really goes into launching and scaling a functional drink brand. From labeling boxes by hand to managing relationships with co-packers and navigating supply chain failures, Aisha shares the behind-the-scenes story most startup founders keep to themselves.

She also gets real about what went wrong, like barcode mix-ups and Amazon returns gone sideways, and how those lessons became systems that power Moment's growth today.

In this episode, you'll learn:

Why small brands need relationships more than volume

How early mistakes can turn into long-term wins

What to watch out for when scaling distribution and operations

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:34) Building a global mindset from four continents

(03:07) From McKinsey burnout to homemade "potions"

(06:06) Barcode errors and the pain of early logistics

(08:21) Growing Moment to 1,000 stores and 30 DCs

(11:33) What small brands can leverage on

(14:06) Collaborating with Lululemon

(17:15) Why Moment leans into a subscription model 

(20:39) Operational failures to learn from

(27:36) Aisha's favorite technology

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Aisha Chottani on LinkedIn
Check out Moment

Tuesday, 20. May 2025

Trust over IP

EGWG 2025-05-15: The C2PA Conformance Program, Scott Perry

The Coalition for Content Provenance and Authenticity (C2PA) has created an open technical standard providing publishers, creators, and consumers the ability to trace the origin of different types of media. Over the last year, the organization has been creating a governance framework and conformance program (based on the ToIP Metamodel) and is ready to launch it this quarter.

Watch the full recording on YouTube.

Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/EGWG-2025-05-15_-The-C2PA-Conformance-Program-Scott-Perry.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

Briefing Document: Review of C2PA and its Governance

Date: May 15, 2025
Source: Excerpts from “GMT20250515-145218_Recording_2560x1440.mp4”
Presenter: Scott Perry, Co-chair of Trust over IP’s Foundations Steering Committee, Founder and CEO of the Digital Governance Institute, Co-chair of the Creator Assertions Working Group at the Decentralized Identity Foundation (DIF).
Topic: C2PA (Coalition for Content Provenance and Authenticity) and the Application of Trust over IP’s Governance Metamodel.

1. Executive Summary

This briefing summarizes a presentation by Scott Perry on the Coalition for Content Provenance and Authenticity (C2PA) and the application of the Trust over IP (ToIP) governance metamodel to its conformance program. The C2PA is an industry-wide initiative creating a technical standard to attach “truth signals” or provenance information to digital objects. Facing a critical need to operationalize and govern this specification to ensure market trust and adoption, the C2PA has adopted the ToIP governance metamodel. This framework provides the necessary structure to establish a conformance program, define roles and responsibilities, manage risks, and create trust lists for compliant products and certification authorities. The program is set to officially launch on June 4th, initially focusing on self-assertion for conformance and introducing two levels of implementation assurance, with plans for independent attestation and higher assurance levels in the future.

2. Key Themes and Ideas

The Problem of Trust in Digital Objects: The presentation highlights the growing challenge of establishing trust and authenticity for digital content in a world of easily manipulated or AI-generated media. This is particularly relevant for industries like telecommunications struggling with identity and verification, as noted by a participant’s observation about OTPs and SMS verification.

C2PA as a Standard for Provenance and Authenticity: The C2PA specification aims to provide a technical solution by creating a “content credential” or manifest that is cryptographically bound to a digital object. This manifest acts as a ledger of actions taken on the object, providing a history and “nutrition label” of its source and modifications. “Basically, it’s all of the major tech companies except Apple… coming together to create a standard for provenance, authenticity, truth signals on digital objects that can be digitally attached to digital objects.”

Content Credential (Manifest): This is the core mechanism of the C2PA. It is a digitally attached ledger of actions taken on a digital object, such as “Camera took picture,” “edited picture,” or “an AI took this picture.” This manifest is “bound to it and linked to it” in a “cryptographically binding format,” providing tamper evidence.

Scope of C2PA Responsibility: The C2PA primarily focuses on “created assertions,” which are “product-driven,” documenting actions taken within a product (e.g., a camera generating a picture, Photoshop editing an image).

Distinction from “Gathered Assertions”: The C2PA does not take responsibility for “gathered assertions,” which are claims made by individuals or organizations outside of a product (e.g., “I Scott Perry took the picture” or industry-specific identifiers). These are the purview of other groups, such as CAWG, the Creator Assertions Working Group at DIF.

Binding Mechanism: The C2PA uses X.509 certificates to bind the generator product to the digital asset: “when a picture is taken, the X509 certificate … will be used to bind the product to the asset.” This requires camera manufacturers and other product vendors to obtain certificates from approved Certification Authorities (CAs).

The Need for Governance: While the C2PA created a technical specification, it recognized the critical need for a governance framework to operationalize and control the standard’s implementation and use in the market: “the key aspect is you have a spec out but you can’t control the use of the specification… they couldn’t get, you know, their arms around … controlling the specification’s use.”

Application of ToIP Governance Metamodel: Scott Perry highlights how the ToIP governance metamodel provided the necessary structure for the C2PA to build its conformance program: “I came in with my toolkit from the Trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves.”

Key Components of the Governance Program (based on ToIP):

Risk Assessment: Started with a “threats and harms task force” to identify major risks, particularly around the tampering of evidence and manifests.
Governance Requirements and Framework: Defined primary documents (specification, security requirements, legal agreements) and control documents (privacy, inclusion, equitability requirements). A key output is a glossary of terms for the new ecosystem.
Governance Roles and Processes: Identified key roles: the Governing Authority (C2PA Steering Committee), the Administering Party (Conformance Task Force), and Governed Parties (CAs, Generator Product companies, Validator Product companies).
Legal Agreements: Formal agreements are being established between the C2PA and governed parties outlining roles, responsibilities, conformance requirements, and dispute resolution mechanisms.
Conformance Criteria and Assurance: Defined based on the C2PA specification and implementation security requirements. The program includes “four levels of assurance around the implementation of products,” though it is initially rolling out with two levels. These levels are tied to “security objectives” and assessed against the “target of evaluation” (the product and its supporting infrastructure).
Conformance Process: Involves an intake form, application review, assessment of infrastructure (initially self-assertion, moving towards independent attestation), legal agreement signing, and adding records to trust lists.
Residual Risk Assessment and Adaptation: The program includes a process to learn from the rollout, identify unmet requirements or issues, and adapt the program for continuous improvement.

Trust Lists (Registries): Central to the program are trust lists identifying approved Generator Products, Validator Products, and Certification Authorities. A timestamp authority trust list is also being added.

Levels of Assurance: The program is defining levels (initially rolling out two) to reflect different degrees of confidence in the implementation of the C2PA specification and associated security requirements. Achieving a higher level of assurance requires meeting all requirements for that level.

Self-Assertion (Initial Rollout): Due to the complexity of auditing and getting the program launched quickly, the initial phase requires participants to self-assert that they meet the specification and requirements.

Conformance Certificate: Upon successful conformance, products will receive a certificate tied to an OID (Object Identifier) denoting the assurance level they have achieved. This OID in the manifest’s certificate will identify the assurance level of the provenance information.

JPEG Trust and Copyright: While C2PA provides provenance information that can be used for copyright, it doesn’t define ownership or copyright laws. JPEG Trust is mentioned as an organization creating an ISO standard focused on copyrights in concert with the C2PA standard.

Relationship with W3C: The C2PA is actively engaged with the W3C, with discussions happening at the technical working group level regarding related standards like PROV (for provenance).

Future Directions: Plans include introducing higher levels of assurance, implementing independent attested conformance, developing quality control software for assessing product compliance, and establishing a fee structure for the conformance program.

CAWG as a Broader Ecosystem: CAWG is viewed as a potentially larger ecosystem dealing with identity, metadata, endorsements, and AI learning process specifications, which will need to create its own applications and standards that can integrate with the C2PA foundation.

3. Important Ideas and Facts

The C2PA is the Coalition for Content Provenance and Authenticity. It includes major tech and product manufacturers, excluding Apple initially but aiming to include them.
The core technical output is the Content Credential (Manifest), a digitally attached ledger of actions on a digital object.
The manifest provides tamper evidence and binds the product to the asset using X.509 certificates.
C2PA focuses on “created assertions” (product-driven actions), leaving “gathered assertions” (individual/organizational claims) to other groups like CAWG.
The Trust over IP governance metamodel has been successfully applied to structure the C2PA conformance program.
The program addresses threats and harms related to tampering and requires adherence to implementation security requirements.
The C2PA conformance program will officially launch on June 4th at the Content Authenticity Initiative symposium in New York City.
The initial launch will include two levels of implementation assurance and a self-assertion confidence model.
Key outputs of the governance program are legal agreements and trust lists of conforming products and certification authorities.
The C2PA standard is becoming an ISO standard this year.
Timestamp authorities will play a crucial role in providing trust signals related to the time of claim assertion.
The program includes mediation and dispute resolution mechanisms in its legal agreements.
The governance program provides the structure for the C2PA to “operationalize the spec” and control its use.

4. Key Quotes

“Basically, it’s all of the major tech companies except Apple… coming together to create a standard for provenance, authenticity, truth signals on digital objects that can be digitally attached to digital objects.”
“What it’s proposed to do is to create a ledger of actions against a digital object that is bound to it.”
“It’s kind of the nutrition label on food… it’s really the nutrition label of all digital objects.”
“The C2PA did not want to get involved in all of the potential root, you know, actions and variances about those types of things. They wanted to create the platform.”
“They create the platform and they create the binding between the digital asset and the manifest using X509 certificates.”
“The key aspect is you have a spec out but you can’t control the use of the specification… they couldn’t get, you know, their arms around … controlling the specification’s use.”
“The governance program was needed to operationalize the spec. The spec had, you know, a limitation in its usefulness without a governance program around it.”
“I came in with my toolkit from the Trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves.”
“We’re creating a program which will hold generator and validator products accountable to the specification that’s already been published.”
“We are creating two levels of implementation assurance and we are using a self-assertion confidence model; we don’t have the mechanisms in place to hold organizations accountable for meeting the specification; we don’t have, you know, an assurance mechanism in place yet to do that.”
“It is the hope that, you know, copyright laws can use the trust signals that are coming from the C2PA specification and conformance program in use for defining ownership and copyright.”
“The conformance criteria is the spec and the spec is now at level 2.2.”
“We are looking at levels of assurance around the implementation of a product. Now it’s not just the product but it’s also its infrastructure.”
“These are the kinds of records that are in the schema for the trust list.”

5. Next Steps

Official launch of the C2PA conformance program on June 4th.
Continued work on independent attestation and higher levels of assurance for the conformance program.
Development of quality control software or processes for assessing product compliance.
Ongoing collaboration with W3C and other relevant standards bodies.
Further exploration of the broader CAWG ecosystem and its integration with C2PA.
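As a deliberately simplified sketch of the binding described above (real C2PA uses JUMBF containers and COSE signatures with X.509-certified keys; the asset bytes and tool names here are invented), the hash ties the manifest to the asset, and the signature ties the manifest to the product:

```python
# Toy illustration of the C2PA binding concept, NOT the actual C2PA format.
# A manifest (ledger of actions) references the asset by hash, and the
# manifest is signed with a key that, in the real system, is certified by
# an approved CA. Tampering with the pixels breaks the hash; tampering
# with the ledger breaks the signature.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

asset = b"\xff\xd8 example image bytes"  # stand-in for a real JPEG

manifest = {
    "asset_sha256": hashlib.sha256(asset).hexdigest(),
    "actions": [
        {"action": "c2pa.created", "tool": "ExampleCam v1"},
        {"action": "c2pa.edited", "tool": "ExampleEditor 2.0"},
    ],
}

product_key = Ed25519PrivateKey.generate()  # stands in for the certified key
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
signature = product_key.sign(manifest_bytes)

# A validator recomputes the asset hash, compares it to the manifest,
# and checks the signature (raises InvalidSignature if the ledger changed).
assert hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]
product_key.public_key().verify(signature, manifest_bytes)
```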

This briefing provides a foundational understanding of the C2PA, its technical specification, and the crucial role of the newly established governance program, structured using the Trust over IP metamodel, in driving its adoption and ensuring trust in the digital content landscape.

For more details, including the meeting transcript, please see our wiki: 2025-05-15 Scott Perry & The C2PA Conformance Program.

https://www.linkedin.com/in/scott-perry-1b7a254/ https://digitalgovernanceinstitute.com/

The post EGWG 2025-05-15: The C2PA Conformance Program, Scott Perry appeared first on Trust Over IP.


FIDO Alliance

Business Review: NETOPIA Payments launches Click to Pay: a simpler, faster, and more secure online payment experience

NETOPIA Payments becomes the first online payment processor in the world to implement Click to Pay with Passkey FIDO (Fast Identity Online) – a modern online checkout solution built on […]

NETOPIA Payments becomes the first online payment processor in the world to implement Click to Pay with Passkey FIDO (Fast Identity Online) – a modern online checkout solution built on EMV® global standards, designed to redefine the digital payment experience: faster, safer, and without manual card data entry.


White Paper: DBSC/DPOP as Complementary Technologies to FIDO Authentication

Editors

Shane Weeden, IBM
An Ho, IBM

Abstract

Session hijacking is a growing initial attack vector for online fraud and account takeover. Because FIDO authentication reduces the effectiveness of other simpler forms of compromise, such as credential stuffing and phishing, cybercriminals turn to theft and re-use of bearer tokens. Bearer tokens are a form of credential which include session cookies used by browsers connecting to websites and OAuth access tokens used by other thick client application types such as native mobile applications. When these credentials are long-lived and can be “lifted and shifted” from the machine where they were created to be usable by a bad actor from another machine, their tradable value is significant. Emerging technologies such as Device Bound Session Credentials (DBSC) for browsers and Demonstrating Proof of Possession (DPoP) for OAuth applications seek to reduce the threat of session hijacking. This article describes how these technologies address the problem of session hijacking and how they complement strong phishing resistant authentication in online ecosystems.

Audience

This white paper is for chief information security officers (CISOs) and technical staff whose responsibility it is to protect the security and life cycle of online identity and access management from online fraud. 

Download the White Paper

1. Introduction

Authentication and authorization are integral parts of an identity lifecycle, especially for online credential ecosystems. The growing threat of online identity fraud, with costly security incidents and breaches, has enterprises looking for ways to protect and secure their workforces from account takeover through different attack vectors such as phishing, credential stuffing, and session hijacking. For authentication, FIDO authentication with passkeys provides users with “Safer, more secure, and faster online experiences,” and the increasing adoption of passkeys has reduced the success of credential phishing, credential stuffing, and session hijacking accomplished via man-in-the-middle (MITM) phishing attacks. However, what happens after the authentication ceremony?

After authentication, browsers and application clients are typically issued other credentials. Enterprise applications generally fall into two primary categories: those that are web browser based and use session cookies for state management and those that are thick client applications using OAuth access tokens (this includes some browser-based single page applications and most native mobile applications). Both types of credentials (session cookies and access tokens) are considered, in their basic use, as “bearer” tokens. If you have the token (the session cookie or the access token), then you can continue to transact for the lifetime of that token as the user who authenticated and owned it.

This whitepaper explores adjacent technologies that address the “lift and shift” attack vector for bearer tokens and how these technologies complement FIDO-based authentication mechanisms. In particular, this paper focuses on the proposed web standard Device Bound Session Credentials (DBSC) for protecting browser session cookies and OAuth 2.0 Demonstrating Proof of Possession (DPoP) for protecting OAuth grants.

2. Terminology

session hijacking: An exploitation of the web session control mechanism that is normally managed for a session cookie.

credential stuffing: An automated injection of stolen username and password pairs (credentials) into website login forms to fraudulently gain access to user accounts.

access token: A credential used by a client-side application to invoke API calls on behalf of the user.

session cookie: A credential managed by browsers to maintain session state between a browser and a website.

bearer token: A token (in the context of this whitepaper, either an access token or a session cookie), so called because whoever holds the token can use it to access resources. A bearer token on its own can be “lifted and shifted” for use on another computing device.

sender-constrained token: A token protected by a mechanism designed to minimize the risk that anything other than the client which established the token during an authentication process could use that token in subsequent requests for server-side resources.

Device Bound Session Credential (DBSC): A proposal for a W3C web standard defining a protocol and browser behavior to establish and maintain sender-constrained cookies. The mechanism uses proof of possession of an asymmetric cryptographic key to help mitigate session cookie hijacking.

OAuth 2.0 Demonstrating Proof of Possession (DPoP): A mechanism for implementing sender-constrained access tokens that requires clients to demonstrate possession of an asymmetric cryptographic key when using the token.

1. Passkeys – https://fidoalliance.org/passkeys/
2. Device Bound Session Credentials – https://github.com/w3c/webappsec-dbsc
3. OAuth 2.0 Demonstrating Proof of Possession (DPoP) – RFC 9449: https://datatracker.ietf.org/doc/html/rfc9449
4. Session hijacking attack – https://owasp.org/www-community/attacks/Session_hijacking_attack
5. Credential stuffing – https://owasp.org/www-community/attacks/Credential_stuffing

3. Adjacent/complementary technologies for a secure ecosystem

While FIDO authentication technology can effectively eliminate phishing and credential stuffing attacks that occur during the login process, solutions that mitigate the threats associated with bearer token theft are equally important. Bad actors whose attacks are thwarted during the login process will go after the next weakest link in the chain and try to steal post-authentication bearer tokens. This section explores two technologies for protecting bearer tokens: Device Bound Session Credentials (DBSC), which protects browser-based session cookies, and Demonstrating Proof of Possession (DPoP), which protects OAuth grants. Alternative approaches to protecting bearer tokens are also discussed.

Because no single piece of technology can protect against all threats, a combination of multiple techniques is required for adequate protection.

Table 1: Combination of technologies for increased security

Technologies | Remote Phishing (authentication threat) | Credential Stuffing (authentication threat) | Token Theft (post-authentication threat)
Passkeys | protected | protected | not addressed
DBSC/DPoP | not addressed | not addressed | protected
Passkeys + DBSC/DPoP | protected | protected | protected

3.1 Browser session cookie security

Before discussing Device Bound Session Credentials (DBSC), it helps to understand the problem being addressed: theft of browser session cookies. Session hijacking via cookie theft allows an attacker who possesses stolen cookies to bypass end-user authentication, including any strong or multi-factor authentication (MFA). This is particularly problematic when browsers create long-lived session cookies (which are a type of bearer token), since these cookies can be traded as alternatives to a user’s primary authentication credentials and then used from the attacker’s machine. This can lead to unauthorized access to sensitive data, financial loss, and damage to an organization’s reputation.

Attackers perform cookie theft through various methods such as man-in-the-middle phishing of a user’s existing MFA login process (when phishing-resistant authentication such as FIDO is not used), client-side malware, and occasionally through vulnerabilities in server-side infrastructure or software. Regardless of how cookie theft is perpetrated, when successful, these attacks are not only dangerous, but also hard to isolate and detect. Complementary technologies, such as Device Bound Session Credentials (DBSC), minimize the risks associated with browser cookie theft by making stolen cookies impractical to use from any machine other than the machine to which they were issued during authentication.

3.2 Device Bound Session Credentials – DBSC

DBSC refers to a proposed web standard currently in development within the Web Application Security working group of the W3C[2]. The goal of DBSC is to combat and disrupt the market for stolen web session cookies. It does this by defining an HTTP messaging protocol, together with required browser and server behaviors, that binds the use of application session cookies to the user’s computing device. DBSC uses an asymmetric key pair, and in browser implementations the private key should be unextractable by an attacker – for example, stored within a Trusted Platform Module (TPM), secure element, or similar hardware-based cryptographic module.

At a high level, the API, in conjunction with the user’s browser and secure key storage capabilities, allows for the following:

1. The server communicates to the browser a request to establish a new DBSC session. This includes a server-provided challenge.
2. The browser generates an asymmetric key pair, then sends the public key along with the signed challenge to the server. This process is referred to as DBSC registration. Browser implementations of DBSC should use operating system APIs that facilitate secure, hardware-bound storage and use of the private key.
3. The server binds the public key to the browser session by issuing a short-lived, refreshable auth_cookie, which is then required to be transmitted in subsequent browser requests to the web server.

As the auth_cookie regularly expires, a mechanism is required for the browser to refresh the auth_cookie asynchronously to primary application web traffic. The refresh process requires signing a new server-issued challenge with the same private key created during DBSC registration, thereby re-proving (regularly) that the client browser is still in possession of the same private key.

Limiting the lifetime of the auth_cookie to short periods of time (for example, a few minutes) disrupts the market for trading long-lived session cookies. An attacker can only use stolen session cookies (including the auth_cookie) for a brief period, and cannot perform a refresh, since the private key required to do so is not extractable from the client machine.
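To make those steps concrete, here is a minimal server-side sketch of the pattern in Python. It is illustrative only: the function names, in-memory storage, and cookie format are hypothetical simplifications of the behavior described above, not the wire protocol defined by the DBSC proposal.

```python
"""
Minimal server-side sketch of the DBSC pattern. Illustrative only: function
names, in-memory storage, and the cookie format are hypothetical, not the
wire protocol defined by the W3C DBSC proposal.
"""
import secrets
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

SESSIONS: dict[str, dict] = {}  # session_id -> {"challenge": ..., "public_key": ...}
AUTH_COOKIE_TTL = 300           # seconds; the short lifetime is the point


def start_registration(session_id: str) -> bytes:
    """After login, challenge the browser to establish a DBSC session."""
    challenge = secrets.token_bytes(32)
    SESSIONS[session_id] = {"challenge": challenge, "public_key": None}
    return challenge


def complete_registration(session_id: str,
                          public_key: ec.EllipticCurvePublicKey,
                          signature: bytes) -> str:
    """Verify the signed challenge, bind the public key, issue an auth_cookie."""
    record = SESSIONS[session_id]
    # Raises InvalidSignature if the browser cannot prove key possession.
    public_key.verify(signature, record["challenge"], ec.ECDSA(hashes.SHA256()))
    record["public_key"] = public_key
    return mint_auth_cookie(session_id)


def refresh(session_id: str, signature: bytes) -> str:
    """Re-prove possession of the same private key, then re-issue the cookie."""
    record = SESSIONS[session_id]
    try:
        record["public_key"].verify(signature, record["challenge"],
                                    ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        raise PermissionError("possession proof failed; refusing to refresh")
    record["challenge"] = secrets.token_bytes(32)  # rotate for the next refresh
    return mint_auth_cookie(session_id)


def mint_auth_cookie(session_id: str) -> str:
    # A real implementation would sign or encrypt this value server-side.
    expires = int(time.time()) + AUTH_COOKIE_TTL
    return f"{session_id}.{expires}.{secrets.token_hex(8)}"
```

Note how the refresh path is the enforcement point: a stolen auth_cookie expires within minutes, and only a client holding the registered private key can obtain a replacement.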

DBSC may be introduced into existing deployments with minimal changes to the application. This is important because DBSC can be incorporated as a web plugin module in existing server-side technology (for example, an Apache module, Servlet filter, or reverse proxy). This permits enterprises to roll out DBSC in phases without a complete overhaul of current infrastructure, prioritizing critical endpoints or resources first.

DBSC server-side implementations can also be written to permit fallback semantics, for example: “If the browser supports DBSC, use it; otherwise fall back to regular session characteristics.” This allows users to gain the security advantages of DBSC when they use a browser that supports it, without requiring all users to upgrade their browsers first.
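A sketch of that fallback policy follows, assuming a hypothetical session object whose dbsc_bound flag is set once the browser completes DBSC registration:

```python
# Hedged sketch of graceful fallback; the session object and its dbsc_bound
# flag are hypothetical, set by registration handling like the sketch above.
def allow_request(session, app_requires_dbsc: bool) -> bool:
    if getattr(session, "dbsc_bound", False):
        return True                   # device-bound session: best case
    return not app_requires_dbsc      # unbound: allow only where policy permits
```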

Refer to the Device Bound Session Credentials explainer for more details on the DBSC protocol and standard, including a proposal for enterprise-specific extensions that add attestation to DBSC key pairs.

3.2.1 What makes DBSC a technology complementary to FIDO?

The DBSC draft standard permits the login process to be closely integrated with the DBSC API. While FIDO is a mechanism that makes authentication safer and phishing resistant, DBSC is a mechanism that makes the bearer credential (session cookie) safer post-authentication. They complement each other by reducing the risk of account takeover and abuse, making the entire lifecycle of application sessions safer.

3.2.2 Alternative solutions

DBSC is not the first standard to propose binding session cookies to a client device. Token Binding is an alternative that combines IETF RFCs 8471, 8472, and 8473. Token Binding over HTTP is implemented via a Transport Layer Security (TLS) extension and uses cryptographic key pairs, proven at the TLS layer, to bind tokens to a TLS session. Token Binding is complex to implement because it requires changes at both the application layer and in TLS security stacks, and it has seen limited adoption: only one major browser currently offers support.

3.2.3 Advice

The DBSC standard relies on local device security and operating system APIs for storage and use of the private key that is bound to the browser’s session. While these private keys cannot be exported to another device, the key is available on the local system and may be exercisable by malware residing on the user’s device. Similarly, in-browser malware still has complete visibility into both regular session cookies and short-lived auth_cookies. DBSC is not a replacement for client-side malware protection, and the threat model for DBSC does not provide protections from persistent client-side malware. Ultimately, the user must trust the browser.

As browsers start to support DBSC over time, it will be important for servers to be able to work with a mix of browsers that do and do not include support for this technology. Some enterprises may dictate that corporate issued machines include browsers known to support DBSC, but many will not. It will be necessary for server-side implementations to take this into consideration, using DBSC when the browser responds to registration requests, and tolerating unbound session cookies when the browser does not. When building or choosing a commercial solution, ensure you consider this scenario, and include the ability to implement access control policies that strictly require DBSC in highly controlled or regulated environments or for specific applications.

At the time of writing, DBSC is in early evolution. It remains to be seen whether or not it will be widely adopted by browser vendors. The hope is that incubating and developing this standard via the W3C will result in wider adoption than previous proposals, similar to the way that the WebAuthn API has been adopted to bring passkey authentication to all major browser implementations.

4. OAuth grants

The previous section introduced DBSC as a means to protect against session cookie theft in web browsers. Thick application clients, including mobile applications and single-page web applications, typically use stateless API calls leveraging OAuth grants instead of session cookies. An OAuth grant may be established in several ways, with the recommended pattern for thick clients[6] being to first use the system browser to authenticate a user and grant access for an application to act on their behalf. Conceptually this is remarkably similar to browser-based sessions, including the ability, and recommendation, to use FIDO authentication for end-user authentication when possible. At the conclusion of the browser-based authentication portion of this flow, control is returned to the thick client application or single-page web application, where tokens are established for use in programmatic API calls.

The challenge from this point forward is almost identical to that described for browsers: the OAuth tokens are bearer tokens that, if exposed to a bad actor, can be used to call application APIs from a remote machine instead of from the legitimate application.

This section describes the use of DPoP, a technology for protecting against the “lift and shift” of credentials used in OAuth-protected API calls. Just like DBSC, it makes use of an asymmetric key pair and ongoing proof of possession of the private key.

4.1 Demonstrating Proof of Possession (DPoP)

OAuth 2.0 Demonstrating Proof of Possession (DPoP) is an extension to the existing OAuth 2.0 standard for implementing device-bound (or sender-constrained) OAuth access and refresh tokens. It is an application-level mechanism that binds the tokens associated with an OAuth grant (that is, refresh tokens and access tokens) to the requesting client using a public and private key pair. The client must prove possession of its private key to the authorization server when performing access token refresh operations, and to resource servers when using access tokens to call APIs.

High assurance OpenID specifications, such as Financial-grade API (FAPI 2.0), mandate the use of sender-constrained tokens, and DPoP is the recommended method for implementing this requirement when Mutual TLS (mTLS) is not available.

6. OAuth 2.0 for Native Apps – https://datatracker.ietf.org/doc/html/rfc8252

At a high level, DPoP requires that:

1. The client generates a per-grant public/private key pair to be used for constructing DPoP proofs. Best-practice implementations should use operating system APIs to ensure the private key is non-extractable.
2. On initial grant establishment (for example, exchanging an OAuth authorization code for the grant’s first access token and refresh token), a DPoP proof (a JWT signed by the client’s private key that contains, among other things, a copy of the public key) is used to bind a public key to the grant.
3. Requests to a resource server using an access token obtained in this manner must also include a DPoP proof header, continuously proving possession of the private key used during grant establishment. This is done for every API request.
4. Resource servers are required to check if an access token is sender-constrained, confirm the public key, and validate the DPoP proof header on each API call.
5. For public clients, subsequent refresh_token flows to the authorization server’s token endpoint must also contain a DPoP proof signed with the same key used during initial grant establishment. This is particularly important as refresh tokens are often long-lived and are also a type of bearer token (that is, if you have it you can use it). The authorization server must enforce the use of a DPoP proof for these refresh token flows and ensure signature validation occurs via the same public key registered during initial grant establishment.
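To make the proof format concrete, here is a minimal client-side sketch of DPoP proof construction per RFC 9449, using Python with the PyJWT and cryptography libraries. It is illustrative only: the key is generated in software for brevity, whereas a best-practice client would hold it in platform-backed, non-extractable storage, and the URL and token in the usage comment are hypothetical.

```python
"""
Sketch of DPoP proof construction per RFC 9449 (PyJWT + cryptography).
Illustrative only: a real client should keep the private key in
platform-backed, non-extractable storage.
"""
import base64
import hashlib
import time
import uuid

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# Per-grant key pair (software-generated here for brevity).
private_key = ec.generate_private_key(ec.SECP256R1())


def _b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def _public_jwk(key: ec.EllipticCurvePrivateKey) -> dict:
    """Export the public key as the JWK carried in the DPoP proof header."""
    nums = key.public_key().public_numbers()
    return {
        "kty": "EC",
        "crv": "P-256",
        "x": _b64url(nums.x.to_bytes(32, "big")),
        "y": _b64url(nums.y.to_bytes(32, "big")),
    }


def dpop_proof(method: str, url: str, access_token: str | None = None) -> str:
    """Build the signed proof JWT sent in the DPoP HTTP header."""
    claims = {
        "jti": str(uuid.uuid4()),   # unique per proof, enables replay detection
        "htm": method,              # HTTP method of the request being proven
        "htu": url,                 # target URI of the request
        "iat": int(time.time()),
    }
    if access_token is not None:
        # Resource requests also bind the proof to the access token ("ath").
        claims["ath"] = _b64url(hashlib.sha256(access_token.encode()).digest())
    return jwt.encode(
        claims,
        private_key,
        algorithm="ES256",
        headers={"typ": "dpop+jwt", "jwk": _public_jwk(private_key)},
    )


# Usage (hypothetical URL and token): the access token travels with the
# "Authorization: DPoP <token>" scheme and the proof in the DPoP header.
# headers = {
#     "Authorization": f"DPoP {access_token}",
#     "DPoP": dpop_proof("GET", "https://api.example.com/resource", access_token),
# }
```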

Unlike a plain bearer access token, which can be used by any holder, a DPoP-based access token is bound to the client that initially established the OAuth grant, since only that client can sign DPoP proofs with the private key. This approach minimizes the risks associated with malicious actors trading leaked access tokens.

Refer to DPoP RFC 9449 – OAuth 2.0 Demonstrating Proof of Possession (DPoP) for more information.

4.2 What makes DPoP a complementary technology to FIDO?

FIDO can be leveraged for phishing resistant end-user authentication during establishment of an OAuth grant. Refresh and access tokens obtained by a client following this authentication should be safeguarded against “lift and shift” attacks just like session cookies in browser-based apps. DPoP is a recommended solution for protecting these OAuth tokens from unauthorized post-authentication use. Together, FIDO for end user authentication and DPoP for binding OAuth tokens to a client device complement each other to improve the overall security posture for identities used in thick client applications.

4.2.1 DPoP alternative solutions

RFC8705 – OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens describes a mechanism that offers a transport-layer solution to bind access tokens to a client certificate. While it has been approved for use in FAPI 2.0 for open banking solutions, it is not particularly suitable for public clients such as native mobile applications.

RFC9421 – HTTP Message Signatures defines an application-level mechanism for signing portions of an HTTP message. Key establishment and sharing between the client and verifier are not defined by this specification, although this could be performed in a trust-on-first-use manner during initial grant establishment, in a similar manner to DPoP. There is no known public specification that maps the use of HTTP message signatures to the use case of sender-constrained bearer tokens in an OAuth client application. In the absence of such a public specification, widespread adoption for this use case is unlikely.

4.2.2 Advice

Sender-constrained tokens are a good idea, and, in some deployments, they are a regulatory requirement. For example, use of the FAPI profiles of OAuth is now mandated by many sovereign open banking initiatives. DPoP is a relatively simple way to achieve this requirement and is flexible enough to cover a wide range of application client types. That said, care must still be taken to adhere to the security considerations of DPoP. Pay close attention to Section 11 of RFC 9449, and apply other application security strategies for native or browser-based single page applications as your scenario dictates. Remember that DPoP is focused solely on addressing the threats associated with token exfiltration, which include trading and use by malicious actors. It should be considered part of a defense-in-depth strategy for OAuth applications.

5. Conclusion

The intent of this paper is to inspire thinking around how different web security standards fit together and how those standards relate to the use of FIDO authentication for users. There are so many standards and standards bodies that it is often hard to understand which compete in the same space and which augment one another to form part of a comprehensive defense-in-depth strategy for identity fraud protection in online applications.

This paper tackled a specific, prevalent application security problem – the malicious trading and use of stolen cookies and access tokens. This paper also showed how technologies such as DBSC and DPoP mitigate the threats associated with token theft and how these technologies are complementary to FIDO authentication. Paired with FIDO, DBSC and DPoP provide greater overall identity fraud protection for your applications.


We Are Open co-op

Building Power and Making Change

Community strategies and tools

cross-posted on the Amnesty UK blog

Community-driven change is more important than ever. Whether we are advocating for social justice, environmental sustainability, or political reform, collective action is how we create lasting impact. But how do we build a movement that starts with individual curiosity and grows into sustained activism? That’s where the Amnesty International UK (AIUK) community platform project comes in — a digital hub designed to empower individuals, support collaboration, and drive meaningful change.

This blog post outlines how the platform and community strategy work together to guide people from discovery of the AIUK community to becoming activists within it.

1. Discovery

The journey of community-driven change starts with discovery. This is the stage where individuals first come into contact with AIUK. Maybe they learn about an issue, identify it as important, and begin to consider how they might want to get involved. Or maybe they meet someone at a demonstration, and discover the community first-hand.

AIUK social media broadcasting is just one tool that helps people discover Amnesty International UK. AIUK makes complex issues accessible and relatable, and we want the community platform to do the same as AIUK highlights grassroots efforts and community initiatives.

We want to encourage posts that show:

- Our dedicated community, highlighting key grassroots initiatives and campaigns.
- Signposts to find local groups or events based on interests.
- Digital actions, such as petitions or downloading campaign guides, to help users take their first steps.

Such content ensures that even people who are new can find relevant AIUK communities and take the first steps toward engagement.

2. Intention to Engage

Once someone discovers a cause they care about, the next step is forming an intention to engage. This stage is all about commitment — moving from passive interest to active participation.

By showcasing community on the AIUK website, we both invite people in and celebrate what the community is achieving. We want to present clear pathways for involvement and help community members inspire others to take steps towards action.

We need to figure out processes that help:

- Goal-setting: Encouraging community members to set personal milestones, like committing to attend 100 meetings.
- Sharing success: Telling success stories and finding testimonials that effectively attract new people while celebrating community achievements.
- Balancing information: Showcasing static information about past successes with dynamic, real-time updates on current campaigns from the community.

By making it easy for people to express their intent and take small but meaningful steps, we build confidence and lay the groundwork for deeper engagement.

3. Taking Action: Turning Intent into Impact

With intention comes action, and this is where real change begins. At this stage, people start to feel a sense of belonging and are ready to contribute to a cause they care about.

A knowledge base can help equip users with actionable tools. We’ll need clear resources and learning pathways that:

- Guide people to the right information: Whether it’s organising a protest, writing letters to policymakers, or starting a local campaign, the knowledge hub can provide step-by-step guidance tailored to issues we work on.
- Help people collaborate: People should be able to connect with others who share their interests and work together on projects — whether virtually or in person. Best practices and community policies may also be at home in the knowledge hub.
- Show them into the community: Make sure that people feel supported and seen as they take action. Create an architecture of participation that brings them into the community platform.

This stage is about turning isolated actions into collective power, with the support of the community ensuring that every contribution counts.

4. Sustaining Action: Building Lasting Commitment

Sustained action is the key to creating lasting change. Too often, movements fizzle out after an initial burst of energy, but with a strong community strategy and integrated platform, we can keep momentum going.

To sustain engagement, the community platform needs to help people align with others in the AIUK movement. We need to think about:

- Feedback loops: Regular check-ins with the community to understand their needs and ensure that we are adapting the community strategy and platform accordingly.
- A recognition ecosystem: Using digital badges and shoutouts for individuals or groups who demonstrate consistent commitment to help us make activism more visible.
- Storytelling opportunities: Sharing success stories and lessons learned will inspire others and keep motivation high.

By encouraging a sense of belonging and purpose, we ensure that members find reasons to continue building collective power for human rights.

5. Becoming an Activist: Empowering Future Leaders

The final stage is becoming an activist. At this point, individuals understand that community isn’t one person, but rather all of us. They begin to work on behalf of others, coordinate together and lift people up with their leadership.

These leaders will use other coordination tools and processes and that’s great! We want to empower the development of activist and leadership skills. We’ll need:

- Decentralised coordination best practices: For members who are ready to take on larger roles, such as leading groups or campaigns.
- Mentorship programs: Connecting experienced activists with newcomers to share knowledge and build networks.
- Advocacy training: Workshops, webinars, and resources focused on effective communication, policy advocacy, and community organising.

Through these efforts, we can go beyond nurturing individual leaders to continue building a movement.

The Power of Community Work in Driving Change

The journey from discovery to becoming an activist is a process of gradual engagement and empowerment. There is a system of platforms, processes and content that helps AIUK move people towards becoming activists. Although we use various digital tools, the journey is an emotional and social one.

We are working hard to make sure the community platform project harnesses the collective strength of our community and makes a difference that lasts.

Friday, 16. May 2025

Project VRM

Four Roads to The Intention Economy


Thirteen years after The Intention Economy was published by Harvard Business Review Press, there are now four clear paths toward making it come true.

1. IEEE P7012, aka MyTerms. This will make individuals first parties in their agreements with companies, completely flipping the status quo that has been with us since industry won the Industrial Revolution and manifests today in those insincere and annoying cookie notices that interrupt your experience every time you visit a new website or open a new app. MyTerms makes each of us first parties in agreements with sites and services, and in full charge of personal privacy online.
2. The First Person Project, or FPP (website pending). With help on the buy side from Customer Commons and on the sell side by Ayra, we can finally replace “show your ID” with verifiable credentials presented on an as-needed basis by independent and self-sovereign individuals operating inside their own webs of trust.
3. Visa Intelligent Commerce, which will make intentcasting happen in a big way. It will also elevate the roles of Inrupt and the open-source Solid Project.
4. Personal AI. This is AI that is as much yours as your shoes, your bike, and your PC. Personal, not personalized.

To explain how these will work together, start here:

Not long after The Intention Economy came out in May, 2012, Robert Thomson, Managing Editor of The Wall Street Journal, wanted the book’s opening chapter to serve as the cover essay for the Marketplace section of an upcoming issue. Harvard Business Review Press didn’t like that idea, so I wrote an original piece based on one idea in the book: that shoppers will soon be able to tell the market what they’re looking for, in safe, secure and anonymous ways—a kind of advertising in reverse that the book called “personal RFPs” and has since come to be called “intentcasting.” This became The Customer as a God, which ran as the whole cover of the Marketplace section on Monday, July 23, 2012. The essay opened with these prophetic words: “It’s a Saturday morning in 2022…”

It is now a Friday morning in 2025, and that godly future for customers is still not here. Yes, we have more market power than in 2012, but we are digital serfs whose powers are limited to those granted by Amazon, Apple, Facebook, Google, Microsoft, and other feudal overlords. This system is a free market only to the degree that you can choose your captor. This has led to—

The IONBA (Internet Of Nothing But Accounts) is based on a premise: that the best customers are captive ones. In this relic of the industrial age, customers are captive to every entity that requires logins and passwords. Customers also have no ways of their own to globally control what data is collected about them, or how. Or to limit how that data is used. This is why our digital lives are infected by privacy-killing data-collection viruses living inside our computers, phones, TVs, and cars.

If you didn’t know about those last two, dig:

- Consumer Reports says “All smart TVs—from Samsung, LG, you name it—collect personal data.” They also come with lame “privacy” controls, typically buried deep in a settings menu. (Good luck exhuming them. The ones in our TCL and Samsung TVs have all but disappeared.)
- Mozilla calls new cars “the Worst Product Category We Have Ever Reviewed for Privacy.” There is also nothing you can do to stop your car from reporting on everything your car does—and everything you do, including sexual activity—to the carmaker, insurance companies, law enforcement, and who knows who else. This data goes out through your car’s cell phone, misleadingly called a telematics control unit. The antenna is hidden in the shark fin on your car’s roof or in an outside mirror.

Businesses are also starting to lose faith in surveillance, for at least eight reasons:

1. People hate it.
2. They also fight it. By 2015 ad blocking and tracking protection were the biggest boycott in world history.
3. It tarnishes brands.
4. Ad fraud is a gigantic problem, and built into the system.
5. It commits Chrysoogocide (killing golden geese, most notably publishers). Bonus link.
6. Regulatory pressure against it is getting bigger all the time.
7. Advertisers are finally remembering that brands are made by ads aimed at populations, while personalized ads are just digital junk mail.
8. Customers are using AI tools for guidance toward a final purchase, bypassing marketing schemes to bias purchasing decisions along the way. For more on that, see Tom Fishburne’s cartoon, and Bain’s report about it.

So our four roads to The Intention Economy start with the final failings of the systems built to prevent it. Now let’s look at those roads.

1—IEEE P7012 “MyTerms”

MyTerms, the most important standard in development today, will be a keystone service of Customer Commons, the nonprofit spinoff of ProjectVRM. It will do for contract what Creative Commons did for copyright: give individuals a new form of control. With MyTerms, agreements between customers and companies will be far more genuinely mutual, and open to new forms of innovation not based on the kind of corporate control that typifies the IONBA. For example, it can open Visa Intelligent Commerce to conversations and relationships that go far past transaction, such as market intelligence that flows both ways. While this has been thinkable for a decade or more (that last link is from 2016), it’s far more do-able when customers and companies have real relationships based on equal power and mutual interests. These are best framed up on agreements that start on the customer’s side, and give customers scale across all the companies with which they have genuine relationships.

2—First Person Project (FPP)

To me, FPP begins with the vision “Big Davy” Sallis came up with while he was working for VISA Europe in 2012, after reading The Intention Economy. At the time, he wanted Visa to make VRM a real category, but assumed that would take too long. So he decided to create a VRM startup called Qredo. Joyce and I consulted Qredo until Davy died (far too young) in 2015. Qredo went into a different business, but a draft I created for Qredo’s original website survives, and it outlines much of what the FPP will make possible. That effort is led by Drummond Reed, another friend and collaborator of Davy’s and a participant in ProjectVRM from the start. Drummond says the FPP is inspired by Why We Need First Person Technologies on the Net, a post published here in 2014. That post begins,

We need first person technologies for the same reason we need first person voices: because there are some things only a person can say and do.

Only a person can use the pronouns  “I,” “me,” “my” and “mine.” Likewise, only a person can use tools such as screwdrivers, eyeglasses and pencils. Those things are all first person technologies. They were invented for individual persons to use.

We use first person technologies the same unique ways we use our voices.

Among other things, the First Person Project will fix how identity works on the Internet. With FPI—First Person Identity—interactions with relying parties (the ones wanting “your ID”) don’t need your driver’s license, passport, birth certificate, credit card, or account information. You just give them what’s required, on an as-needed basis, in the form of verifiable credentials. The credentials you provide can verify that you are a citizen of a country, licensed to drive, have a ticket to a game, or whatever. In other words, they do what Kim Cameron outlined in his Laws of Identity: disclose minimum information for constrained uses (Law 2) to justifiable parties (Law 3) under your control and consent (Law 1). The identifier you present is called a DID: a Decentralized Identifier. No account is required.

Trust in FPI also expands from individual to community. Here is how Phil Windley explains it in Establishing First Person Digital Trust:

When Alice and Bob met at IIW, they didn’t rely on a platform to create their connection. They didn’t upload keys to a server or wait for some central authority to vouch for them. They exchanged DIDs, authenticated each other directly, and established a secure, private communication channel.

That moment wasn’t just a technical handshake—it was a statement of first-person identity. Alice told Bob, “This is who I am, on my terms.” Bob responded in kind. And when they each issued a verifiable relationship credential, they gave that relationship form: a mutual, portable, cryptographically signed artifact of trust. This is the essence of first-person identity—not something granted by an institution, but something expressed and constructed in the context of relationships. It’s identity as narrative, not authority; as connection, not classification.

And because these credentials are issued peer-to-peer, scoped to real interactions, and managed by personal agents, they resist commodification and exploitation. They are not profile pages or social graphs owned by a company to be monetized. They are artifacts of human connection, held and controlled by the people who made them. In this world, Alice and Bob aren’t just users—they’re participants.

This also expands outward into community, and webs of trust. You get personal agency plus community agency.

The FPP covers a lot more ground than identity alone, but that’s where it starts. Also, Customer Commons is a funding source for the FPP, and I’m involved there as well.

3—Visa Intelligent Commerce

The press release is Find and Buy with AI: Visa Unveils New Era of Commerce. Less blah is Enabling AI agents to buy securely and seamlessly. Here’s the opening copy.

Imagine a future where an AI agent can shop and buy for you. AI commerce — commerce powered by an AI agent — is going to transform the way consumers around the world shop.

Introducing Visa Intelligent Commerce, an initiative that will empower AI agents to deliver personalized and secure shopping experiences for consumers – at scale.

From browsing and selection to purchase and post-purchase management, this program will equip AI agents to seamlessly manage key phases of the shopping process.

Visa CEO Ryan McInerney says a lot more in a 1:22 talk at Visa Product Drop 2025. The most relevant part starts about 26 minutes in, with a demo starting at about 31:30. Please watch it. Much of what you see there owes to Inrupt and Solid, which Sir Tim Berners-Lee says were inspired by The Intention Economy. For more about where Inrupt and Solid fit in Visa Intelligent Commerce, see Standards for Agentic Commerce: Visa’s Bold Move and What It Means: Visa’s investment in safe Intelligent Commerce points to a future of standards-forward personal AI, by John Bruce, Inrupt’s CEO. John briefed Joyce and me over Zoom the other day. Very encouraging, with lots to develop on and talk about.

More links:

- A tweet appreciative of Inrupt by Visa’s @JackForestell
- Privacy for Agentic AI, by Bruce Schneier, Inrupt’s CISO (as well as the world’s leading security expert, and an old pal through Harvard’s Berkman Klein Center)
- Also from Bruce: What Magic Johnson and Bruce Schneier taught us at RSAC 2025 and RSAC 2025: The Pioneers of the Web Want to Give You Back Control of Your Data
- Visa announces AI Agent Payment APIs – and a pathway to Empowerment Tech, by Jamie Smith, who writes Customer Futures, the most VRooMy newsletter out there.

Some news being made about Visa Intelligent Commerce:

- Visa partners with AI giants to streamline online shopping
- Visa Gives AI Shopping Agents ‘Intelligent Commerce’ Superpowers
- Visa launches ‘Intelligent Commerce’ platform, letting AI agents swipe your card—safely, it says
- How major payment companies could soon let AI spend your money for you
- Visa, Mastercard offer support for AI agents
- Visa wants to give artificial intelligence ‘agents’ your credit card
- Visa adds ‘unknown concept’ where AI makes purchases for you – but shoppers suspect more ‘sinister purpose’
- Visa Unveils Intelligent Commerce to Power AI-Driven Payments

4—Personal AI

Reza Rassool was also inspired by The Intention Economy when he started Kwaai.ai, a nonprofit community developing open-source personal AI. I now serve Kwaai as its volunteer Chief Intention Officer.

Let’s look at what personal AI will do for this woman:

Looks great, but while we’re stuck in the IONBA, she has little control over her personal data in all those spaces. For example,

- She doesn’t have the digital version of what George Carlin called “a place for my stuff.” (Watch that video. It’s brilliant—and correct.)
- She has few records of where she’s been, who she’s been with and when—even though apps on her phone know that stuff and are keeping it inside the records of her giant overlords and/or selling it to parties unknown, with no way yet for getting it back for her own use.
- Her finances are possibly organized, but scattered between the folders she keeps for taxes, plus the ones that live with banks, brokers, and other entities she hardly thinks about. It would be mighty handy to have a place of her own where she could easily see all her obligations, recurring payments, subscriptions, and other stuff her counterparties would rather she not know completely.
- Her schedules are in Apple, Google, and/or Microsoft calendars, which are well app’d and searchable, but not integrated. She has no digital calendar that is independent and truly her own.
- Her business and personal relationship records are scattered across her contact apps, her LinkedIn page, and piles of notes and business cards. She has no place or way of her own to manage all of them.
- Her health care records (at least here in the U.S.) are a total mess. Some of them are inside the MyCharts and patient portals provided by separate (and mostly unconnected) health care specialists and medical systems. Some are in piles of printouts she has accumulated (if she’s kept them) from all the different providers she has seen. Some are in fitness and wellness apps, all with exclusive ways of dealing with users. None of it is in a unified and coherent form.

So the challenge for personal AI is pulling all that data out of all her accounts, and putting it into forms that give her full agency, with the help of her personal AIs. Personalized AIs from giants can’t do that. We need our own personal AIs.

And there we have it: Four roads to a world where free customers prove more valuable than captive ones. And we’re making it happen. Now.


FIDO Alliance

Techradar Pro: Millions of Brits to be impacted by UK Gov decision to move away from passwords, 2FA and the replacement is far from perfect


The UK government has said it will roll out passkey technology across its digital services later in 2025, aiming to phase out SMS-based verification in favour of a more secure, user-friendly alternative.

Passkeys are unique digital credentials tied to a user’s personal device and offer a way to authenticate identity without the need for traditional passwords or one-time text codes.

Passkeys never leave the device and so cannot be reused across websites, which makes them resistant to phishing and other common attacks.