Last Update 7:11 PM February 18, 2026 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Wednesday, 18. February 2026

OpenID

Nine countries prove OpenID Federation interoperability

Hands-on testing at TIIME 2026 confirms real world readiness of OpenID Federation across nine implementations.  

On 13 February 2026, implementers of OpenID Federation – a specification that provides a common framework for organisations to verify and trust one another at scale, without requiring bilateral agreements between every party – gathered in Amsterdam for a hands-on interoperability event.
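The trust model the specification describes works by having each entity publish a signed statement about itself at a well-known URL, with authority hints pointing toward a trust anchor. The following Python sketch shows the decoding step only: the entity names are invented, the signature segment is empty, and no signature verification is performed (a real federation resolver must verify each statement against the issuer's federation keys).

```python
import base64
import json

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def decode_entity_statement(jwt_str: str):
    """Split a federation entity statement JWT and decode header and claims.

    NOTE: no signature verification here - a real resolver must verify each
    statement against the issuer's federation keys before trusting it."""
    header_b64, payload_b64, _sig = jwt_str.split(".")
    return json.loads(b64url_decode(header_b64)), json.loads(b64url_decode(payload_b64))

# A toy self-issued entity configuration, as an RP might publish at
# https://rp.example.org/.well-known/openid-federation (signature omitted).
claims = {
    "iss": "https://rp.example.org",
    "sub": "https://rp.example.org",  # self-issued: iss == sub
    "authority_hints": ["https://trust-anchor.example.org"],  # next hop up the chain
    "metadata": {"openid_relying_party": {"client_name": "Example RP"}},
}
header = {"alg": "ES256", "typ": "entity-statement+jwt"}
toy_jwt = ".".join([b64url_encode(json.dumps(header).encode()),
                    b64url_encode(json.dumps(claims).encode()),
                    ""])  # empty signature segment for illustration

hdr, payload = decode_entity_statement(toy_jwt)
print(payload["authority_hints"])  # the superior to query next when building a trust chain
```

Building a full trust chain then means repeating this step: fetch the subject's entity configuration, follow `authority_hints` to each superior, and collect subordinate statements until a configured trust anchor is reached.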

The event was held as part of the Trust and Internet Identity Meeting Europe (TIIME) unconference — a practitioner-led forum bringing together experts across identity management, policy, privacy and trust infrastructure. 

Testing at scale 

Twelve participants representing nine implementations and nine countries came together to test their work against one another in real time. Implementers represented Croatia, Finland, Greece, Italy, Netherlands, Poland, Serbia, Sweden and the US. Nine implementations from nine countries successfully testing together is a meaningful milestone. It demonstrates that OpenID Federation is being adopted across the world, and that those implementations interoperate.

The event was organised by Niels van Dijk, Technical Product Manager for Trust and Security at SURFnet, and Davide Vaghetti, Head of Identity Federation service at GARR. Davide ran the session, assembling and managing the test federation that participants used for testing. 

The OpenID Federation Browser, created by Giuseppe De Marco of the Italian Digital Transformation Department, gave participants a visual way to navigate and understand the federation structure in real time.

Momentum beyond Amsterdam

The test federation assembled for the event has remained active since, with a number of participants continuing to test against one another in the days following the in-person session. This continued engagement shows that not only are the implementations solid, but having a shared environment to test against is clearly valuable to the community.

OpenID Foundation board member and specification editor, Mike Jones, commented: “I was thrilled that the identity community came together and organized a day dedicated to OpenID Federation at TIIME 2026, including the interop event. While the OpenID Foundation organized the previous interop event at SUNET, this time it came about because practitioners organized it themselves.” 

Building on the milestone

The interoperability event coincided with OpenID Federation 1.0 achieving Final status — adding further weight to the community’s work in Amsterdam. OpenID Foundation Executive Director Gail Hodges said: “With OpenID Federation 1.0 having now achieved Final status, the effort by this multinational group of expert implementers not only further proves out the value of the specification, but they are also pressure testing the OpenID Federation open source tests that will launch for self-certification in due course. 

“Ecosystems that are exploring what trust management specifications are right for their needs will stand to benefit from the hard work of these early innovators and adopters.”   

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, FAPI – the Financial-grade API security profile built on OAuth 2.0 for interoperable, high-security deployments – has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

To learn more about conformance testing and self-certification, please visit the OpenID Foundation’s FAQ section.

The post Nine countries prove OpenID Federation interoperability first appeared on OpenID Foundation.


ResofWorld

Creators are cashing in on a “Facebook renaissance”

And Indonesian creators are leading the charge.
Facebook’s new content monetization program is booming, and Indonesian creators are leading the charge, giving the platform a much-needed boost to compete with YouTube and TikTok. The program has grown from...

AI is giving tech companies power that once belonged to governments

AI companies wield enormous economic, political, and cultural power globally, with states reluctant to regulate them, potentially leading to risks and greater inequity.
Debates over the governance of artificial intelligence tend to assume that it will be important and transformative across many areas of human endeavor. Yet, the question of how those benefits...

China’s rare-earth dominance is hard for EV makers to escape

Western automakers are chasing rare earth-free motors but China’s cost advantage remains difficult to crack.
Global electric-vehicle makers are finding that breaking free from Chinese rare earths is easier said than done. EV makers outside China are accelerating investments in motor technologies that reduce or...

Bhutan’s crypto experiment shows how hard digital money is in the real world

Nearly a year after launching a nationwide crypto payment system for tourists, merchants say hardly anyone is using it — raising questions about who the experiment really serves.
Nine months into its big push for cryptocurrency payments, Bhutan isn’t finding many takers for its plans. Last May, Bhutan became the first country to launch a nationwide crypto payment...

Tuesday, 17. February 2026

OpenID

OpenID Federation 1.0 Final Specification Approved


The OpenID Foundation membership has approved the following OpenID Connect Working Group specification as an OpenID Final Specification:

https://openid.net/specs/openid-federation-1_0-final.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision.

The voting results were:

Approve – 85 votes
Object – 0 votes
Abstain – 20 votes

Total votes: 105 (out of 425 members = 24.7% > 20% quorum requirement)

 

Marie Jordan, OpenID Foundation Board Secretary

 

About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post OpenID Federation 1.0 Final Specification Approved first appeared on OpenID Foundation.


Public Review Period for Proposed OpenID Federation 1.1 Final Specifications


The OpenID Connect Working Group recommends approval of the following specifications as OpenID Final Specifications:

OpenID Federation 1.1
OpenID Federation for OpenID Connect 1.1

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for these specification drafts in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a fourteen-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Final Specifications.

The relevant dates are:

Final Specifications public review period: Tuesday, February 17, 2026 to Saturday, April 18, 2026 (60 days)
Final Specifications vote announcement: Monday, April 6, 2026 (14 days prior to voting)
Final Specifications voting period: Monday, April 20, 2026 to Monday, May 4, 2026 (14 days)

The OpenID Connect working group page is https://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB/Connect” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.

Marie Jordan – OpenID Foundation Board Secretary

 

About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.



The post Public Review Period for Proposed OpenID Federation 1.1 Final Specifications first appeared on OpenID Foundation.


ResofWorld

Polymarket courts Chinese users despite strict online gambling ban

The crypto-based prediction platform is hiring Mandarin-speaking staff and adding Lunar New Year bets, even as it remains blocked in China.
Cryptocurrency prediction site Polymarket is trying to tap into the Chinese market by hiring Mandarin-speaking staff and listing bets related to the Lunar New Year, even though it has little...

DIF Blog

MOSIP Hot Takes with Juan — February 17, 9:00 AM UTC


MOSIP Connect may have wrapped in Rabat, Morocco — but the conversations are just beginning.

Join Juan Caballero for a focused reflection on key insights from the event: the signals that stood out, the themes emerging across sessions, and what they mean for the broader identity ecosystem.

Beyond the formal presentations, this year’s gathering once again highlighted the power of community-driven dialogue — from structured panels to hallway discussions and small group exchanges.

This DIF Hot Takes session continues that conversation, surfacing insights for builders, policymakers, and standards contributors working across digital public infrastructure and decentralized identity.

🎥 Join live here

Monday, 16. February 2026

DIF Blog

DIF Newsletter #58


February 2026

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Group Updates
4. User Group Updates
5. Announcements
6. Community Events
7. DIF Member Spotlights
8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

2026 kicked off with lots of DIF action:

Welcome to new Associate Member Kyndryl! Kyndryl is joining with an eye on the Trusted AI Agents Working Group. Check out the blog post welcoming them to DIF.

Welcome to our new Operations Manager, Gracezel Luis! Gracezel is an experienced project manager, content marketing professional, and operations expert. The addition of Gracezel will allow DIF to continue to focus on Operational Excellence in 2026. She will be handling all membership and working group administration. Look out for a blog post next month where Grace and Gracezel interview one another in an AMA format!
“I’m glad to be joining DIF at this stage of its growth. I’ve spent the last decade working across operations, project management, and cross-functional teams, with a strong focus on clarity and follow-through. In this role, I’ll be supporting the Executive Director, Steering Committee, membership, and working groups—particularly across membership and working group administration by strengthening processes and helping ensure initiatives move forward in a steady and well-coordinated way.”
DIF Hot Takes launch, February 17, 9 am UTC: Join Bumblefudge as he gives you the lowdown on MOSIP Connect. See the DIF calendar or join here.

The Steering Committee approved two CAWG specifications: User experience guidance 1.0 and the Organizational identity profile 1.0.

Trusted AI Agents WG created a Delegated Authority Task Force, started formalizing the use cases discussed so far as a common resource across Task Forces, discussed and experimented with Threat Modeling as a cross-Task-Force exercise, began a Report on Delegated Authority, and started transitioning Vouched ID's MCP-I protocol to DIF governance within the WG.

CAWG announces two new Task Forces to explore concrete integration of attestation platforms: one specific to ACDC envelopes, and another for more generic VC/VP tooling.

DIF will be on the screening committee for the programming at TDI. Submit a proposal to the Call for Talks before the 22nd if you'd like to participate.

We can't believe we're only 6 weeks into the year and so much has already happened. More to follow in soon-to-be-released blog posts, so stay tuned!

🛠️ Working Group Updates

Browse our working groups here

Creator Assertions Working Group

This month's highlight was the approval by the Steering Committee of the two specifications: User experience guidance 1.0 and the Organizational identity profile 1.0.

CAWG announced two new Work Items: an ACDC Task Force and a more general VC/VP Task Force. Times for these meetings are being scheduled.

The Creator Assertions Working Group had several guests this month presenting collaborative projects, with two presentations about the Ayra registry: one with Drummond Reed and another with Darryl O'Donnell. Ayra provides registry services to groups that don't necessarily fall into the "legal entity" status.
During their regular working meetings, CAWG focused on reviewing and discussing several PRs related to the metadata and identity assertion specifications. The group addressed concerns about handling multiple participants in the same role and agreed to research and clarify the language for this scenario. They also discussed a new status code for network traffic validation and debated the naming of sections in the verifiable credentials specification. The conversation ended with a discussion about the need for user education materials and tools to validate compliance with the CAWG specification.

👉 Learn more and get involved

Trusted AI Agents Working Group

The TAAWG has moved to weekly half-hour meetings for the main WG and weekly meetings for the Delegated Authority Task Force.

Pull Request 30 was reviewed, finalizing documentation for the first four use cases for the TAAWG working group. The PR is still open for comments.

TAAWG launched the Delegated Authority Task Force, which will be working on an overview of existing work in the area of delegated authorization, data models, and protocols, and then performing a gap analysis to determine where TAAWG can contribute to this rapidly evolving area.

Tom Jones shared a Threat Modeling Report, which was reviewed by the working group. The proposed use case being threat modeled conceptualizes an agentic, policy-enforcing local AI model tasked with maximizing privacy and security vis-a-vis remote agents and services. Comments are welcome, specifically on the first two pages (the rest should be considered an appendix).

The Delegated Authority Task Force has made progress on a Delegation Report for submission through the working group.

Alex Keisner, owner of the Know-Your-Agent product at Vouched, introduced MCP-I as a proposed identity extension to the popular MCP server/client protocol, addressing the need for a standard identity handshake between agents and MCP servers guarding resources. MCP-I may become a separate work item/task force, and those interested will be choosing a regular meeting time to discuss DIF governance of the extension.

👉 Learn more and get involved

Hospitality and Travel Working Group

The technical H&TWG has continued its work on schemas for various common industry-wide use cases, focusing on food taxonomy and preferences, quote systems, and edge computing use cases. The WG and User Group (non-IP-protected) are continuing to find IP-safe ways to share ideas, and encourage members of the latter to join DIF if they want to participate in technical design and implementations.

👉 Learn more and get involved

DID Methods Working Group

DMWG has been working on making a number of DID methods viable candidates for becoming DIF-recommended methods. In addition to discussing what it means to become a DIF-recommended method, the group has been selecting different methods and working on incrementally improving them to submit as candidates for recommendation. Anyone with a particular DID method they want to bring to candidate-quality level should fill out the proposal and schedule time with the WG chairs to go through the requirements.

The did:webplus method was updated to reflect support for additional hash methods (SHA-256, CHAR256/CHAR512) and has completed the 60-day review period; the chairs are completing the work to make it a formal recommendation and submit the PR.

The second deep dive on did:webs, signifying the start of its 60-day review period, was last Wednesday. Since the first deep dive, the did:webs team had an update on the pull request to register the ConditionalProof2022 signature suite for JSON-LD verification purposes; more info can be found on the pull request for v9.017. did:webs prevents BOLA attacks with pre-rotation keys, and provides multisig issuance of the initial identifier and multi-threshold proofs. The full presentation can be found here.

The formal proposal to charter a W3C working group complementary to DIF's is stalled because no W3C member organization has stepped up to champion and chair the working group.

👉 Learn more and get involved

Identifiers and Discovery Working Group

The BCVH instance of the Traction Sandbox is now live; the sandbox can be used to test did:webvh implementations, and an initial workshop on using the Traction Sandbox was held. The group approved adding an optional Heartbeat parameter, which would deactivate DIDs that have had no updates for the amount of time defined in the parameter. IDWG is seeking feedback on the witness proof system and did:webvh, particularly regarding file storage and size optimization (more info here). The group discussed making domain names optional for DIDs, which would move did:webvh toward alignment with did:scid. The group is moving forward with a number of PRs and updates.

👉 Learn more and get involved

🪪 Claims & Credentials Working Group

Exploratory work on DIF "active contributor" credentialing is underway, in discussions with the First Person Project and others. Reach out to chair Otto Mora on DIF Slack if interested. Further work on the Credential Schemas is on hold, but evaluators/users of the credentials collected to date, or proposers of additional credentials, are encouraged to open an issue or reach out on DIF Slack.

👉 Learn more and get involved

Applied Crypto Working Group

The ACWG continues discussion on BBS and ZKPs and collaboration with IETF's CFRG. They are proceeding to resolve the GitHub issues regarding BBS generator standardization and provide comments, especially addressing concerns from the Privacy Pass team and other cryptographers.

👉 Learn more and get involved

DIDComm User Working Group

Pull requests under discussion

The DIDComm User Group saw demos of two main protocol developments presented by Vinay: a chess game protocol with move verification through cryptographic hashing, and a mesh networking protocol for offline communication using Bluetooth and potentially LoRa technology. Vinay demonstrated a mobile app featuring various cryptographic and verification capabilities, including peer-to-peer calls, credential issuance, and document signing, with plans to present some features to the Open Wallet Foundation. He will be proceeding to develop an FFI (Foreign Function Interface) for Cradle/CredOTS to allow interoperability with the Rust mesh protocol implementation.

👉 Learn more and get involved

🌎 DIF Special Interest Group Updates

Browse our special interest groups here


DIF Hospitality & Travel SIG

The team advanced work on a standardized travel profile schema, focusing on multilingual support and international data handling requirements. A major highlight was the January 30th session featuring presentations from SITA and Indicio, who demonstrated successful implementation of verifiable credentials in travel, including a pilot program in Aruba.

Key developments included:

Progress on JSON schema development for standardized travel profiles
Advancement of multilingual and localization capabilities
Refinement of terminology and glossary for industry standardization
Demo of successful verifiable credentials implementation in a live travel environment

👉 Learn more and get involved

DIF China SIG

The China SIG is growing into a vibrant community, with over 140 people in the discussion group. In 2024 they organized 9 online meetings and invited different DID experts for discussions, including experts from GLEIF, DIF, and TrustOverIP.

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC / ASEAN group discussed membership and how to increase participation in the region.

👉 Learn more and get involved

DIF Africa SIG

This month the DID Unconference is being hosted in South Africa with DIF as a sponsor.

👉 Learn more and get involved

📖 DIF User Group Updates
DIDComm User Group

The DIDComm User Group established additional meeting times to accommodate global participation. They worked on expanding their reach and planned engagement with Trust Spanning Protocol representatives, while also focusing on improving documentation and accessibility.

👉 Learn more and get involved

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please join DIF.

📢 Upcoming Events

Will you be attending any upcoming Identity events? Let us know so other DIF members can find you!

DID Unconference Africa 24-26 February, 2026 (South Africa)

DID:UNCONF AFRICA brings together local and international innovators, leaders, and activists to reshape the future of digital identity. This event fosters innovation, collaboration, and interoperability, making a significant impact on the inclusive development of digital identity in Africa. For the second year running, DIF will be sponsoring the event. Expect to see Steering Committee Member and CAWG Co-chair Eric Scouten in attendance. Eric and Africa SIG Chair Gideon Lobard will be giving us their Hot Takes in March. Watch the February newsletter and DIF calendar for exact time and date.

ITB Berlin, 3-5 March, 2026 (Berlin)

DIF Member Alex Bainbridge (Autoura) will be speaking about identity at the world's largest travel conference.

IETF 125 Shenzhen, 14-20 March, 2026 (Shenzhen)

Our new Executive Director, Grace Rachmany, will be attending IETF125 this year in APAC.

4th International Workshop on Trends in Digital Identity (TDI)

📅 April 20-21, 2026
📍 Verona, Italy
Learn more

Internet Identity Workshop IIWXLII #42

📅 April 28–30, 2026
📍 Mountain View, CA
Registration and details

Agentic Internet Workshop #2

📅 May 1, 2026
📍 Mountain View, CA
Learn more

Identiverse 2026

📅 June 15–18, 2026
📍 Las Vegas, NV
Conference details

Identity Week Europe 2026

📅 June 9–10, 2026
📍 Amsterdam
Event information

Call for Co-organizers: GDC 2026

The 2026 Global Digital Collaboration Conference has been announced for September 1-2, 2026, in Geneva. DIF is on the co-organizing committee.

🗓️ DIF Members

👉 Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube


Read the DIF blog

Friday, 13. February 2026

FIDO Alliance

Wired: How Passkeys Work—and How to Use Them


Passwords suck. They’re hard to remember, but worse is playing the ever-evolving game of cybersecurity whack-a-mole with your most important accounts. That’s where passkeys come into play. The so-called “war on passwords” has taken off over the past two years, with titans like Google, Microsoft, and Apple pushing for a password-less future that the FIDO Alliance (a consortium made to “help reduce the world’s over-reliance on passwords”) has been trying to realise for over a decade.

Like it or not, you’ll be prompted to create a passkey at some point, and you likely already have. That’s a good thing, as passkeys aren’t only much easier to use than a traditional password, they’re also a lot safer. Here’s everything you need to know about using them.


Mastercard: Unlock your key to a more secure checkout

Unlock your key to a more secure checkout. Use your unique payment passkey to secure your purchases.

Unlock your key to a more secure checkout. Use your unique payment passkey to secure your purchases.


ResofWorld

I let Alibaba’s AI agent plan my holiday. I ended up doing more work

For reassurance, experimentation, and low-pressure decisions, AI agents can feel helpful. For bigger choices, I still trust myself more.
Like most people, I first heard the term “AI agent” early last year, when a Chinese startup called Manus announced it had built the world’s first general-purpose one. Unlike chatbots...

DIF Blog

Case Study: Designing for a Regulated Messaging Community


Member post from LedgerDomain, by Alex Colgan and Victor Dods

When LedgerDomain began building identity infrastructure for the U.S. pharmaceutical supply chain, we understood immediately that this was an atypical decentralized identity environment. Hundreds of thousands of independent, publicly-registered organizations exchange billions of sensitive, confidential and regulated messages each year, under direct and ongoing government oversight. Any one of those messages can later become evidence in a regulatory investigation, sometimes many years after the original transaction took place.

Our primary challenge was messaging: enabling trading partners to communicate confidentially and at scale, without leaking network metadata or trade terms, while preserving cryptographic proof that every interaction was legitimately authorized at the time it occurred. That proof needed to remain verifiable even as companies changed vendors, churned employees, rotated keys and pseudonyms, migrated infrastructure, or exited the ecosystem altogether. We decided early to take an approach that paired cryptographic verifiability with standard web technologies in order to achieve a balance of security, scale, and performance. Additionally, we needed to achieve a measure of security parity with DID methods, also in use within the messaging community, that rest on a stronger cryptographic basis than did:web.

Starting from Messaging Requirements

This messaging system had to support high-volume, point-to-point communication without broadcasting metadata or relying on centralized intermediaries. Participants needed strong assurances about counterparties, and regulators needed the ability to reconstruct chains of custody long after transactions were completed. Identity therefore had to support historical verification. It wasn’t sufficient to resolve a DID to its current state; verifiers had to be able to determine which keys and policies were valid at a specific moment in time.

We adopted did:web early because it aligned with the operational realities of the ecosystem. It was web-native, easy to deploy, and compatible with existing infrastructure and organizational boundaries. It allowed historical resolution as an optional affordance, albeit somewhat trustfully. As the system matured toward production use, however, it became clear that regulated environments require stronger guarantees than static web resolution alone can provide, and better handling of portability and other corner-cases. Without a verifiable history, long-term auditability and non-repudiation were difficult to support.

Rather than change deployment models or governance assumptions, we focused on addressing this gap directly and extending the did:web model.

Designing did:webplus from Operational Reality

did:webplus emerged as an implementation of did:web that extends the original by adding a cryptographically verifiable history to each DID. Every DID maintains an append-only microledger of DID documents; parallel efforts also led to roughly analogous microledger-based methods such as did:webvh and did:oyd. A core goal of our microledger design was expressive, explicit rulesets for updates: each update is self-hashed, linked to its predecessor, and authorized according to explicit rules defined in the prior state. This enables any verifier – now or in the future – to reconstruct the exact state (and exact current update policy) of a given DID at a specific point in time with cryptographic certainty.
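The chain structure described above can be sketched as a minimal hash-linked microledger. This is an illustrative toy, not the actual did:webplus data model: field names like `self_hash` and `prev_hash` are invented for the sketch, and the real update-authorization rules (key checks against the prior state) are omitted.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash the entry with its own "self_hash" field blanked out,
    # so the resulting hash can be embedded in the entry itself.
    scrubbed = {**entry, "self_hash": ""}
    return hashlib.sha256(
        json.dumps(scrubbed, sort_keys=True).encode()
    ).hexdigest()

def append_update(chain: list, did_document: dict) -> list:
    # Each new entry links to its predecessor's self-hash,
    # forming an append-only microledger.
    prev = chain[-1]["self_hash"] if chain else None
    entry = {
        "version_id": len(chain),
        "prev_hash": prev,
        "did_document": did_document,
        "self_hash": "",
    }
    entry["self_hash"] = entry_hash(entry)
    return chain + [entry]

def verify_chain(chain: list) -> bool:
    # Any verifier can replay the history and confirm that every
    # entry is self-consistent and linked to its predecessor.
    for i, entry in enumerate(chain):
        if entry["self_hash"] != entry_hash(entry):
            return False
        expected_prev = chain[i - 1]["self_hash"] if i > 0 else None
        if entry["prev_hash"] != expected_prev:
            return False
    return True

chain = append_update([], {"id": "did:webplus:example.com:alice", "key": "k0"})
chain = append_update(chain, {"id": "did:webplus:example.com:alice", "key": "k1"})
assert verify_chain(chain)
```

A verifier that trusts the chain can then read any historical entry (e.g. `chain[0]`) to recover the DID document in force at that version; real implementations additionally check that each update was authorized under the rules declared in the prior entry.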

This design was driven by the same guarantees our messaging system already required: durability, non-repudiation, and auditability at scale. Using standard web hosting and “incremental retrieval” patterns allowed these guarantees to be achieved efficiently and predictably over HTTPS. The DID method and the messaging substrate evolved together, reinforcing one another rather than being designed in isolation.

From a security perspective, this approach delivers the same assurances commonly associated with distributed-ledger-anchored identity systems, without every query having to be routed directly to a live blockchain node or indirectly through a SaaS middleman. From an operational perspective, it remains deployable using familiar web infrastructure and governance models already accepted in regulated industries.

Grounded in a Real Community

did:webplus is now being prepared for production use within the Open Credentialing Initiative (OCI) ecosystem. Adopters already operate did:web-based systems and expect to be able to extend them with verifiable history without redesigning their workflows or architectures. That continuity is intentional. did:webplus was designed for incremental adoption, respecting the timelines and boundaries under which regulated organizations operate.

The broader governance context also shaped the design. Messaging standards in the pharmaceutical supply chain are stewarded by GS1, a long-time member of the DID working group at W3C. OCI-harmonized identity assurance processes align with NIST guidelines, making secure pseudonymity easier to operationalize. Participation is governed through a public-private partnership with the FDA. did:webplus encodes these realities on boring and compliant web rails rather than attempting to replace them.

A Pattern for Requirements-Driven DID Design

This experience reinforced a lesson that we believe is broadly relevant to the decentralized identity community. Durable identity infrastructure emerges most effectively when it is designed from concrete ecosystem requirements outward. In our case, the demands of regulated, confidential, high-volume messaging shaped the DID method itself. We see this as a useful pattern for other high-assurance environments such as regulated finance, healthcare, and cross-border trade.

Of course, the design was not done in isolation from the broader decentralized identity community. Across GitHub issues and mailing-list messages, our Chief Software Architect Victor Dods investigated how other implementers understood the under-documented `?versionTime=` parameter in the W3C DID specification, and how they had implemented it in blockchain-based systems. Victor presented an early draft of the design in the DIF Identifiers & Discovery Working Group for review, and has been grateful to receive extensive ergonomics and interoperability testing feedback from fellow DIF member Spherity in the OCI context.

LedgerDomain has also been a driving member of the DID Methods Working Group at DIF, which seeks to establish a common evaluation framework for maturity and market-readiness through structured peer review and consolidation of various pre-existing self-documentation processes. In that context, LedgerDomain has lobbied for benchmarking production deployments, challenging evaluators and evaluated projects alike to focus on practical implementation and deployment experience rather than the design process alone.

Join the conversation:

In the spirit of open source, everything you need to evaluate, test, or analyze the did:webplus method is available on GitHub. If you are interested in the design of DID methods, the Identifiers and Discovery Working Group is an open forum, which LedgerDomain availed itself of early in the process. If you are interested in productionization, benchmarking, and implementer experience, the DID Methods Working Group is steadily building up a registry of production-ready DID methods evaluated collectively and rigorously.

MyData

From AI Hype to Medical Practice: Fixing Trust and Reproducibility in Nordic Healthcare AI 

Healthcare AI is scaling faster than our ability to trust it. We need to avoid slop to continue the journey of Evidence-based medicine (EBM).  Across medicine and life sciences, we […]

ResofWorld

Chinese boxing robots win fans in San Francisco

American companies are staging boxing matches with VR-controlled Chinese humanoids to enthusiastic fans. A researcher says it is just robot theater.
The Super Bowl wasn’t the only sporting event that drew excited fans to the Bay Area last weekend. An event at a much smaller venue in San Francisco was hailed...

OpenAI accuses DeepSeek of “free-riding” on American R&D

The ChatGPT maker says the Chinese firm may be distilling its models, underscoring rising tech tensions between Washington and Beijing.
OpenAI has accused DeepSeek of malpractice in developing the next version of its artificial intelligence model — even before any official launch. “DeepSeek’s next model (whatever its form) should be...

Thursday, 12. February 2026

Origin Trail

Passport, please! AI agents are becoming first-class citizens with ERC-8004 & OriginTrail


AI agents are exploding in use across industries, but they’re roaming a digital world with no shared identity or trust framework. Today, an agent can claim “I can code” or “I can trade” (“trust me, bro, I’m an AI agent”), yet there’s no standard way to verify if any of it is true.

You wouldn’t trust strangers operating like that, and neither can AI agents truly trust each other under these conditions. This “trust gap” is a major roadblock to an open agent economy. Agents need a way to carry their identity, context, and track record with them — something akin to a passport — so they can be discovered and trusted by others at machine speed.

Giving AI agents a Digital Passport with ERC‑8004 and Decentralized Knowledge Graph

Combining the ERC‑8004 standard with the OriginTrail Decentralized Knowledge Graph (DKG) creates a powerful synergy akin to giving AI agents a digital passport from day one. ERC‑8004 establishes an agent’s on-chain identity and structure — essentially issuing a standardized passport number and “photo page” for the AI — while the OriginTrail DKG fills that passport with dynamic, verifiable context, i.e., the stamps, visas, certificates, and travel history that accumulate as the agent interacts and learns. Together, these technologies ensure each AI agent has both a trusted identity and a rich, evolving track record of its accomplishments, all secured by blockchain and cryptographic proofs.

The ERC‑8004 Ethereum standard gives every AI agent a unique on-chain identity. Each agent is issued an ERC-721 NFT as its “passport document,” providing a portable, censorship-resistant identifier on Ethereum. This identity token (the agent’s “passport number”) links to a registration file describing the agent’s core info — for example, its capabilities, endpoints (how to communicate with it), and even aspects of its “social graph” or affiliations. In other words, ERC‑8004 standardizes how an AI agent presents itself, ensuring that anyone, anywhere, can verify who the agent is and what skills it claims to have. Just as a real passport is issued by a trusted authority, the ERC‑8004 identity is anchored on Ethereum, making it globally verifiable and hard to forge. This on-chain identity layer also includes built-in trust anchors: ERC‑8004 defines reputation and validation registries that record an agent’s on-chain feedback and certifications, functioning like official seals or endorsements on a passport.
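As a rough sketch of this pattern (the field names below are invented for illustration and are not the normative ERC-8004 schema), an off-chain registration file can be bound to an on-chain record by committing to its digest:

```python
import hashlib
import json

# Hypothetical registration file of the kind an agent's identity
# token might point to; the real ERC-8004 schema differs.
registration = {
    "agentId": 42,  # token id of the identity NFT
    "name": "example-trading-agent",
    "capabilities": ["trade", "report"],
    "endpoints": {"a2a": "https://agent.example/api"},
}

def registration_digest(record: dict) -> str:
    # Deterministic serialization so the digest is reproducible;
    # this digest is what an on-chain record could commit to.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def claims_capability(record: dict, capability: str) -> bool:
    # Discovery-time check: does the agent claim this skill?
    return capability in record.get("capabilities", [])

onchain_digest = registration_digest(registration)  # stored at registration

# A verifier re-fetches the file and checks it against the commitment.
assert registration_digest(registration) == onchain_digest
assert claims_capability(registration, "trade")
assert not claims_capability(registration, "code")
```

Note that this only proves the file is the one the identity committed to; whether the claimed skills are actually true is what the reputation and validation registries are meant to address.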

Thanks to ERC-8004, AI agents now have a basic passport — a way to present who they are and what they’ve done in a standard, verifiable format. An agent that wants to be hired for a job can show their ERC-8004 credentials: “Here’s my ID and resume, here are my reviews, and here are proofs of my capabilities.” In fact, the standard explicitly frames the identity NFT as the agent’s passport.

However, like a freshly issued real-world passport, this is just the beginning. The passport, by itself (an NFT plus a static JSON file), is necessary but not sufficient for rich trust. It tells you the basics, but imagine if we could stuff that passport with far more context — every stamp, visa, reference letter, and credential an agent earns over time, in a way that’s trusted and queryable. This is where OriginTrail Decentralized Knowledge Graph comes in, turning the passport into something much more powerful.

Decentralized Knowledge Graph: Turning the passport into a living context graph

OriginTrail Decentralized Knowledge Graph (DKG) steps in to supercharge ERC-8004’s static records, effectively transforming an agent’s passport into a living, verifiable context graph. Think of ERC-8004 as issuing the agent a blank passport and a basic ID card; the DKG is what brings that passport to life with data, continuously updated with verified stamps and stories of the agent’s journey. In OriginTrail’s own words, the DKG serves as a “constantly evolving digital passport for agents,” essentially an agent-specific context graph that grows over time with each interaction.

How does it work?

The DKG is a decentralized network designed to store and publish structured knowledge (using semantic web standards) with verifiable provenance. In the DKG, information is not just dumped in JSON files or logs — it’s represented as a knowledge graph: a web of facts and relationships that machines can easily query and trust. Each data point in the graph is accompanied by cryptographic proof (such as a fingerprint anchored on-chain) that guarantees its integrity. And just like ERC-8004’s identity, each “thing” in the DKG is ownable via an NFT. In fact, the core unit of the DKG is called a Knowledge Asset, which is essentially an NFT + knowledge graph bundled together. You can represent anything as a Knowledge Asset — an AI agent, a dataset, a certificate — and give it a verifiable, evolving record on the graph.
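The anchoring idea can be sketched with a simple Merkle root over an agent's assertions. This is a simplification: the DKG's actual proof structure, serialization, and RDF handling differ, and the triples here are toy strings.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    # Pairwise-hash leaf hashes up to a single root; the root is
    # what would be anchored on-chain as the integrity proof.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Triples describing an agent, serialized as simple strings here.
triples = [
    b"agent:alice a schema:TradingBot",
    b"agent:alice completed task:42",
]
root = merkle_root(triples)

# Tampering with any assertion changes the root, so it no longer
# matches the anchored value.
assert merkle_root(triples) == root
assert merkle_root([triples[0], b"agent:alice completed task:99"]) != root
```

The on-chain footprint stays constant (one root) no matter how many assertions the graph accumulates, which is what makes anchoring a growing context graph economical.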

So, let’s map an AI agent to a DKG Knowledge Asset. The agent’s ERC-8004 NFT can double as a DKG asset identifier (the DKG uses a concept called a Uniform Asset Locator, which extends DIDs, often implemented by an NFT token). That covers the identity/ownership part. Now attach the agent’s knowledge: Instead of a single JSON file with a few fields, we can have an entire graph of data describing the agent.

This graph might include:

- Agent profile & attributes: The same basics from the JSON (name, description, endpoints) but in a semantic format (RDF triples) so they’re machine-readable and linkable. For example, an agent could be linked to a category (“TradingBot”) or a skill ontology, enabling more precise discovery.
- Decision traces & activity logs: Every significant action the agent takes could be logged as an assertion in its knowledge graph. Did the agent complete a task? You can add a node for that event, linked to the date it occurred and its outcome. Over time, this creates a timeline of verifiable events — a history far richer than a single aggregate reputation score. These are the “stamps” in the passport, each one independently verifiable via its on-chain fingerprint. If someone questions why an agent made a decision, they could inspect its DKG log (with appropriate permissions) to trace the reasoning or data that led to it. Essentially, the agent builds up a memory in the graph that can be audited. In her thesis, Jaya Gupta of Foundation Capital explicitly highlights AI agents’ decision-making processes and the importance of capturing decision traces to understand why decisions were made; these traces then become part of evolving context graphs. For context graphs to become the real source of truth, the DKG plays an essential role.
- Verifiable credentials & references: The DKG can integrate W3C Verifiable Credentials (VCs) and decentralized identifiers. Suppose a trusted organization certifies an agent (e.g., “This trading bot passed a rigorous test” or “This agent is compliant with X regulation”); that credential can be added to the agent’s knowledge graph as a signed assertion. OriginTrail DKG is built to support standards such as VCs and DIDs, ensuring these credentials are stored in an interoperable format. It’s like adding visas or reference letters to the passport — e.g., “Certified by Authority Y” — which anyone can cryptographically verify.
- Semantic relationships: Knowledge graphs excel at capturing relationships between entities. An agent’s context isn’t just about the agent in isolation; it’s also about how it connects to others. With DKG, we can link the agent to other agents it has worked with, to datasets it frequently uses, or to domains of expertise. For example, if Agent A has collaborated with Agent B on a project, their knowledge graphs can reference each other (Agent A’s passport might say “worked with Agent B on Supply Chain Optimization, see project P”). These semantic links enrich discoverability — one could query the graph for “agents who have worked on supply chain tasks with verified outcomes” and find Agent A because of those relationships. OriginTrail’s design enables Knowledge Assets to connect with other assets, creating a world model of relationships.
- Provenance and data anchoring: Perhaps most importantly, every fact or credential added to the agent’s context graph comes with provable provenance. The DKG uses cryptographic proofs (Merkle roots of the graph data) anchored on-chain to ensure that the knowledge hasn’t been tampered with. If the agent’s passport states “Completed 50 successful deliveries,” the raw data backing that (the 50 delivery events) each have a hash on the chain that can be verified. This is analogous to a passport office stamping and sealing each visa — it can’t be faked without detection. The OriginTrail network’s nodes replicate and store these assertions, especially the public ones, so the data is always available and secure in a decentralized way. No single party can forge or hide the agent’s records. The result is a trustworthy, tamper-evident ledger of an agent’s life that complements the on-chain registries.

OriginTrail DKG represents an agent’s profile as a Knowledge Asset, combining on-chain identity with off-chain knowledge.
The diagram illustrates how an AI agent’s “passport” gains a chip: it contains semantic graph data (RDF) and vector embeddings for AI context, anchored by cryptographic on-chain proofs, all tied to a unique NFT identifier. This makes the agent’s profile a dynamic, queryable knowledge graph rather than a static file.

Conclusion

In summary, integrating OriginTrail DKG with ERC-8004 gives each agent a “smart passport”: not just an ID document, but an entire personal knowledge graph that is securely stored, constantly updated, and universally queryable. The passport isn’t just carried by the agent — it lives on the decentralized network, where anyone (or any other agent) can validate its stamps and even learn from its contents (with permission). This dramatically amplifies trust: an agent’s identity isn’t a static entry in a registry; it’s the center of a web of trust data that grows richer over time.

The journey is just starting. ERC-8004 has effectively set the rules for issuing and stamping agent passports. OriginTrail DKG offers a global registry and database where those passports are maintained and enriched over time. As this integration matures, we could see the emergence of a true Web3 agent commons—a space where AI agents from any project or company can work together trustlessly, discover one another through shared context, and carry their reputation beyond any single platform.

In the long run, this passport and knowledge graph approach may become an essential component of AI infrastructure, much like human identity standards. It lays the foundation for an interoperable, trustworthy agent economy.

Passport, please! AI agents are becoming first-class citizens with ERC-8004 & OriginTrail was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


FIDO Alliance

Integrating FIDO Standards into Secure OT Connectivity — A Practical Path to Resilience

Operational Technology (OT) environments — from industrial control systems to critical infrastructure networks — have traditionally prioritized safety and availability. The newly published Secure Connectivity Principles for Operational Technology (OT) […]

Operational Technology (OT) environments — from industrial control systems to critical infrastructure networks — have traditionally prioritized safety and availability. The newly published Secure Connectivity Principles for Operational Technology (OT) guidance produced by the UK National Cyber Security Centre (NCSC) in partnership with agencies from Australia, Canada, US, Germany, Netherlands, and New Zealand underscores how evolving connectivity demands require a modern security posture that does not compromise operational integrity while facing an expanding threat landscape. 

At the FIDO Alliance, our mission has always been to champion open, scalable, and trusted identity and authentication standards that are simple to use. Today those same principles, originally forged to eliminate the weak link of shared secrets on the web, are directly applicable to securing OT connectivity and distributed device environments.

Below I’ll outline how FIDO phishing-resistant authentication (passkeys), FIDO Device Onboard (FDO) and emerging work in Bare Metal Onboarding (BMO) support these secure connectivity principles, enabling organizations to achieve strong authentication, trusted connectivity, secure supply chains and secure update of software at scale.

Phishing-Resistant Authentication Is Now Table Stakes for OT

The OT guidance emphasizes strong authentication at network boundaries, remote access points, and management planes. This is exactly the problem FIDO set out to solve with passkeys. Passkeys replace passwords and shared secrets with device-bound cryptographic credentials that are phishing-resistant, replay-resistant, and built on open standards.

For OT operators, engineers, and vendors accessing jump hosts, DMZ gateways, or privileged access workstations, this removes the most common root cause of breaches: stolen credentials. That simple shift from shared secrets to cryptography dramatically reduces risk at OT boundaries.
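To make the contrast concrete, here is a toy origin-bound challenge signature (schoolbook RSA with small, insecure parameters, purely illustrative; real passkeys use WebAuthn/FIDO2 with proper cryptography and hardware-backed keys). The point it demonstrates is that the signature binds the origin, so a response phished through a look-alike site fails verification and leaks no reusable secret:

```python
import hashlib

# Toy RSA keypair (small known primes; insecure, illustration only).
p, q = 104729, 1299709
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def digest(challenge: str, origin: str) -> int:
    # The signed message covers both the server's challenge and the
    # origin the client actually connected to.
    msg = f"{challenge}|{origin}".encode()
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(challenge: str, origin: str) -> int:
    # Client side: the private key never leaves the authenticator.
    return pow(digest(challenge, origin), d, n)

def verify(sig: int, challenge: str, expected_origin: str) -> bool:
    # Relying party checks the signature against the origin it expects.
    return pow(sig, e, n) == digest(challenge, expected_origin)

# Legitimate login: signed for the real origin, verifies.
sig = sign("nonce-123", "https://ops.example")
assert verify(sig, "nonce-123", "https://ops.example")

# Phished login: the client saw a look-alike origin, so the relying
# party's check fails.
phished = sign("nonce-123", "https://0ps.example")
assert not verify(phished, "nonce-123", "https://ops.example")
```

Unlike a password typed into a phishing page, there is nothing in the captured response an attacker can replay against the real origin, which is why this property matters so much at OT boundaries.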

Practically speaking, this enables organizations to:

- Enforce phishing-resistant MFA for all remote/vendor access
- Secure privileged admin workflows
- Reduce helpdesk overhead from tokens/password resets
- Strengthen auditability and attribution of actions

This aligns directly with the guidance’s goals of minimizing exposure and hardening connectivity with modern, standardized controls.

Securing Vendor and Remote Access Without Increasing Complexity

OT environments frequently require third-party maintenance and specialized engineering support. Historically, that has meant VPN accounts, shared credentials, or brittle remote access solutions. The guidance recommends organizations move to centralized, controlled connectivity and brokered access patterns. FIDO authentication fits naturally into the recommended control framework:

- FIDO authentication-secured jump hosts, remote workstations, and more
- Privileged access gateways
- Just-in-time access provisioning
- Device-verified operator identity

This approach delivers both least privilege and strong non-repudiation — two capabilities that are increasingly important for regulated industries. Most importantly, it does so without adding friction for operators, which is critical in environments where uptime and usability are non-negotiable.

Establishing Trust in Devices with FIDO Device Onboard (FDO)

Users aren’t the only identities that matter in OT. Devices — gateways, sensors, controllers, and edge systems — must also prove they are trusted before joining operational networks. This is where FIDO Device Onboard (FDO) comes in. FDO provides:

- Zero-touch onboarding
- Cryptographic device attestation
- Secure ownership transfer
- Encrypted provisioning channels
- “Late binding” to the correct management platform at deployment time

Rather than shipping devices with default passwords or manual configuration steps, FDO allows them to securely authenticate and receive credentials automatically. For OT environments, this:

- Eliminates weak factory credentials
- Reduces field provisioning errors
- Supports standardized onboarding across diverse hardware
- Strengthens supply-chain assurance

In other words, devices join the network only after cryptographically proving who they are. This satisfies a foundational requirement for segmentation and isolation strategies described in the guidance, delivering value today for industrial IoT, gateways, and modern edge infrastructure.

But secure onboarding is only the first step.

Bare Metal Onboarding and Lifecycle Resilience

One of the most important, and often overlooked, requirements in the OT guidance is the need to keep systems securely updated and maintain a known-good state over time. This has historically been difficult in OT. Devices may be deployed in remote locations, managed by non-IT personnel, or running outdated software because rebuilding them is complex and risky.

This is exactly the challenge that FIDO Bare Metal Onboarding (BMO) addresses. Building on FDO’s trusted foundation, BMO extends late binding beyond ownership to the entire software stack:

- Operating system
- Applications
- Configuration
- Credentials

With BMO, a device can be powered on with no preinstalled OS and securely receive:

- Authorized OS images
- Approved software packages
- Policy-defined configurations
- Verified updates

All cryptographically validated and delivered through the same attested, encrypted control plane established by FDO. 

In doing so, BMO unlocks several capabilities that are particularly powerful for OT operators:

- Zero-touch secure deployment: Devices can be installed by non-technical personnel and automatically provision themselves safely.
- Secure rebuilds and recovery: If compromise or corruption is suspected, systems can be wiped and reinstalled to a known-good state.
- Reliable patching and upgrades: Organizations can keep software current (a key expectation in the UK guidance) without manual intervention.
- Standardization across vendors: A consistent, open, interoperable approach replaces fragmented proprietary tooling.

In short, BMO transforms onboarding into lifecycle assurance. Where FDO answers “Can I trust this device?”, BMO answers “Can I trust exactly what is running on it, not just today but after every update?”

That’s a critical step forward for OT resilience.

[For more information on BMO, check out this webinar]

A Clear Roadmap to go from Principles to Practice

Organizations aligning with the OT secure connectivity principles can take concrete action today, while preparing for what’s next:

Now:

- Require phishing-resistant FIDO passkeys for all OT remote and privileged access
- Standardize FIDO authentication at gateways and management interfaces
- Adopt FDO for zero-touch, secure onboarding of new edge and industrial devices

2026 and beyond:

- Incorporate FIDO Bare Metal Onboarding into procurement requirements
- Enable secure OS/app provisioning and automated rebuilds
- Maintain known-good state and rapid recovery across distributed OT estates

Identity as the Foundation of OT Security

The OT threat landscape has changed permanently. Connectivity is no longer optional, and security can’t rely on isolation alone. The future is identity-first: verifiable users, verifiable devices, and verifiable software state. FIDO standards provide open, scalable building blocks for all three, turning the guidance principles into something actionable:

Passkeys secure the people. FDO secures the devices. BMO secures the software lifecycle.

FIDO technologies already deliver meaningful protection today. And with Bare Metal Onboarding, they will enable an even more resilient, zero-touch, secure-by-design OT ecosystem in the years ahead.


ResofWorld

How cheap Chinese phones catapulted Kenya into the global digital economy

In his new book “Silicon Elsewhere: Nairobi, Global China, and the Promise of Techno-Capital,” writer Andrea Pollio charts the growth of Chinese investment and companies in the Kenyan capital.
When I first embarked on the project to write about the encounters between Chinese digital capital and Nairobi’s booming innovation scene, I imagined myself rubbing shoulders with software developers, venture...

Blockchain Commons

Musings of a Trust Architect: Progress toward a State-Endorsed Identity (SEDI) in Utah


Most of my identity advocacy work in the United States has been in Wyoming. They’ve been very open to the goals of self-sovereignty and as a result we’ve passed laws such as private key protection and we’ve defined digital identity as being controlled by an individual’s principal authority.

So it’s great to see another jurisdiction, which I have been less directly involved with, progressing on their own vision of self-sovereignty. That’s the whole purpose of advocacy: to seed the ideas so that they spread.

I’m talking about Utah, whose State-Endorsed Digital Identity (SEDI) has been moving in a great direction for a while. The newest bill introduced for it, S.B. 275, the “State-Endorsed Digital Identity Program Amendments”, does the heavy lifting of establishing SEDI and solidifies it as a privacy-first, decentralized design.

A Bill of Identity Rights

The first thing that caught my eye in the SEDI update was a new Bill of Rights. It immediately presents the digital-identity user not as a digital serf, but as someone who can claim privileges.

The lead right was surprising:

(1) “An individual possesses an individual identity innate to the individual’s existence and independent of the state, which identity is fundamental and inalienable.”

That’s pretty close to my first principle of self-sovereign identity, existence:

Existence. Users must have an independent existence. Any self-sovereign identity is ultimately based on the ineffable “I” that’s at the heart of identity. It can never exist wholly in digital form. This must be the kernel of self that is upheld and supported. A self-sovereign identity simply makes public and accessible some limited aspects of the “I” that already exists.

It was great to see an understanding that any digital identity is founded in a real person. That lays the foundation for its importance in the digital world.

There was tons more in the bill of rights that was amazing.

This is pretty close to self-sovereignty:

(2) An individual has a right to the management and control of the individual’s digital identity to protect individual privacy.

This requires transparent architecture:

(7) An individual has a right to transparency in the design and operation of a state digital identity, including the right to access, read, and review the standards and technical specifications upon which the state digital identity is built and operates.

This is a little wobbly (because of the “except as authorized by law”), but is a strike against the surveillance state:

(10) An individual has a right to be free from surveillance, profiling, tracking, or persistent monitoring of the individual’s assertions of digital identity by the state, except as authorized by law.

But this may be my favorite:

(8) An individual has the right to choose what identity attributes are disclosed by the individual’s state digital identity in accordance with standards established by the Legislature.

This is potentially full empowerment of selective disclosure, depending on what the standards are. I wrote recently that one of the big failures of the SSI community is the fact that they stepped away from a holder being able to determine what they can redact in their identity. That a state legislature may beat them to the punch is shocking.
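The redaction mechanics this alludes to can be sketched with salted attribute commitments, an SD-JWT-style pattern (simplified and hypothetical here; the actual technical standards are whatever the Legislature establishes, as the bill says):

```python
import hashlib
import secrets

def commit(attrs: dict):
    # Issuer salts and hashes each attribute; the verifier-facing
    # credential contains only the digests, not the values.
    salts = {k: secrets.token_hex(16) for k in attrs}
    digests = {
        k: hashlib.sha256(f"{salts[k]}|{v}".encode()).hexdigest()
        for k, v in attrs.items()
    }
    return digests, salts

def disclose(attrs: dict, salts: dict, keep: list) -> dict:
    # The holder chooses which attributes to reveal; the rest stay redacted.
    return {k: (attrs[k], salts[k]) for k in keep}

def check(digests: dict, disclosed: dict) -> bool:
    # The verifier recomputes each revealed digest; redacted fields
    # remain hidden but the credential is still bound to them.
    return all(
        hashlib.sha256(f"{salt}|{value}".encode()).hexdigest() == digests[k]
        for k, (value, salt) in disclosed.items()
    )

attrs = {"name": "Alex", "over_21": "true", "address": "123 Main St"}
digests, salts = commit(attrs)
shown = disclose(attrs, salts, keep=["over_21"])  # redact name, address
assert check(digests, shown)
assert "address" not in shown
```

The salt prevents a verifier from brute-forcing redacted values by hashing guesses, which is exactly the holder-controlled redaction property the bill's standards would need to preserve.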

I think a digital-identity bill of rights is a great thing. It’s what I was thinking about when I put together the original principle of self-sovereign identity. I’m now revisiting those principles for the SSI 10th anniversary, and this looks like another great source to consider.

The Duty of Loyalty

There’s a ton to like in the bill, including anti-correlation, selective disclosure, minimal disclosure, and a variety of requirements for digital-wallet providers, verifiers, and relying parties that all tend to protect the holder of the identity. It’s clear that someone was involved who really knew what they were doing and also understood the importance of a user controlling their own identity.

But the other provision that I thought was of particular note was the “Duty of Loyalty”:

63A-20-701. Duty of loyalty. The department, a digital wallet provider, a verifier, a relying party, and a digital guardian shall refrain from practices or activities related to the processing of an individual’s identity attributes that:

(1) conflict with the best interests of an individual;

(2) take advantage of or otherwise exploit an individual;

(3) result in a disproportionate risk to an individual;

(4) are to an individual’s detriment; or

(5) cause harm to an individual.

This is a critical right, tying into the Principal Authority work that I did with the Wyoming Blockchain Select Committee. It similarly evokes agency law to say that when other entities are using your digital identity, they can only do so to support your best interests. Compare that to the modern-day ecosystem of surveillance capitalism and extraction and the difference is obvious. Many modern-day digital services are built on allowing you to create an identity (on Facebook, on Google, whatever) and then mercilessly extracting from that, stealing your attention, your creativity, and everything else.

There are obviously questions with how this will be managed. For one, I can’t see how “related to the processing of an individual’s identity attributes” will be interpreted. Obviously, it’ll protect you from hi-jinks on the part of your verifier (which is a huge win) but it’s unclear whether it’ll provide any protection for someone who is enabling you to interact online with your identity (e.g., Facebook).

The other question is whether entities can coerce users into giving up this right, which is a common modern-day tactic (cf. “clickwrap”). From the research I’ve done so far (IANAL), it looks like these aren’t rights that could be signed away in a contract: they represent minimum statutory requirements, and as long as carve-outs aren’t created in the law, this Duty of Loyalty will be protected.

That’s what we need to fight against here. We need to “beware platforms bearing gifts”: we have to watch for Googles and Facebooks using regulatory capture to ask for carve-outs in this law that would strip these rights from us and hand them to the platforms by making them optional. And that’s unfortunately a pretty big task in the modern world.

Make a Difference

I haven’t analyzed every line in S.B. 275. I wouldn’t be surprised if I find some things I don’t agree with as I explore it further. But in the big picture, this is a big win for self-sovereignty and for user agency and autonomy in digital identity. Adding it to Wyoming’s work creates another model for how digital identity that maintains human dignity could spread across the United States.

If you want to help in this effort:

If you’re in Utah, call up your state representatives and tell them that you support the bill. Maybe even express concerns about regulatory capture.
If you’re in another state, call up your state representatives and tell them of your interest in self-sovereign identity, offering Utah SB275, Wyoming SF39, and maybe Wyoming HB86 as model legislation.
If you want to support our advocacy, become a GitHub sponsor or talk to me directly about supporting advocacy at a larger scale.

The work going on in Utah is great. But it’s just a start in supporting our digital rights!

Wednesday, 11. February 2026

OpenID

Public Review Period for Proposed Implementer’s Draft of International Government Assurance (iGov) Profile for OAuth 2.0

The OpenID iGov Working Group recommends the following OpenID Final Specification:

International Government Assurance (iGov) profile for OAuth 2.0:
https://openid.net/specs/openid-igov-oauth2-1_0-09.html
https://openid.net/specs/openid-igov-oauth2-1_0-09.xml
https://openid.net/specs/openid-igov-oauth2-1_0-09.txt

 

This would be the first Implementer’s Draft of this specification.

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Implementer’s Draft. For the convenience of members who have completed their reviews by then, voting will actually begin a week before the start of the official voting period.

The relevant dates are:

Implementer’s Draft public review period: Wednesday, February 11, 2026 to Saturday, March 28, 2026 (45 days)
Implementer’s Draft vote announcement: Sunday, March 22, 2026
Implementer’s Draft official voting period: Sunday, March 29, 2026 to Sunday, April 12, 2026

The iGov working group page:  https://openid.net/wg/igov/.

Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group, (2) joining the working group mailing list at openid-specs-igov@lists.openid.net, and (3) sending your feedback to the list. 

 

Marie Jordan – OpenID Foundation Board Secretary

 

About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.



The post Public Review Period for Proposed Implementer’s Draft of International Government Assurance (iGov) Profile for OAuth 2.0 first appeared on OpenID Foundation.


ResofWorld

Can India be a “third way” AI alternative to the U.S. and China?

The AI Impact Summit in India, the first in a developing country, proposes an option that focuses on public good and development. But can it deliver without capitulating to Big Tech?
India is the first among developing countries to host the AI Impact Summit. Official messaging emphasizes the summit as an opportunity to “give voice to the Global South” and democratize...

Google backs African push to reclaim AI language data

A new 21-language data set gives African institutions ownership and control in a field long dominated by Big Tech.
If you speak to an artificial-intelligence bot in an African language, it will most likely not understand you. If it does manage to muster a response, it will be rife...

Next Level Supply Chain Podcast with GS1

Taking Checkout to the Next Level with 2D Barcodes

The Universal Product Code (UPC) has powered retail for 50 years, but it was never designed to handle the data demands of today's complex supply chains and consumer expectations. In this episode, Reid Jackson and Liz Sertl sit down with Ned Mears, Senior Director of Global Standards at GS1 US, to explore how 2D barcodes will revolutionize the retail landscape, starting with Sunrise 2027. Ned explains how 2D barcodes go beyond simple price lookups, enabling enhanced traceability, connected packaging, and compliance with future regulations. He also offers practical advice on barcode placement, point-of-sale readiness, and why delaying action increases risk.

In this episode, you'll learn:

How 2D barcodes change what's possible at checkout and across the supply chain

What brands and retailers need to do now to prepare for Sunrise 2027

How early planning reduces risk, cost, and operational disruption

Things to listen for:
(00:00) Introducing Next Level Supply Chain
(04:01) How 2D barcodes differ from traditional UPCs
(07:41) Measuring industry progress toward Sunrise 2027
(12:46) How brands can prepare to implement 2D barcodes
(19:22) What retailers need to assess to be ready for 2D at the point of sale
(21:26) The risks of waiting until 2027 before preparing for 2D barcodes
(27:00) Ned’s favorite tech

Connect with GS1 US:
Our website - www.gs1us.org
GS1 US on LinkedIn
Register now for this year’s GS1 Connect and get an early bird discount of 10% when you register by March 31 at connect.gs1us.org.

Connect with the guest: Ned Mears on LinkedIn


Blockchain Commons

Musings of a Trust Architect: How XIDs Demonstrate a True Self-Sovereign Identity

It’s been more than ten years since I founded the Rebooting the Web of Trust workshop with the goal of reimagining the peer-to-peer web of trust first popularized by Pretty Good Privacy (PGP). The idea of decentralized identity was at the heart of the workshop from the start, but work on it accelerated following the first workshop when I wrote “The Path to Self-Sovereign Identity” to provide a foundation for the continuing discussion of the topic. It was then at the second workshop that we really rolled up our sleeves and began developing what would eventually become W3C’s DID standard.

So I’ve been there from the start. I have a solid basis in our original intent for self-sovereign identity (SSI), and I know where we went from there. I’m the co-editor of the Amira Engagement Model, which demonstrated many of our desires for decentralized identity, and also a co-author of the W3C DID standard and of BTCR, the first DID method.

Unfortunately, in the ten years since I wrote “The Path to Self-Sovereign Identity,” I feel that SSI, as implemented, has become indistinguishable from the very systems we set out to disrupt. Worse, it has legitimized architectural choices that align more closely with centralized identity systems, such as the mDL/mDoc-style solutions now appearing in emerging EU digital wallet deployments.

I first wrote about these concerns in “Has Our SSI Ecosystem Become Morally Bankrupt?”. In short, SSI was meant to be private, decentralized identity. You were meant to be able to decide what you revealed about your identity, and you were meant to not be beholden to any outside control or gatekeepers.

But compromise after compromise ate away at those ideals. Early discussions of DID- and VC-based identity treated issuer, holder, and verifier as equal roles that any participant might play. When the W3C Verifiable Credentials standard was ratified, that symmetry was narrowed to a “three-party ecosystem,” and when the DID standard followed, even that framing disappeared. That’s what made DID issuers rarefied powers within the DID ecosystem rather than your peers; it’s why DID wallets are largely incapable of acting as issuers themselves.

To be honest, I saw the potential for these problems while the DID specification was under review for Recommendation, but as an Invited Expert at the W3C, I approved the draft without raising a formal objection. That’s because I didn’t have an alternative at the time. Since I didn’t have that alternative, I didn’t feel it was right to say that DIDs as proposed were wrong. There was certainly the hope that they would develop in the right way, that even with the compromises in the standard, the deployers of mass-market DIDs would stick with our goals of decentralization and privacy.

But they didn’t.

As a result, we have self-sovereign identity in name only. Though a true decentralized identity could be built within the DID spec, that’s not how the ecosystem has evolved. Instead, more often than not, a small, de-facto-centralized set of issuers structurally controls what you can do with your DID. They limit what you can redact and require phone-home behavior that not only keeps you tied to them, but also negates your privacy.

It’s now been almost four years since the ratification of DID v1.0. While the working group has been drafting DID v1.1, I’ve been focused more on doing my own work to create a technology stack that better exemplifies what we first started back in May 2016, in the shadow of the United Nations and the inaugural ID2020 summit: a truly decentralized and self-sovereign identity.

I call it the XID or eXtensible IDentifier. XIDs deliberately trade ecosystem convenience and institutional alignment for architectural clarity, holder control, and long-term privacy. They’re not intended to replace every use of DIDs or VCs, but to demonstrate a holder-centric identifier model that can coexist with, wrap, or inform future decentralized identity systems. They’re an exemplar of what is possible.

A lot of my work on XIDs goes back to the Amira Engagement Model. At Rebooting the Web of Trust 5, in Boston, we imagined a programmer with a politically sensitive background who wanted to contribute her skills to social causes, but was afraid of repercussions for herself and her family. We then laid out a self-sovereign pseudonymous identity system that would allow her to do so by protecting her real identity while allowing her pseudonymous identity to gain reputation over time, all under her tight control. Amira was my North Star when I began work on XIDs: I wanted to create a new decentralized identifier that supported her use case in a way that the evolving DID ecosystem did not.

Here’s the quick overview of the architecture:

XIDs can be created by anyone. They function as holder-controlled identifiers that can carry signed assertions and credentials without requiring issuer-controlled disclosure or online resolution. They are autonomous cryptographic objects that are self-contained and do not inherently phone home; network interaction occurs only if a viewer explicitly chooses to dereference a URL or other correlatable pointer, rather than using one of the more secure alternative resolution methods available. Most importantly, the holder of a XID can themselves choose to redact any of the information it contains, without invalidating any signatures that have been made to authenticate the XID or any credentials it might include.

Blockchain Commons has developed a body of work around XIDs, including specifications, working code, and supporting materials. We’ve also used XIDs within Blockchain Commons for experimental identity workflows and as a testbed for privacy-preserving identity research.

If that all sounds intriguing, please jump to our developer page, which links all of our specifications, code, CLI apps, and other material. If you want to evaluate XIDs more concretely, start instead with the Quickstart tutorial and concepts to see how these ideas work in practice. It’s the fastest way to begin working with XIDs (though note that only the first two tutorials have been fully released at this point).

But if you want more details about why XIDs might be of interest to you, I’ve written a number of discussions that I think address why a variety of people might be interested in XIDs or why they might be reluctant to use them. See if one of them addresses your situation. (And if none of them do, let me know your concern or issue, and I’ll be happy to add another discussion.)

Concerns that XIDs address:

“I have self-sovereignty concerns”
“I have privacy concerns”

Issues you might want addressed before you adopt XIDs:

“I already use DIDs”
“I need a new technology to be novel”
“I don’t need another standard”

After you’ve read any sections that interest you, you should jump to the conclusion.

“I have self-sovereignty concerns”; or
“I already use DIDs”

I’ve already written about the general issue with DIDs: that the standard allows developers to sidestep decentralization. Most specifically, the biggest problem I have with DIDs is that holder control never happened. That’s the prime thing that XIDs are meant to rectify.

Now “holder control” is a pretty bloodless term. But, it’s the heart of self-sovereignty, so when I say that DIDs don’t have holder control, I mean that they don’t have self-sovereignty either: you don’t control your identity (which was the whole point of SSI!).

A lack of personal control pervades the SSI ecosystem, starting with the fact that you typically can’t issue DIDs or VCs yourself. However, limitations on disclosure may be the most problematic. When an issuer produces a DID full of VC credentials and assertions, the issuer ultimately decides how disclosure of that information is managed. They might say that their VCs are all-or-nothing: that you have to disclose each one in its entirety. Or they might allow you to redact parts of the information, but only specific elements that they allow. So, if you have an identifier that contains your name, age, and address, they might allow you to redact your age or your address when you publish the identifier, but say that your name always has to be there! If the word “allow” in there pisses you off, it should. They decide, not you.

XIDs flip all of this. They allow a decentralized identifier to be created and filled with signed assertions and other credentials. The holder then decides what’s shown and what’s not, to the specific granularity they want. This allows one person to have multiple identity contexts and an infinity of faces. These facades show different parts of a XID to different people, but it’s still all one XID: one identifier that the holder truly controls.
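To make the facade idea concrete, here’s a minimal sketch in Python. It is a toy model under stated assumptions: the field names and salts are hypothetical, and real XIDs use Gordian Envelope structures rather than flat dictionaries. The point is only that hidden assertions are replaced by salted commitments, so every facade still points to the same underlying identifier:

```python
import hashlib

def leaf_hash(key: str, value: str, salt: str) -> str:
    # Commit to one assertion; the per-field salt keeps hidden values unguessable.
    return hashlib.sha256(f"{salt}:{key}:{value}".encode()).hexdigest()

def make_facade(assertions: dict, salts: dict, reveal: set) -> dict:
    # Show chosen fields in the clear; replace the rest with their commitments.
    return {k: v if k in reveal else leaf_hash(k, v, salts[k])
            for k, v in assertions.items()}

assertions = {"name": "Amira", "skill": "Rust", "city": "Boston"}
salts = {k: f"salt-{k}" for k in assertions}   # use random salts in practice

work_facade = make_facade(assertions, salts, reveal={"skill"})
social_facade = make_facade(assertions, salts, reveal={"name", "city"})
```

Each facade reveals a different subset, but the hashed entries commit to the same underlying values, so both facades verifiably belong to one XID under the holder’s control.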

So why would you, as a DID adopter or someone concerned about self-sovereignty, choose XIDs? Because they hand control back to the holder of the identifier, as was always intended for self-sovereign identity.

“I have privacy concerns”

Great! XIDs are definitely the thing for you then!

The problem with DIDs as they’ve evolved is that they don’t prevent correlation. In fact, they often make it worse.

This is because your DID can be packed with correlatable information, possibly even including credentials and other assertions. It’s a big honeypot of data that’s been helpfully collected together, and that makes it easy to link it up to other things you’ve done online or even in the physical world, creating an even bigger honeypot of data to define you.

The solution to this problem is redaction: removing some of the information from the DID so that only the minimum necessary is shown (minimal disclosure). But DIDs usually leave the decision about what can be redacted to the issuer, and ultimately an issuer will only allow redaction if it runs parallel to its business interests. So, overly large batches of information are often sent out with DIDs, which makes it easier to collect them all together, and that in turn makes it easier to create even bigger honeypots by correlation with other information. That’s the opposite of protecting your privacy.

XIDs are designed under the fundamental assumption that issuers, verifiers, networks, and infrastructure providers may all act adversarially with respect to correlation and control. As a result, they support radical elision. The holder has the ability at any time to redact information down to the atomic level: any singular datum, or even any atomic part of that datum (e.g., a subject, a predicate or an object) can be redacted. Because privacy is a central concern for me, I’ve also introduced additional technologies to make this radical elision more powerful: salted hashes can disguise redacted elements that might be easy to guess, while Garner and Hubert are new transport technologies that make it even harder to correlate by providing support for hidden and offline use of XIDs. Though the current implementation doesn’t support BBS and other anti-signature correlation zk-proofs, the architecture is designed with them in mind. We already support quantum-resistant signatures and encryption.
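As a small illustration of why the salting matters, consider a redacted birthdate. This is a hedged sketch using plain SHA-256 as a stand-in for the actual Envelope constructs: an unsalted hash of a guessable value falls to simple enumeration, while a salted one does not.

```python
import hashlib
from datetime import date, timedelta

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

birthdate = "1990-06-15"
bare = h(birthdate.encode())                          # unsalted commitment
salted = h(b"0123456789abcdef" + birthdate.encode())  # salted commitment

def brute_force(target: str):
    # Enumerate every plausible birthdate (~26,000 guesses: trivial work).
    day = date(1940, 1, 1)
    while day <= date(2010, 12, 31):
        if h(day.isoformat().encode()) == target:
            return day.isoformat()
        day += timedelta(days=1)
    return None

recovered = brute_force(bare)    # the unsalted hash falls to a dictionary attack
hidden = brute_force(salted)     # the same attack fails against the salted hash
```

With a random salt per element, a redacted field reveals nothing even when its plaintext comes from a small, enumerable space.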

I should note that privacy is always a bit of a hard sell. You’re presumably reading this section because privacy is important to you, but if you’re on the fence, I suggest a new lens for looking at privacy: coercion-resistance. One of the big problems with loss of privacy is that it can lead to you being coerced. You might be forced to withdraw your political opinion online due to death threats if your online identifier was correlated with your real-life identity; or you might be forced to hand over your Bitcoins if your Bitcoin addresses became correlated with your home address or the home addresses of your loved ones. Protecting privacy protects you against coercion, and that’s a whole additional level of self-sovereignty: it’s literally sovereignty over yourself.

So why would you, as someone concerned about privacy, choose XIDs? Because they fight against the correlation that is the biggest threat to the privacy of not just your information but your actual self.

“I need a new technology to be novel.”

It is entirely fair to say that a new technology needs to bring something new to the table, especially if it runs straight against an existing technology (or in this case an existing standard!).

XIDs do.

Their novel elements include:

XIDs support deterministic encoding. Because XIDs build on my dCBOR and Envelope technologies, the data in XIDs is encoded deterministically: given the same input, it will be serialized in the same way across platforms and implementations. This enables reliable comparison, hashing, and signing and ensures the stability and reproducibility of the encoded form of the data across various platform ecosystems and implementations of code. This approach was explicitly abandoned in the IETF’s JWT ecosystem and was only partially reintroduced in JSON-LD through complex, and often layer-violating, mechanisms.

XIDs support radical elision. This deterministic encoding enables any element of data in an XID to be redacted or encrypted (what we call elided), down to the atomic level of individual entities within its system of semantic triples. This makes minimal disclosure practical by allowing you to provide differently redacted or encrypted versions of the same XID to different parties, preserving privacy by design. By contrast, redaction support in other technologies has been limited or optional to date (the IETF, for example, treats it as optional in SD-JWT), and this level of fine-grained elision represents a substantial step beyond that.

XIDs support progressive trust. Progressive trust is the ability to slowly expand what you reveal about yourself to other people over time. It’s how identity works in real life, but it hasn’t been the way digital identity systems work, because they’re often built on all-or-nothing data disclosures: a binary “trust or not trust” model. Progressive trust was built into XIDs from the start: it’s the only technology where gradients of trust are fundamental to the architecture.

XIDs are supported by radically private communication methods. Blockchain Commons has so far released two radically private communication methods that are closely linked to XIDs. Hubert supports communication through decentralized storage services such as BitTorrent and IPFS. Garner links XIDs to Tor communication. These are both radical new ways to communicate privately, without the need for centralization. We’ve built proof-of-concepts for each to show how they can be easily applied.

So why would you, who wants to see innovation, adopt XIDs? Because they’re indeed innovative. Deterministic encoding, radical elision, progressive trust, and the support of radically private communication are some of the most progressive ideas contained within.
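The first two of those elements can be sketched together. This is a simplified model, not the real stack: it uses canonical JSON where dCBOR would actually be used, and an HMAC where a real signature scheme would be. It shows how deterministic encoding plus a Merkle-style digest lets a holder elide fields without breaking the issuer’s signature:

```python
import hashlib, hmac, json

def canon(obj) -> bytes:
    # Deterministic stand-in for dCBOR: sorted keys and fixed separators mean
    # the same data always serializes to the same bytes on every platform.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def leaf(key, value) -> str:
    return hashlib.sha256(canon([key, value])).hexdigest()

def root(leaf_hashes) -> str:
    # Order-independent digest over the set of leaf commitments.
    return hashlib.sha256("".join(sorted(leaf_hashes)).encode()).hexdigest()

def sign(key: bytes, root_hash: str) -> str:
    # HMAC as a stand-in for a real signature scheme.
    return hmac.new(key, root_hash.encode(), hashlib.sha256).hexdigest()

data = {"name": "Amira", "skill": "Rust", "city": "Boston"}
leaves = {k: leaf(k, v) for k, v in data.items()}
signature = sign(b"issuer-key", root(leaves.values()))

# The holder elides "name" and "city", sending only their leaf hashes.
disclosed = {"skill": "Rust"}
elided = [leaves["name"], leaves["city"]]

# The verifier re-hashes the disclosed values, merges in the elided hashes,
# and checks the signature over the recombined root.
recombined = [leaf(k, v) for k, v in disclosed.items()] + elided
valid = sign(b"issuer-key", root(recombined)) == signature
```

In the real architecture the leaves would also be salted and the signature made with a proper key pair; the point is that elision changes what is shown, not what was signed.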

“I don’t need another standard”

I hear you! You’re already committed to a standard, or you say the standards process is too slow so you don’t want to fight through the introduction of something new.

XIDs do not have to be a new standard because they’re a functional proof of concept. Certainly, I invite you to adopt XIDs if they fit your needs. They’re robust, they’re well supported, and we’ll continue to support them in the future. But that’s not the only way to see the advancement of the technological goals such as minimal disclosure and coercion resistance that are exemplified in XIDs.

Even if XIDs themselves are never widely adopted, they would be a success if their holder-centric capabilities become unavoidable requirements in future identity standards. In other words, XIDs demonstrate what future iterations of DID or mDoc could evolve to become, but only if we are willing to escape the shackles of legacy architectures. If we are willing to do something major, then standards bodies can look toward XIDs for what is possible.

Ultimately, if the only way to ensure self-sovereignty and to protect privacy is to push for wider adoption of XIDs, I’ll do that, but I’d be even happier to see major standards pick up the core features of XIDs, which are also the ideals of our original self-sovereign identity movement that have been left by the wayside.

So why would you, who doesn’t need another standard, adopt XIDs? Because they can be a stepping stone. They’re a proof of concept meant to push their ideals into wider discussion and ultimately wider adoption. For now, adoption of XIDs as an alternative moves us along that path.

Conclusion

I am pushing XIDs because there are historic stakes. Maintaining the self-sovereignty of identity is important because centralized identity repositories can be horribly misused. DIDs were supposed to accomplish that, but they’ve compromised by failing to unequivocally hold the line on the core values of decentralization.

As a result, we need to point out their flaws, offer alternative technologies that can do what they fail to, and use that as the foundation of a new revolution in self-sovereign identity. XIDs (along with several related Blockchain Commons technologies) are my Declaration of Independence from the current DID standard. I invite you to join that declaration so that we can together argue for the true self-sovereignty and true privacy that are missing from today’s DIDs.

Here’s what you can do to support the revolution:

Join the Gordian Developer Community to talk about these technologies.
Join the SSI 10th Anniversary Community to talk about the future of self-sovereign identity.
Read more about XIDs on our developer pages.
Try out the XID Tutorial on GitHub and submit issues for any questions you have.

Tuesday, 10. February 2026

Hyperledger Foundation

The 2026 LFDT Mentorship Program is officially open!

Maintainers and active contributors: We invite you to propose a mentorship project!

The mentorship program is a structured opportunity to get additional help and resources for your project, guide and groom new talent into active contributors and future maintainers and to grow and hone your own teaching and leadership capabilities.


DIF Blog

DIF Welcomes Kyndryl as Associate Member to Advance Secure, Decentralized AI Ecosystems

February 10, 2026 — The Decentralized Identity Foundation (DIF) is pleased to announce that Kyndryl, the world’s largest IT infrastructure services provider, has joined DIF as an Associate Member.  Active participation in the DIF working groups underscores Kyndryl’s commitment to shaping the future of secure, interoperable, and decentralized digital ecosystems.

As enterprises embrace Agentic AI, identity and verifiable trust have become critical enablers for autonomous agents to operate securely and collaboratively. By joining DIF, Kyndryl will help accelerate the development of standards and infrastructure that make this vision possible.

“As our Trusted AI Agents and Creator Assertions Working Groups gain momentum, leaders such as Kyndryl are essential to the development of standards that serve the industry. DIF is designed to bring together organizations who have shared interests but different perspectives. Kyndryl’s expertise in enterprise infrastructure and commitment to decentralized identity mean that the standards coming out of DIF are deployable in large-scale implementations,” said Grace Rachmany, Executive Director of DIF. 
Why This Matters:

Advancing Trust in Agentic AI: Kyndryl’s Agentic AI Framework envisions a future where autonomous AI agents interact seamlessly across open, cross-domain ecosystems. Through DIF’s Trusted AI Agent Working Group, Kyndryl will contribute to defining standards that enable agents to establish strong trust. The Working Group addresses authentication, authorization, and delegation, allowing industry players to collaborate safely and reliably.

Leadership in Decentralized AI Governance: By aligning with DIF and supporting DIF’s lightweight process for developing and releasing international standards, Kyndryl positions itself at the forefront of defining ethical, secure, and scalable AI agent interactions, without being bogged down in bureaucratic processes.

Empowering Enterprise Innovation: This collaboration is coauthoring the playbook for delivering next-generation identity and trust frameworks for enterprises, unlocking new automation and collaboration opportunities.

Sachio Iwamoto, Director and Principal Architect at Kyndryl, writes:

“As enterprises accelerate toward AI‑native operating models, they need a trusted foundation for secure autonomy. By advancing interoperable identity standards, runtime guardrails, and end‑to‑end governance, we enable explainable, policy‑aligned multi‑agent collaboration, allowing organizations to innovate confidently, scale faster, and meet regulatory expectations.”

For more information about DIF and its working groups,
visit identity.foundation

Learn More

If you would like to get in touch with us or become a member of the DIF community, please visit our website.

Can't get enough of DIF?
| Follow us on Twitter
| Join us on GitHub
| Subscribe on YouTube
| Read our DIF blog
| Read the archives


ResofWorld

Immigrant women founders face double hurdle in Silicon Valley

Persistent racism and sexism in the venture capital industry are denying immigrant women founders an opportunity to participate in the AI boom.
Pitching her AI-powered insurance-claims product to venture capitalists in the Bay Area, Sri Ramaswamy expected questions about the technology, and her growth plan. Instead, she was asked: “Who else is...

Monday, 09. February 2026

Oasis Open

Cisco Donates Project CodeGuard to Coalition for Secure AI

Framework Strengthens Secure-by-Default Practices in AI Coding Workflows

Boston, MA – 9 February 2026 – OASIS Open, the global open source and standards consortium, announced that Cisco has donated Project CodeGuard, an AI model-agnostic security coding agent skills framework and ruleset, to the Coalition for Secure AI (CoSAI), an OASIS Open Project. The framework embeds security best practices directly into AI-assisted software development, helping to prevent vulnerabilities introduced by AI coding agents and generating more secure code automatically. 

Addressing AI Coding Security Risks

As AI coding agents rapidly transform software engineering, the speed and efficiency they provide can inadvertently introduce security risks, including skipped input validation, hardcoded secrets, weak cryptography, unsafe functions, and missing authentication or authorization checks. 

Project CodeGuard addresses these challenges across the full development lifecycle: guiding design before code is written, preventing vulnerabilities during code generation, and supporting AI-assisted code review afterward.

“Project CodeGuard represents Cisco’s commitment to advancing security at the scale and speed of AI,” said Anthony Grieco, Chief Security & Trust Officer, Cisco. “While this is a major step forward, we are just getting started. By contributing this framework to CoSAI’s open ecosystem, together, we are building security into AI coding from the start. Making these practices freely available will elevate security across the industry and protect the software that powers our collective world.”

For more details on the donation and technical capabilities, read more in the blog post, “Cisco’s Donation of Project CodeGuard to CoSAI: A New Chapter in Securing AI-Generated Code.”

“Project CodeGuard exemplifies CoSAI’s vision of bringing together industry expertise to solve real-world AI security challenges,” said David LaBianca, CoSAI Co-Chair, Google. “This framework empowers developers with the tools they need to create secure code. Through our open collaboration model, we’ll work with the community to expand these capabilities and drive adoption across the industry, advancing our shared mission of making AI systems more secure and trustworthy.”

Comprehensive Security Coverage

Project CodeGuard provides multi-layered security coverage across several domains: cryptography, input validation, authentication, authorization, access control, supply chain security, cloud and platform security, and data protection. This approach ensures that security considerations are woven throughout the development process.

The framework integrates seamlessly with AI assistants including Cursor, GitHub Copilot, Windsurf, Claude Code, and others, using a unified markdown format that translates easily to integrated development environment (IDE)-specific formats.

Development Through Special Interest Group

The ongoing development and extension of Project CodeGuard will be conducted through a dedicated Special Interest Group (SIG) within CoSAI’s AI Security Risk Governance Workstream. The collaborative structure will enable technical contributors, researchers, and organizations to work together on expanding the framework’s capabilities and driving its adoption across the AI development community.

Get Involved

CoSAI brings together more than 40 industry partners to advance secure AI, share guidance for deployment, and collaborate on AI security research and tools. Its Premier Sponsors, including EY, Google, IBM, Meta, Microsoft, NVIDIA, PayPal, Snyk, Trend Micro, and Zscaler, are leading the way in advancing secure AI initiatives. Technical contributors, researchers, and organizations are welcome to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide.

CoSAI operates under OASIS Open, an international standards and open-source consortium. www.coalitionforsecureai.org

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries: communications@oasis-open.org

The post Cisco Donates Project CodeGuard to Coalition for Secure AI appeared first on OASIS Open.


FIDO Alliance

Enterprise IT News: Why APAC can lead the world in FIDO and passkey adoption


Asia-Pacific (APAC) is one of the most-attacked regions globally — accounting for 34 per cent of incidents in 2024, with valid-account abuse as the leading entry vector, according to the IBM X-Force 2025 Threat Intelligence Index — making strong identity protection a business imperative. Across business process outsourcing (BPO) operations, manufacturing floors, healthcare environments, and both SMEs and large enterprises, workers rely heavily on continuous access to applications and sensitive digital data, meaning the digital identity of every employee has effectively become the new perimeter.


ID Tech: Better Identity Coalition Circulates Draft Voluntary Code of Conduct for Verifiable Credentials


The Better Identity Coalition has circulated a draft voluntary code of conduct it describes as “rules of the road” for how organizations request and use data from verifiable digital credentials. The effort offers an early framework for limiting overly broad or invasive data requests as verifiable credentials move closer to real-world deployment.


Biometric Update: Passkeys offer potential solution to increased deepfake attacks on financial services


Among sectors vulnerable to AI-assisted fraud attacks, the financial industry is perhaps the ripest. With high-stakes remote transactions occurring at scale, increasingly involving AI agents, there are countless attack surfaces, and potentially massive payoffs.

At the FIDO Alliance’s Identity Policy Forum, a panel led by the Better Identity Coalition unpacks a paper it drafted with the American Bankers Association within the Financial Services Sector Coordinating Commission (FSSCC), focusing on the threat of generative AI to the financial services digital identity system.


Biometric Update: Calling Utah: SEDI offers template for fast-tracking digital identity schemes


A presentation from Chief Privacy Officer for the State of Utah Christopher Bramwell at the FIDO Identity Policy Forum looks at how the state’s unique culture has influenced its leadership on digital identity in the U.S., in the form of its State Endorsed Digital Identity (SEDI) initiative.

Sunday, 08. February 2026

Velocity Network

Webinar Recording: Accelerate Your Healthcare Workforce with Verifiable Credentials

The post Webinar Recording: Accelerate Your Healthcare Workforce with Verifiable Credentials appeared first on Velocity.

Thursday, 05. February 2026

FIDO Alliance

Biometric Update: FIDO’s Andrew Shikiar predicts the triumph of wallets in 2026


Passkey champions to develop certification profile as focus turns to digital credentials

At the annual Identity & Policy Forum, it’s a tradition for Andrew Shikiar, CEO of the FIDO Alliance, to reflect on his predictions from the previous year and offer predictions for the coming one. 2025 was a pivotal year for FIDO: passkeys – FIDO’s raison d’être in recent years – finally became a mainstream authentication method, marking a long-term win for the Alliance.

In his keynote, FIDO Alliance CEO Andrew Shikiar estimates over 4 billion passkeys are now being used to secure sign-ins around the world. “That’s a massive number considering we introduced passkeys in 2022.”

Shikiar’s speech runs through his record on predictions he made at the beginning of 2025, and comes out looking pretty clairvoyant. Major banks have deployed passkeys. “I stood here last year and said 2025 will be the year of passkeys and banking,” Shikiar says. “I was kind of eating my socks on that until around Q4, when all of a sudden basically every major bank in the U.S. launched passkeys for sign-up.”


Hyperledger Foundation

Building Forward, Together


As I reflect on last week’s LF Decentralized Trust Member Summit and Maintainer Days, I want to extend a sincere thank you to every member and maintainer who showed up, participated, and contributed. The conversations, collaboration, and momentum over our days together were a powerful reminder of what makes this community so special. This was a week shaped by members who have been building with us for years and strengthened by those who are newer to the community and already leaning in.


FIDO Alliance

Meta Engineering: No Display? No Problem: Cross-Device Passkey Authentication for XR Devices

Meta shares a novel approach to enabling cross-device passkey authentication for devices with inaccessible displays (like XR devices).

We’re sharing a novel approach to enabling cross-device passkey authentication for devices with inaccessible displays (like XR devices). Our approach bypasses the use of QR codes and enables cross-device authentication without the need for an on-device display, while still complying with all trust and proximity requirements. This approach builds on work done by the FIDO Alliance and we hope it will open the door to bring secure, passwordless authentication to a whole new ecosystem of devices and platforms.

Wednesday, 04. February 2026

Hyperledger Foundation

A Deep Dive into Besu Milestones: 2025 Highlights and 2026 Goals

Introduction

Looking back to the beginning of the year, we had ambitious goals for Besu, and we packed a lot into 2025 - including shipping 3 hardforks (2 named hardforks plus BPO1), 15 releases, 811 commits - oh and Ethereum’s 10-Year Anniversary. This post will unpack some of Besu’s 2025 highlights.

Tuesday, 03. February 2026

Digital ID for Canadians

Case Study In Success – Treefort Technologies

Treefort Technologies Prevents Millions in Title Fraud While Making Identity Verification a 5-Minute Task for Canadian Lawyers 

Download this Case Study In Success PDF

Fast Facts

Organization: Treefort Technologies
Sector: Legal Technology / Identity Verification
Headquarters: Edmonton, Alberta
DIACC Certification: PCTF Verified Person (January 2025)
Key Achievement: Detected tens of millions of dollars in potential title fraud

The Challenge

Title fraud losses in Canada have exceeded $100 million over three years, with one major insurer reporting a 300%+ increase in fraud claims year-over-year. Sophisticated criminals now create fake identification documents in under 30 seconds using generative AI and digital injection attacks, including forgeries that are visually impossible to detect.

For Canadian lawyers, the stakes are enormous. They must verify client identities under FINTRAC regulations and Law Society rules, yet regulators cannot endorse specific technologies. Legal professionals faced an impossible choice: trust verification tools without independent validation, or expose their practices and their clients to increasingly sophisticated fraud.

The Solution

Treefort Technologies, founded by commercial lending lawyer Jay Krushell, built Canada’s most comprehensive identity verification platform specifically for legal professionals. What sets Treefort apart is its layered, multi-factor approach: rather than relying on a single verification method, the platform triangulates data across 1,750+ authentication data points from trusted sources.

Pursuing PCTF certification through DIACC gave Treefort independent, third-party validation that its platform meets rigorous standards for privacy, security, and interoperability. The certification process aligned with ISO/IEC standards and involved meticulous evaluation by DIACC-accredited auditors.

As a leader in the industry, Jay Krushell also co-leads the DIACC Adoption Committee and drives the development of the PCTF Legal Professionals Profile, a new set of auditable criteria that specifically addresses how digital identity verification technologies comply with Law Society client ID rules. This collaborative work ensures lawyers have objective confirmation when selecting verification technology.

The Results

Fraud Prevention: Detected tens of millions of dollars in potential title fraud
Speed: Clients complete verification in 5 minutes using their smartphone
Certifications: PCTF Verified Person certification (one of only two IDV providers) plus SOC 2 Type II compliance
Coverage: Compliant with Law Society Client ID Rules in every province in Canada
Strategic Partnership: Stewart Title Canada acquired a majority stake (2021), validating market position
Integrations: Available through BC Land Title and Survey Authority (LTSA), Appara, Closer, Prolegis, Quintlink and other leading platforms

“Securing digital identities is our passion, and Treefort’s industry leadership and our PCTF certification are a testament to our unwavering commitment to excellence. In today’s world, where the authenticity of information is routinely challenged, our track record and our certification provide unparalleled confidence.”

— Jay Krushell, Chief Legal Officer & Co-Founder, Treefort Technologies

Future Outlook

Treefort continues expanding its platform capabilities, including banking verification features and enhanced fraud risk indicators. As Canada moves toward Open Banking, Treefort is positioned to integrate new data sources that further strengthen identity verification. The company remains committed to working with DIACC on evolving trust framework standards that protect Canadians while enabling innovation.

Ready to Protect Your Practice

Legal professionals across Canada trust Treefort to verify client identities and prevent fraud. With PCTF certification validating their commitment to security and compliance, Treefort offers the confidence you need in today’s high-risk environment.

Visit treeforttech.com to see how 5-minute verification can transform your practice while protecting your clients and reputation.


Oasis Open

Meta Joins Coalition for Secure AI as Premier Sponsor to Advance Industry Security Standards 


Collaboration Supports Commitment to Secure, Trustworthy, and Responsible AI Systems

Boston, MA – 3 February 2026 – The Coalition for Secure AI (CoSAI), an OASIS Open Project advancing AI security, announced that Meta has joined the coalition as a Premier Sponsor. With extensive experience developing and deploying AI technologies and a longstanding commitment to open research and collaboration, Meta brings invaluable expertise to CoSAI’s mission of establishing industry-wide AI security standards and best practices. 

The addition of a global AI leader further strengthens CoSAI’s rapidly expanding community of more than 40 partner organizations working together to address emerging security challenges and build secure and trustworthy AI systems. 

“We welcome Meta as a CoSAI Premier Sponsor. Their commitment reflects what the industry increasingly recognizes: AI security is best addressed through open collaboration,” said Omar Santos, CoSAI Project Governing Board (PGB) co-chair, Cisco. “Meta’s deep knowledge in AI development and deployment will strengthen our mission to create open, actionable frameworks that help organizations build secure AI systems.”

“Meta is deeply committed to advancing AI security through collaboration with industry partners to benefit the entire AI ecosystem,” said Scott Bratsman, Meta’s Senior Director of Product Management for Security. “By joining CoSAI, we’re proud to help develop practical security standards to ensure AI systems remain secure at scale.”

Meta joins CoSAI’s distinguished group of Premier Sponsors, including EY, Google, IBM, Microsoft, NVIDIA, PayPal, Snyk, Trend Micro, and Zscaler. Together, these organizations are united in accelerating the development of secure and responsible AI.

Get Involved

CoSAI welcomes technical contributors, researchers, and organizations to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide. CoSAI operates under OASIS Open, an international standards and open-source consortium. www.coalitionforsecureai.org

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries: communications@oasis-open.org

The post Meta Joins Coalition for Secure AI as Premier Sponsor to Advance Industry Security Standards  appeared first on OASIS Open.

Friday, 30. January 2026

Internet Safety Labs (Me2B)

TikTok’s Real Privacy Risks


In 2025, in light of the US TikTok ban, ISL conducted an investigation into TikTok and the inherent privacy and safety risks in the app. In light of the recent announcement rehoming certain TikTok assets into the mostly (but not entirely) US-backed TikTok USDS Joint Venture LLC, we offer this updated analysis of the overall privacy risks for US citizens who use TikTok.

Understanding the programmatic privacy risks related to TikTok is more complex than just analyzing the TikTok iOS and Android apps. “TikTok” is more than a couple social media apps; it’s a portfolio of products owned and developed by a complicated network of US, Singapore, Chinese, and other entities. TikTok and its related entities have 66 mobile apps available on app stores worldwide and more than 16 on US app stores, including TV app stores (Google, Amazon, Samsung, and LG).

Additionally, our latest research shows that nearly 48,000 mobile apps share data with TikTok via TikTok’s published Software Development Kits (SDKs)[1]. Little has been discussed about the other TikTok apps, the 48,000 app developers’ duties, or SDKs in general, vis-à-vis the Protecting Americans from Foreign Adversary Controlled Application Act.

The case of TikTok also exposes the confusion of complicated multinational corporate structures and issues of parent company access to digital assets. In this case, TikTok maintained several US companies such as ByteDance Inc., TikTok Byte Dance LLC, and TikTok Inc., all registered in California. But ByteDance Ltd., headquartered in China and registered in the Cayman Islands, is understood to be the ultimate parent of all the organizations[2]. According to CrunchBase, ByteDance Ltd. has 58 companies[3].

Now that TikTok’s corporate ownership is “safely” in the hands of majority US-based owners, we revisit the reality of TikTok privacy risks under ownership of the new TikTok USDS Joint Venture LLC. We find very little has changed, and privacy risk has increased. Most strikingly, ByteDance Ltd. retains 19.9% ownership in the new venture. The new ownership also includes a UAE equity firm (15%) among mostly US-based others, two of which were (and remain?) stakeholders in ByteDance Ltd. Oracle remains the cloud backend, as it already was. Perhaps the most substantive change is that the US government has achieved its own kind of “golden share” of unfettered access to TikTok user data. In short, it might be the worst of all possible worlds.

Safety Risks of Social Media Platforms

Social media platforms pose several safety risks to users such as privacy risks, misinformation and disinformation risks, and risk of technology addiction. Privacy risk is multifaceted, with concerns about user data privacy (data sharing) as well as the platform’s AI-based translation of personal profiles into [ever smaller] micro-segments used for targeting, recommendations, and other predictive and decision-making functions. The latter behavior isn’t always recognized as a privacy risk, but it is perhaps the larger safety risk: pigeonholing people into undisclosed and unchangeable sensitive categories as designated and used (and shared?) secretly by the system. This risk is compounded by the fact that platform providers fail to disclose the extent to which the system can be manipulated either internally or by third parties to achieve political ends.

What is independently measurable when it comes to privacy risks? Data privacy with particular attention to data flow is readily measurable by independent auditors. Without access to the source code and architecture documents, it is extremely difficult to measure the inner logic of a machine learning system based on observable app and system behaviors.

What follows in this paper is a deep dive into measurable data privacy risks in TikTok apps as observed by the ISL team, and an assessment of the overall safety risks.

The original privacy concern unique to TikTok was that the Chinese government might have a 1% “golden share” in its parent company, ByteDance Ltd., thus enabling access to US TikTok user data. But in 2023, TikTok claimed that the Chinese government does not have access to that data and cannot compel ByteDance Inc. (a US company) to share it. It also denied that any user data was stored in China:

“Myth: Under its 2017 National Intelligence law, the Chinese government can compel ByteDance to share American TikTok user data.

Fact: TikTok Inc., which offers the TikTok app in the United States, is incorporated in California and Delaware, and is subject to U.S. laws and regulations governing privacy and data security. Under Project Texas, all protected U.S. data will be stored exclusively in the U.S. and under the control of the U.S.-led security team. This eliminates the concern that some have shared that TikTok US user data could be subject to Chinese law.

Myth: TikTok stores U.S. user data in China, where multiple Chinese nationals, including possible members of the CCP, have access to it.

Fact: As of June 2022, 100% of U.S. traffic is routed to Oracle and USDS infrastructure in the United States, and today all access to that environment is managed exclusively by TikTok U.S. Data Security, a team led by Americans, in America. We have begun the process of deleting historic protected user data in non-Oracle servers; once that process is complete, it will effectively end all access to protected U.S. user data outside of TikTok USDS except under limited circumstances.”[4]

In our privacy risk assessments of companies, ISL assumes that parent companies do indeed have access to the data of their child companies—just like they have access to other assets of child companies. Thus, parent company ByteDance Ltd. may have access to ByteDance, Inc. [the parent company of TikTok Inc.] and thus TikTok user data. But this remains an open—and larger—question regarding whether parent companies are entitled to access (read? monetize?) the data of the platforms that they own in whole or in part.

The US government, however, also has a record of attempting to gain access to personal data repositories through requests to corporations[5], through data brokers[6], and of course, through direct collection as performed by the NSA’s Prism program, surfaced by Edward Snowden in 2013[7].  The second Trump administration has exhibited an even more overt thirst for amassing and aggregating personal information[8]  and weaponization[9] of it, including Attorney General Pam Bondi’s most recent demand for Minnesota’s voter rolls and welfare data as a condition for removing ICE from the state.[10]

Perhaps an even greater risk of TikTok and other social media platforms is the ungoverned proliferation of misinformation and disinformation that is deployed at scale, which can influence elections and threaten democracy; witness the social media platform X’s role in the 2024 US election.[11] An analysis of the impacts of misinformation and disinformation is beyond the scope of this paper but is mentioned due to its importance.

Privacy Risks in Social Media Platforms

Measurable Privacy Risks: How social media platforms collect and share personal information

When ISL assesses the overall privacy risk of an app, we use the traditional impact * likelihood calculation. Impact is based on the sensitivity of the information that an app collects. Likelihood is based on the amount of personal information sharing and monetization performed by the app. It’s important to have a baseline understanding of the ways that social media platforms collect and share personal data.
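The impact * likelihood calculation can be sketched in a few lines. The 1–5 scales and the example values below are illustrative assumptions for clarity, not ISL's actual scoring rubric.

```python
# Minimal sketch of an impact * likelihood risk score.
# The 1-5 scales and example ratings are illustrative assumptions,
# not ISL's actual methodology.

def risk_score(impact: int, likelihood: int) -> int:
    """Overall risk = impact * likelihood, each rated on a 1-5 scale."""
    for name, value in (("impact", impact), ("likelihood", likelihood)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5, got {value}")
    return impact * likelihood

# Example: an app collecting precise location (high impact) that also
# shares data widely with third parties (high likelihood of exposure).
score = risk_score(impact=5, likelihood=4)
print(score)  # 20 out of a maximum 25
```

Multiplying rather than adding the two factors means an app scoring high on both dimensions stands out sharply from apps that are risky on only one.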

Data Collection

Social media platforms collect data in several ways—note that these methods are used by a wide variety of apps and platforms; these are not unique to social media platforms:

From first party apps: volunteered by the user while using the app, and observed user behaviors recorded by the app.
Through the integration of Software Development Kits (SDKs) provided by the social media platform into third party apps.
Through the social media platform’s tracking pixels incorporated on many third-party websites.

Data Collection by First Party Apps

Social media platforms collect extensive personal information by design and therefore build vast longitudinal records of so-called “personally identifiable information” (PII) about people and their social relationships, including family. TikTok (like most social media platforms) collects a great deal of sensitive information including location, unique identifiers for the device and the person, access to camera and photo library, access to microphone, browser history, and more.

The most sensitive information shared in social media platforms is personal location data, which can be obtained by the platform via multiple methods: directly from the device, volunteered by users through tagged locations, or communicated textually. Precise geolocation is also embedded in the metadata of photographs and videos unless it is disabled or removed.[12]

Some of the most sensitive information shared through social media platforms is photographs, and video and audio recordings. The information gleaned from photographs and videos conveys far more personal information than people may realize. Beyond the metadata automatically captured with photos and videos, information in the visual scene can include sufficient detail to indicate location and identify other people. TV shows playing in the background can convey preferences, as can books and magazines. Background photographs convey additional personal relationships and information. AI image recognition tools can automatically catalog all this information, allowing social media platforms to store it in ever deepening knowledge graphs.

But this isn’t the greatest risk: audio and video recording and photographs capture biometric data such as voice recordings. These files are increasingly risky also due to AI-forged voice and video recordings, and photographs.

Given that TikTok is a video sharing platform, one should assume that all video content and its metadata are being analyzed, catalogued, and datafied. Note that this isn’t unique to TikTok; all social media platforms are likely to be doing this.

We were interested to see how TikTok compared to Facebook when it comes to data collection. We used the iOS versions of the two apps to compare the permissions accessed by each app. As can be seen in Appendix B, assuming the Apple Privacy Labels are accurate, Facebook collects significantly more personally connected data, including health information.

Data Collection Through Software Development Kits

Every social media platform leverages the power of mobile app software developer kits (SDKs) to allow people to easily read and write from/to their network from other [non-TikTok] apps. SDKs are modules of code that can be integrated directly into any mobile app. Social media platforms usually have at least three freely available SDKs for app developers to use serving three distinct functions:

Login SDK: Allows people to log into an app using their social media credentials (typically called “federated ID”). From a privacy perspective, this functionality allows the social media platform to collect information about the variety of other apps a person uses. People may not realize how much sensitive information is gleaned from simply knowing the names of the apps—examples include period tracker apps, dating apps, and other health care apps.

Share SDK: Allows people to share to their social media network from a myriad of other apps—not just the social media app.

Display Social Media Content SDK: Allows apps to display content from social media platforms directly in their apps. In the case of TikTok, this takes the form of an SDK that allows app developers to embed TikTok videos in their apps.
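A rough sketch of why a login SDK reveals app usage: a federated login sends an identifier for the calling app to the platform before the user even consents. The endpoint, parameter names, and app identifier below are generic OAuth 2.0-style conventions and hypothetical values, not any particular platform's actual API.

```python
# Sketch of a federated "Login with <platform>" authorization request.
# The authorization URL carries the client app's identity, so the
# identity provider learns which app the person is using even if the
# login is never completed. Endpoint and values are hypothetical.
from urllib.parse import urlencode

def login_url(client_id: str, redirect_uri: str) -> str:
    params = {
        "client_id": client_id,          # identifies the third-party app
        "redirect_uri": redirect_uri,    # where the user returns after login
        "response_type": "code",
        "scope": "basic_profile",
    }
    return "https://auth.example.com/oauth/authorize?" + urlencode(params)

# The platform now knows this person opened, say, a period-tracker app
# that embeds its login SDK -- sensitive in itself.
url = login_url("period-tracker-app", "https://apps.example/cb")
print("client_id=period-tracker-app" in url)  # True
```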

Since many social media platforms have their own ad networks, two other common SDKs are used to display ads from the social media company’s ad network and track advertising “events” to measure ad efficacy, etc.

It is important to highlight the privacy risk that SDKs have access to any data and permissions the host app has access to; examining the permissions of the apps that include these SDKs is left for future study.

TikTok: TikTok has all of these kinds of SDKs. According to AppFigures (a mobile analytics service), nearly 48,000 apps include TikTok SDKs, and more than 47,500 of them are available for use in the US (among many other countries). Little has been said about these 47,500 US-available apps. Presumably they were also governed by the US TikTok ban, but there were no reports of these apps disabling TikTok functions.

When we contemplate the privacy risks posed by TikTok, we must also consider that nearly 48,000 other apps are feeding personal information into TikTok servers. TikTok is a vast personal data harvesting machine.[13]

Most apps that use the TikTok SDK are Android apps (47.3K), with only 521 iOS apps using it. The apps that include the TikTok SDK have, in total, billions of downloads, and their developers come from all over the world. The apps include education, sports, finance, weather, health and fitness, dating, communication, and business apps. 1,755 (3.67%) of the apps are educational apps. Nearly 10,000 of the apps that include the TikTok SDK are for children under the age of 18.[14] 6,560 apps are rated for “Everyone”, “4+”, or “9+”.[14]

Data Collection Through Tracking Pixels and Cookies

TikTok has 7 tracking pixels[15] designed to be added to third-party websites. Per TikTok developer documentation, the pixels mainly collect data to understand ad performance, but they collect more than just ad data.[16] The following data collected by TikTok tracking pixels is personally identifiable:

IP address

User agent

Cookies – TikTok has first- and third-party cookies; the latter are on by default

Metadata and button clicks, which, per the documentation, “can also be used to personalize ad campaigns for people on TikTok”, meaning the data is correlated and saved with TikTok user records
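To make concrete what such a pixel transmits, here is a minimal sketch, in Python with hypothetical field names (TikTok’s actual pixel is a JavaScript snippet with its own schema), of the beacon payload a tracking pixel might send when a visitor loads a third-party page:

```python
# Hypothetical sketch of a tracking-pixel beacon; field names are
# illustrative, not TikTok's actual pixel schema.

def build_pixel_event(pixel_id: str, page_url: str, event: str,
                      cookie_id: str, user_agent: str) -> dict:
    # The receiving server also sees the visitor's IP address implicitly
    # from the network connection, so it need not appear in the payload.
    return {
        "pixel_id": pixel_id,      # identifies the site owner's pixel
        "event": event,            # e.g. "PageView" or a button click
        "url": page_url,           # which third-party page was visited
        "cookie_id": cookie_id,    # links this visit to prior visits or a platform account
        "user_agent": user_agent,  # browser/device fingerprinting signal
    }

beacon = build_pixel_event("PIXEL-XYZ", "https://shop.example.com/cart",
                           "AddToCart", "cid-42", "Mozilla/5.0")
```

It is the combination of the cookie identifier with the page URL and event that makes such beacons personally identifying across sites.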

Compared to Facebook, which has 6 tracking pixels, TikTok pixels are found much less frequently on third-party sites (Table 1).

Table 1: TikTok and Facebook Trackers

Data Sharing

There are at least four channels for sharing data with and by social media platforms.

From the social media mobile app directly. Compared to other types of apps audited by ISL, social media apps like TikTok are generally less “leaky”, meaning that they mostly share collected personal information from the app with domains controlled by their corporate owners (also known as “first parties”). This is mainly because platforms like Facebook and TikTok have their own ad networks, so they don’t communicate with other adtech and martech platforms directly from the mobile app.

TikTok: ISL recently took a closer look at the communication in and out of the main TikTok mobile apps (iOS and Android versions). We tested with two profiles, one of an adult and one of a child. Key questions were:

How much data “leakage” was happening, i.e., how much data was being shared with unexpected third parties?

What TikTok or ByteDance servers were involved, and what can we determine about the ownership and location of those servers?

In both test profiles, there was very little data “leakage” to unexpected third parties. We observed only two third-party advertising-related domains: (1) a cross-site Facebook tracker, and (2) a domain associated with the AppsFlyer SDK. Both make sense, as the app integrates Facebook SDKs for login and sharing, as well as the AppsFlyer SDK (among others).

The communication was nearly 100% between the app and TikTok-owned servers or obvious data processors for TikTok, such as Akamai. In terms of data sharing from the app, TikTok is very much like Facebook.

Two TikTok and ByteDance domains in the network traffic do appear to be owned by US-based companies. Note that, despite multiple mentions that TikTok uses Oracle, we observed only one domain owned by Oracle.

Details on the communication to/from the app can be seen in the ISL Safety Labels for the apps:

Android TikTok app: https://appmicroscope.org/app/1729/

iOS TikTok app: https://appmicroscope.org/app/1728/

But Wait, There’s More

People may think that there are only a few TikTok apps available. This is not the case.[17] According to AppFigures, there are 66 apps provided by TikTok or its related entities worldwide, up from 47 in 2025.[18] Of those, 32 TikTok apps are available in the US, including apps for Android TV, Amazon, LG TVs, and Samsung TVs. There are also:

a wallpaper app called “TickTock – TikTok Live Wallpaper” with an estimated 100M downloads worldwide,

a “lifestyle/social” app called Lemon8 – Lifestyle Community with an estimated 10M downloads worldwide, and

a shopping app called “TikTok Shop Seller Center” with an estimated 10M downloads worldwide.

The TV apps present a serious privacy concern as they may collect viewing history; it would be prudent to assume they do. The TikTok for Android TV app has an estimated 10M downloads.

Also of particular concern are two VPN apps, which would have access to all network traffic to and from the phone. Neither appears to be available in the US; if they were, ISL would recommend staying away from them.

See Appendix A for a list of the TikTok apps available in various app stores worldwide.

Via Software Development Kits (SDKs): As mentioned earlier, SDKs have access to all the data the encompassing app has access to. Since these apps may read data from the social media platform to present to their users, they also have access to the user’s social media content. The apps are bound, however, by the developer agreement terms established by the SDK provider, which prohibit such use, as TikTok’s does.[19]

From the social media company servers to Customer Data Platforms and Identity Resolution Platforms: Social media platforms often share bulk personal data with marketing and advertising platforms called Customer Data Platforms (CDPs) and Identity Resolution Platforms (IDRPs). This is accomplished using the CDP’s or IDRP’s published application programming interfaces (APIs), which communicate server to server, not from the mobile app. CDPs and IDRPs are designed to ingest and share personal information at scale. ISL has identified over 300 such companies worldwide[20] and continues to research them to determine the risks this commercial surveillance infrastructure poses to consumers. For instance, nearly 40% of identity resolution platforms are registered data brokers, and it is likely that more of them should be registered as data brokers. One key objective of this commercial surveillance infrastructure is to personalize the experience of all “visitors”, i.e., not just existing customers but anyone who visits a website.[21]
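Server-to-server sharing of this kind is often done by exchanging hashed identifiers so that two databases can be matched on, say, an email address. The sketch below (Python; the function and field names are hypothetical and do not describe any specific CDP’s API) illustrates the mechanism:

```python
import hashlib

# Hypothetical sketch of a server-to-server "audience sync" payload of the
# kind a platform might send to a CDP or identity resolution API.

def hash_email(email: str) -> str:
    # Identifiers are commonly shared as SHA-256 hashes of normalized
    # email addresses: the raw address is not revealed, yet any party
    # holding the same address can compute the same hash and match records.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_sync_payload(emails: list, segment: str) -> dict:
    return {
        "segment": segment,  # e.g. an advertising audience label
        "hashed_emails": [hash_email(e) for e in emails],
    }

payload = build_sync_payload(["a@example.com", "b@example.com"], "high-value-shoppers")
```

Note that hashing here is a matching mechanism, not a privacy protection: anyone who already knows an email address can recompute its hash and link the records.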

TikTok: TikTok does indeed have at least one such integration with an identity resolution platform: LiveRamp, one of the largest identity resolution platforms. TikTok likely has other integrations that would be performed by back-end servers.

Through advertising: Since TikTok has its own ad network, it is also capable of receiving personal information directly from the real-time bidding (RTB) bidstream.

How Large of a Privacy Risk is TikTok, Used as Intended?

From a commercial surveillance perspective, TikTok may be less risky than Facebook, for example, which has access to and uses much more personal and sensitive information for third-party advertising and its own purposes (see Appendix B).

From a government surveillance perspective, there is a non-zero risk that user data was combined across all ByteDance Ltd. properties, and a non-zero risk that TikTok user data was accessible by the Chinese Communist Party (CCP). ISL is unable to further quantify these risks from publicly available information. The same risks now apply to the US government.

TikTok in Education & Social Engineering Data Breaches

TikTok is widely used in classrooms in the US and around the world as a teaching and engagement tool. As noted earlier, the TikTok SDK is also integrated into more than 1,700 education apps. Teachers find the platform a good way to communicate with students and to inspire creativity.[22] At the time of this writing there were 5.9M TikTok posts tagged #teachersoftiktok, illustrating the level of teacher use of TikTok.[23] TikTok also funded a $50M “Creative Learning Fund” in 2020.[24]

Social Engineering-Facilitated Data Breaches

Note that the use of TikTok has been connected to at least one school district data breach[25]. Any accessible data on social media platforms can be used in social engineering-based security attacks.

Digital Assets and Corporate Ownership

There are three key questions with respect to corporate ownership of a restructured TikTok: (1) where are the servers located and which country’s data privacy laws apply, (2) will ByteDance Ltd. retain access to the TikTok data/training set, and (3) how much access do all owners have to the personal data repository?

Corporate Ownership

Corporate ownership—in part or whole—means having access to the assets of the company including access to the digital assets, such as databases of personal information amassed by the company’s products and services.

In 2023 there was considerable confusion over both the corporate ownership of TikTok and the Chinese government’s access to TikTok user data. Sources including TikTok[26] and Poynter[27] explained that ByteDance Ltd., a Cayman Islands-registered company based in China, was majority owned (60%) by a worldwide consortium of equity partners, including at least two large named US private equity firms (Susquehanna International Group and General Atlantic); 20% was owned by the founder, Zhang Yiming, a private individual living in China; and 20% was owned by employees around the world. What was less clear, however, was whether the Chinese government owned a 1% “golden share”. TikTok’s explainer indicates that the Chinese government did not own 1% of ByteDance itself, but of the ByteDance subsidiary Douyin Information Service Co., Ltd. Presumably, Douyin Information Service Co., Ltd. was not included in the sale of TikTok to the newly formed TikTok USDS Joint Venture LLC and remains with ByteDance Ltd. While US Digital Services (USDS) is part of the entity name, the US government is reported not to have a stake in the venture. It is disturbing that a government agency is named in the new venture. We have every reason to assume that data sharing with the US government was a condition of the sale.

Figure 1 below shows the estimated ownership by country location in 2023 and 2026.[28] Of note, China-based ownership appears unchanged in size, but importantly, in 2023 the 20% was owned by Zhang Yiming, whereas now ByteDance Ltd. itself owns about 20% of the company. The authors are not legal experts, but this sustained partial ByteDance Ltd. ownership leaves open the original concerns about the CCP’s access to US citizens’ data.

The chart makes some assumptions about the distribution of ownership based on publicly available information. We don’t know how much of the 40% worldwide ownership (in 2023) was held by equity firms based in the US. It is possible that the 2023 US ownership was higher than 40%, possibly even higher than 50%.

Figure 1: Estimated TikTok ownership 2023 and 2026

Digital Assets

Based on our analysis, TikTok appears to run on relatively separate infrastructure, across at least the three separate TikTok corporate entities noted earlier, but we cannot be sure there isn’t a centralized, or even decentralized but networked, user database.

Cloud Storage

Ostensibly, the original TikTok-related executive order was largely fueled by concern over where user data was stored and who had access to it. Reports vary, but TikTok has been using Akamai, a US company, as a content delivery network since about 2020, and possibly as early as 2016. Our research confirms network traffic to Akamai servers.

Oracle began providing secure cloud storage technology to TikTok Global in 2020[29], taking a 12.5% stake in TikTok Global as part of the deal. We do not know, however, whether data was shared extraterritorially.

The most important privacy issue here, however, is that Oracle is a registered data broker in states that require such registration, meaning that it monetizes the sale of personal information, including location information. Companies that monetize personal information have an incentive to harvest as much personal information as possible, and there is an open class action suit over Oracle’s data collection and selling practices.[30]

Can a data broker be the “trusted” entity to ensure the integrity of vast amounts of sensitive personal information? Perhaps this has never been about the safety and privacy of the data.

Whose data do TikTok owners have access to?

The TikTok sale press release implies that the sale covers only the US market: the “more than 200 million Americans and 7.5 million businesses” that use TikTok.[31] Presumably, the non-US markets’ data and technology assets stay with ByteDance Ltd. Recent user numbers show that Chinese TikTok (called Douyin) users comprise nearly half of the 1.5B monthly TikTok users (Table 2).

Table 2: TikTok Users (Source: Statista[32])

Conclusions

Troves of personal information are risky for consumers in any corporation’s hands, and no current regulation prevents their accumulation. Nor does any regulation adequately prevent the commercial trading of personal information. Until both are addressed, consumers and nation states face unreasonable risks that sensitive information, such as location data, will be systematically shared with data brokers and thus be available for sale to anyone who wants it, including adversary countries. Social media platforms of all kinds should hold a duty of loyalty to the data subjects whose personal information they retain, but they currently do not, and they present unique risks due to the volume of highly sensitive personally identifiable information they hold.

Is it reasonable to suggest that a US company is a better corporate custodian of a massive trove of personal information? Certainly, ensuring that US citizens’ data stays out of the hands of any foreign government is worthwhile. Whether that is possible amid hundreds of networked, global commercial surveillance entities is an open question, with a likely and depressing answer of “no.” Additionally, the US doesn’t currently have a federal privacy law, and the US-based technology industry continues to operate in a general spirit of lawlessness (e.g., training machine learning models on scraped data).

Is the new ownership a safer configuration for people as both consumers and citizens? No. With nearly 20% ByteDance Ltd. ownership in the joint venture, the new structure fails to eliminate concern over the Chinese government having access to the data of US citizens. Moreover, a UAE-based private equity firm now also has ownership (15%) and access to the assets. It’s unclear precisely how connected the new venture is to the US government, but given this administration’s behaviors over the past 12 months, it seems clear that the joint venture facilitates use of citizen data by the US government. Finally, Oracle, a data broker with a spotty privacy reputation, is now the primary authority for data privacy over the personal information of hundreds of millions of US citizens. TikTok has already changed its privacy policy to collect precise (instead of just coarse) location data[33] and to collect “AI interactions”.[34]

Overall, what has changed with the sale of select TikTok assets to the new TikTok USDS Joint Venture LLC? If the previous ownership by ByteDance Ltd. was truly the reason for fearing CCP access to US TikTok users’ data, that threat remains, with ByteDance Ltd. retaining 19.9% ownership in the joint venture. It is hard not to regard this years-long campaign as anything other than a money, personal data, and power enrichment strategy for select US tech oligarchs and the US government. This administration’s unrestrained thirst for personal data, from DOGE to Palantir to Flock, is strong evidence that TikTok data will be used in support of the administration’s militarized “remigration” and government propaganda purposes, further destabilizing the US’s withering democracy.

Appendix A – TikTok Apps Available Worldwide

Table 3: TikTok Apps in 2025

Table 4: TikTok Apps in 2026

Appendix B – TikTok vs. Facebook iOS App Permissions

B.1  DATA USED TO TRACK YOU

B.2  DATA LINKED TO YOU

B.3  DATA NOT LINKED TO YOU

[1] Data obtained from AppFigures https://appfigures.com/

[2] Laura He, “Wait, is TikTok really Chinese?”, CNN, March 28, 2024, https://www.cnn.com/2024/03/18/tech/tiktok-bytedance-china-ownership-intl-hnk/index.html
Note that TikTok asserts that ByteDance Ltd. is not strictly based in China:
“Myth: TikTok’s parent company, ByteDance Ltd., is Chinese owned.

Fact: TikTok’s parent company ByteDance Ltd. was founded by Chinese entrepreneurs, but today, roughly sixty percent of the company is beneficially owned by global institutional investors such as Carlyle Group, General Atlantic, and Susquehanna International Group. An additional twenty percent of the company is owned by ByteDance employees around the world, including nearly seven thousand Americans. The remaining twenty percent is owned by the company’s founder, who is a private individual and is not part of any state or government entity.”  “Myth vs Facts”, TikTok U.S. Data Security, https://usds.tiktok.com/usds-myths-vs-facts/ , accessed on 2/2/25.

[3] https://www.crunchbase.com/hub/bytedance-portfolio-companies

[4] “Myths vs Facts”, TikTok U.S. Data Security, https://usds.tiktok.com/usds-myths-vs-facts/, accessed on 1/26/26.

[5] https://www.forbes.com/sites/emmawoollacott/2024/08/28/us-government-requests-most-user-data-from-big-tech-firms/

[6] Elizabeth Goitein, “The Government Can’t Seize Your Digital Data. Except by Buying It.” Brennan Center for Justice, April 28, 2021, https://www.brennancenter.org/our-work/analysis-opinion/government-cant-seize-your-digital-data-except-buying-it

[7] Glenn Greenwald and Ewen MacAskill, “NSA Prism program taps into user data of Apple, Google and others”, The Guardian,  June 8, 2013, https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data

[8] https://www.brookings.edu/articles/privacy-under-siege-doges-one-big-beautiful-database/

[9] https://www.404media.co/ice-taps-into-nationwide-ai-enabled-camera-network-data-shows/

[10] https://www.democracynow.org/2026/1/26/headlines/attorney_general_bondi_demands_access_to_minnesotas_voter_rolls_and_welfare_data

[11] Kanishka Singh and Sheila Dang, “Musk and X are epicenter of US election misinformation, experts say”, Reuters November 4, 2024, https://www.reuters.com/world/us/wrong-claims-by-musk-us-election-got-2-billion-views-x-2024-report-says-2024-11-04/

[12] This can and should be disabled. Instructions are readily found through an internet search.

[13] For comparison purposes: more than 414,000 apps include Facebook’s four SDKs: 311.7K apps include Facebook Login, 275.5K include Facebook Share, and 139.9K include Facebook Ads. Source: AppFigures, accessed 1/26/26.

[14] Per the app store content rating.

[15] https://www.ghostery.com/whotracksme/search

[16] https://ads.tiktok.com/help/article/tiktok-pixel

[17] Jake Peterson, “All the Apps ByteDance Operates in the US: It’s not just TikTok”, LifeHacker, January 29, 2025. https://lifehacker.com/tech/apps-bytedance-operates-in-united-states

[18] Includes TikTok Ltd., TikTok Pte. Ltd, and Tsingtao TikTok Information Technology Company Limited; AppFigures, accessed on February 2, 2025.

[19] “TikTok Developer Terms of Service”, Last modified March 21, 2024, TikTok, https://www.tiktok.com/legal/page/global/tik-tok-developer-terms-of-service/en

[20] https://internetsafetylabs.org/resources/references/identity-resolution-and-customer-data-platform-companies/

[21] Lisa LeVasseur, “Worldwide Web of Human Surveillance: Identity Resolution and Customer Data Platforms”, Internet Safety Labs, https://internetsafetylabs.org/wp-content/uploads/2024/07/Worldwide-Web-of-Human-Surveillance-Identity-Resolution-and-Customer-Data-Platforms.pdf

[22] Deidre Olsen, “TikTok in the Classroom: The Good, The Bad, and the In-Between”, TEACH Magazine, May/June 2023 Issue, https://teachmag.com/archives/22904  ; “How to Use TikTok to Engage Students in Learning”, Children’s Health Council, https://dev.chconline.org/resourcelibrary/how-to-use-tiktok-to-engage-students-in-learning/

[23] https://www.tiktok.com/tag/teachersoftiktok

[24] https://newsroom.tiktok.com/en-us/investing-to-help-our-community-learn-on-tiktok

[25] Internet Safety Labs, “Another School District Hacked”, Internet Safety Labs, November 16, 2023, https://internetsafetylabs.org/blog/research/another-school-district-hacked/

[26] https://newsroom.tiktok.com/the-truth-about-tiktok?lang=en-AU

[27] https://www.poynter.org/fact-checking/2024/who-owns-tiktok-bytedance-china-ban/

[28] https://www.nytimes.com/2026/01/22/business/media/tiktok-investors-oracle-mgx-silver-lake-bytedance.html#:~:text=What’s%20notable?,to%20comment%20on%20its%20investment.

[29] https://www.oracle.com/news/announcement/oracle-chosen-as-tiktok-secure-cloud-provider-091920/

[30] In 2024, Oracle settled a class action suit alleging that the company “improperly captured, compiled and sold individuals’ online and offline data to third parties without obtaining their consent.” The settlement has been appealed and the case is ongoing.

[31] https://usdsjv.tiktok.com/

[32] https://www.statista.com/statistics/1299807/number-of-monthly-unique-tiktok-users/

[33] Note that the app nutrition label in the Apple store still only notes coarse location data collection.

[34] https://www.wired.com/story/tiktok-new-privacy-policy/

The post TikTok’s Real Privacy Risks appeared first on Internet Safety Labs.


FIDO Alliance

Payments Journal: Why the Future of Financial Fraud Prevention Is Passwordless

Fraud is evolving faster than ever, with AI-powered scams, deepfake-enabled identity theft, and a surge in account takeovers putting financial institutions on high alert and accountholders at risk. As the most visible safeguard of the past few decades, the humble password is coming under increasing scrutiny.

In a PaymentsJournal podcast, Dr. Adam Lowe, Chief Product and Innovation Officer at CompoSecure and Arculus, and Suzanne Sando, Lead Analyst of Fraud Management at Javelin Strategy & Research, explored the rising fraud challenges facing financial institutions and how some of the latest solutions may be inspired by innovations in retail.


Payments Dive: Charting 2026 payments trends

For our 2026 outlook, we picked six trends to better acquaint you with what to expect this year in the payments arena, but then we went a step further in selecting three worthy of a deeper dive. See all four stories below, and a brief explanation of why we focused where we did.

Our senior reporter, Justin Bachman, dug deeper into the AI-driven agentic payments topic to help readers better understand when and how this tech tool really becomes a reality. Spoiler: not in 2026. Still, his story describes the challenges being tackled this year that are likely to lead to bot payments as early as next year.


CNBC: Data breaches climbed to a record high in 2025. How to protect your personal information

It’s the letter most consumers dread receiving — the notification that your personal information has been involved in a data breach.

About 80% of respondents to a new survey said they received at least one data breach notice in the prior 12 months, according to the Identity Theft Resource Center.

Nearly 40% of respondents received three to five separate notices over that period. The survey polled 1,040 individuals in November.

Of those who recently received a data breach notice, 88% reported at least one negative consequence, such as increased phishing or other scam attempts, more spam emails or robocalls or an attempted account takeover, the survey found.


Cybersecurity Dive: Top 3 factors for selecting an identity access management tool

It’s not like forgetting the milk at the grocery store. No big deal, just add it to the list for next time. But that kind of oversight in identity management isn’t as simple to fix, and organizations that adopt a solution later may find it becomes an expensive add-on to their security to-do list.

It’s a situation many organizations find themselves in. The Cisco Duo 2025 State of Identity Security reports that 74% of IT leaders admit identity security is often an afterthought in infrastructure planning. As a result, businesses scramble to tack on an identity solution, often too late to assess whether it’s the right fit for their architecture, compliance, and scalability goals. ’Cause unlike the milk, it’s harder to swing back later and grab the right solution.

Thursday, 29. January 2026

FIDO Alliance

Recap: FIDO Tokyo Seminar 2025 – Toward a Passwordless World: Deepening Japan’s Leadership and Deployment 

On December 5, 2025, the digital identity community gathered at Tokyo Port City Takeshiba for the 12th FIDO Tokyo Seminar. Under the theme “Towards a Passwordless World”, the event brought together 300+  industry leaders, government officials, and engineers to discuss the effectiveness of passkeys as a countermeasure against phishing and to explore the future landscape of digital identity.

Global Momentum: Local Leadership Driving Adoption

The seminar kicked off by highlighting the rapid adoption of FIDO standards and the strong commitment shown by the Japanese market.

Andrew Shikiar, CEO & Executive Director of the FIDO Alliance, shared the latest metrics: over 7 billion accounts worldwide are now protected by passkeys, with more than 3 billion passkeys saved by users. Data from the newly introduced “Passkey Index” further demonstrated the technology’s impact, revealing a 93% authentication success rate and a 73% reduction in login times.

In the Japanese market, Koichi Moriyama (NTT DOCOMO), Chair of the FIDO Japan Working Group (FJWG), reported on the growth of the local community as it celebrates its 10th anniversary and 111th monthly meeting. The day also marked a notable announcement: the FIDO Alliance has signed a liaison partnership with the Japan Securities Dealers Association (JSDA). This partnership is expected to accelerate security improvements and FIDO adoption across the entire securities industry.

Policy & Security: From Recommended to Essential

In 2025, Japan’s policy and security strategies are upgrading phishing-resistant authentication from “recommended” to “essential.”

Digital Agency: Masanori Kusunoki addressed the revision of the guidelines for online identity verification in administrative procedures (DS-500 to DS-511). He expressed the view that for Assurance Level 2 or higher, phishing-resistant methods like the My Number Card or passkeys will effectively become mandatory.

NPA & FSA: Takahide Sannomiya (National Police Agency) and Motoshi Matsunaga (Financial Services Agency) emphasized the importance of passkeys in countering cyber threats. In the financial sector specifically, policies are advancing to default to phishing-resistant Multi-Factor Authentication (MFA) for critical operations such as logins and fund transfers.

Proven Success & Next Frontier: Account Recovery

A highlight of the seminar was the consensus that passkeys have moved beyond “early adoption” to become mainstream in Japan’s major services.

The “Passkey Index Japan” panel session (Mercari, NTT DOCOMO, KDDI, FIDO Alliance) revealed that passkey authentication usage has exceeded 50% among smartphone users at these three companies. It was disclosed that 50.4% of all monthly active users (MAU) for authentication services are already utilizing passkeys.

This widespread usage, spanning all ages and demographics, suggests that passkeys are a realistic solution that balances convenience with security.

The discussion also focused on “Account Recovery” as one of the key challenges following widespread passkey adoption. Tatsuya Karino (Mercari), Masao Kubo (NTT DOCOMO), and Hideki Sawada (KDDI) emphasized the importance of secure recovery processes utilizing My Number Cards (JPKI) and eKYC, as well as designing for device changes. This is poised to be a cross-industry theme for 2026.

Securities Transformation: Advancing Passkey Deployment

The transformation within the securities industry is noteworthy. Shinobu Hirayama of Rakuten Securities reported that the company completed the rollout of passkey authentication (FIDO2) across all channels in October 2025. According to Hirayama, five securities firms have already implemented FIDO2, with that number expected to rise to seven by the end of the year. He emphasized that passkeys play a central role in building a technology-based “layered defense” against evolving fraud attacks.

Deep Dive into Tech: Platforms & Security

Technical sessions for developers and security experts explored the latest features supporting passkey implementation.

Google Platform Evolution: Eiji Kitamura shared the latest updates based on Credential Manager. Of particular note was the “Restore Credentials API,” which promises to improve the developer experience by enabling seamless sign-ins when users migrate to new devices.

Session Protection: In the “All About Passkeys” session (Eiji Kitamura, Kosuke Koiwai, Masaru Kurabayashi), the discussion turned to the risks of “session hijacking” that remain even after passkey adoption. Speakers argued for the necessity of risk-based session protection and new specifications like Device Bound Session Credentials (DBSC) to counter malware-based cookie theft.

Ecosystem & Innovation: Expanding Use Cases

Presentations from sponsor companies demonstrated a mature ecosystem capable of supporting diverse use cases.

Regulated Industries & Finance: Gim Leng Koh (OneSpan) presented a dual-key approach for financial institutions, enabling device health assessment and transaction signing (WYSIWYS).

Scale & Performance: Eugene Lee (RaonSecure) introduced their FIDO solution’s high processing performance, supporting over 10 million monthly users.

Solving B2B Challenges: Kazuhito Shibata (ISR) addressed the barriers hindering MFA adoption in corporate environments.

Device Security in the AI Era: Everett Hiroshi Shiina (Yubico) explained the importance of hardware-attested Single Device Passkeys in the face of rising AI threats.

Lifecycle Protection: Takashi Yoshii (Daon) introduced the integration of FIDO authentication with deepfake-detection-enabled eKYC via the IdentityX platform.

Customer Engagement: Mitsuharu Nakamura (Twilio) proposed a seamless authentication experience using Twilio Verify, which supports passkeys alongside SMS and TOTP.

Beyond Authentication: Digital Credentials & Identity

The conversation extended beyond authentication to the entire identity lifecycle.

In a video message, Lee Campbell (Google/FIDO Alliance Digital Credential WG Co-Chair) shared the vision of extending the trust and interoperability established by passkeys to “Digital Credentials,” defining ecosystem standards for wallets and identity verification.

The final panel session, featuring members from the FIDO Alliance, OpenID Foundation, OpenID Foundation Japan, and the Digital Agency, deepened the discussion on managing the entire identity lifecycle—from account creation to recovery.

Looking Forward: Building Japan’s Digital Identity Future

The 12th FIDO Tokyo Seminar served as a testament that passkeys are becoming firmly established as part of Japan’s digital social infrastructure. As we look toward 2026, the FIDO Alliance’s initiatives will continue to expand from authentication to the entire identity lifecycle and into the realm of digital credentials.

We would like to express our sincere gratitude to the sponsor companies who supported this event, as well as to all the speakers and attendees. We look forward to seeing you at our next event!

Wednesday, 28. January 2026

Next Level Supply Chain Podcast with GS1

(Replay) Spicing up Success: How Traceability Helped Hank Sauce Scale National Distribution


A hot sauce brand doesn't scale on taste alone. It scales on consistency, traceability, and trust.

Matt Pittaluga, Co-Founder of Hank Sauce, joins hosts Reid Jackson and Liz Sertl to share how a college project became a product sold in more than 5,000 stores. He breaks down the early hustle, the jump into retail, and what it takes to keep quality tight as demand grows.

You'll also hear how Hank Sauce approaches traceability through product codes to track ingredients from batch creation to store shelves.
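The mechanism described, tracking ingredients from batch creation to store shelves via product codes, boils down to recording which ingredient lots went into each finished batch and where that batch shipped. The following sketch is purely illustrative; the batch IDs, field names, and data are invented, not Hank Sauce's actual system.

```python
# Each finished batch records the ingredient lots it consumed and its
# destinations, so a recall can be traced forward from any ingredient lot.
batches = {
    "HS-2026-014": {
        "product": "Original",
        "ingredient_lots": {"peppers": "P-8812", "vinegar": "V-2203"},
        "shipped_to": ["Store 41", "Store 77"],
    },
    "HS-2026-015": {
        "product": "Honey Habanero",
        "ingredient_lots": {"peppers": "P-8812", "honey": "H-0930"},
        "shipped_to": ["Store 12"],
    },
}

def trace_ingredient_lot(lot_code: str) -> dict:
    """Forward trace: which batches used this lot, and where did they go?"""
    hits = {}
    for batch_id, rec in batches.items():
        if lot_code in rec["ingredient_lots"].values():
            hits[batch_id] = rec["shipped_to"]
    return hits

# A recall on pepper lot P-8812 touches both batches and all three stores.
affected = trace_ingredient_lot("P-8812")
```

With this structure, both forward traces (ingredient lot to stores) and backward traces (batch to ingredient lots) are simple lookups, which is the practical payoff of assigning codes at batch creation.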

This episode is a replay of our conversation with Matt, brought back for anyone building, scaling, or supplying food brands. In this episode, you'll learn:

How Hank Sauce scaled its distribution to national retailers

The importance of traceability in ensuring food safety and product quality

Strategies for building networks to expand brand reach

Jump into the conversation: (00:00) Introducing Next Level Supply Chain

(01:34) The Hank Sauce story

(06:38) Grassroots marketing and early sales strategies

(13:34) Scaling up distribution to large retailers

(16:38) The importance of traceability and food safety

(19:51) Building a brand with a limited marketing budget

(23:43) Advice for new entrepreneurs

(31:22) Matt Pittaluga's favorite tech

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

Register now for this year's GS1 Connect and get an early bird discount of 10% when you register by March 31 at connect.gs1us.org.

Connect with the guest:

Matt Pittaluga on LinkedIn

Check out Hank Sauce

Tuesday, 27. January 2026

Oasis Open

Coalition for Secure AI Releases Extensive Taxonomy for Model Context Protocol Security


Collaborative Industry Effort Delivers Identity Management, Supply Chain Integrity, and Protocol Security for AI Agent Deployments

Boston, MA – 27 January 2026 – OASIS Open, the international open source and standards consortium, announced the release of the “Model Context Protocol (MCP) Security” white paper from the Coalition for Secure AI (CoSAI), an OASIS Open Project. This framework equips security professionals and developers to identify, assess, and mitigate risks in MCP-based AI agents, addressing the urgent need for standardized security practices as AI increasingly connects to external tools and services.

Securing the Bridge Between AI and the Real World

MCP, developed by Anthropic, a CoSAI Sponsor, together with a growing open source community, has emerged as a leading protocol for connecting AI agents to external tools, databases, APIs, and services. However, like any integration protocol, MCP deployments face active and evolving threats.

This security framework presents a well-defined taxonomy of nearly forty threats and concrete mitigation strategies across twelve distinct categories, spanning identity and access control, input validation, data protection, network security, supply chain integrity, and operational visibility. The framework distinguishes between traditional security concerns amplified by AI mediation and novel attack vectors unique to LLM-tool interactions, enabling security teams to implement defense-in-depth strategies tailored to their specific deployment patterns.
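A taxonomy like the one described, threats grouped into categories with mitigations and a traditional-versus-novel distinction, lends itself to a simple structured representation. The sketch below shows one plausible shape for such a catalog; the category names, threats, and mitigations are generic examples chosen for illustration, not the paper's actual list.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    category: str        # e.g. "identity and access control"
    novel_to_ai: bool    # LLM-specific attack vector vs. traditional concern amplified by AI
    mitigations: list = field(default_factory=list)

# Illustrative entries only; the CoSAI paper defines its own taxonomy.
taxonomy = [
    Threat("Stolen server credentials", "identity and access control", False,
           ["short-lived tokens", "least-privilege scopes"]),
    Threat("Prompt injection via tool output", "input validation", True,
           ["output sanitization", "human approval for sensitive actions"]),
    Threat("Malicious MCP server package", "supply chain integrity", False,
           ["signed releases", "pinned versions"]),
]

def by_category(threats: list) -> dict:
    """Group threat names under their category, as a taxonomy index."""
    groups = {}
    for t in threats:
        groups.setdefault(t.category, []).append(t.name)
    return groups
```

Encoding the taxonomy this way lets a security team filter for the novel, LLM-specific vectors separately from the amplified traditional ones, which mirrors the distinction the framework draws.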

“As AI moves beyond chat models to agents, gaining the ability to take actions and interact with their environments and the real world through tool calling, the security implications and potential consequences are much more severe,” said Ian Molloy, IBM, and Sarah Novotny, CoSAI’s Workstream 4 Co-Leads. “This framework represents the expertise of CoSAI’s members and contributors who understand that protecting agentic systems requires addressing everything from protocol-level authentication and supply chain integrity to guardrails, systems security and enforcement.”

Collaborative Industry Effort

The MCP Security paper was developed by CoSAI’s Workstream 4: Secure Design Patterns for Agentic Systems, drawing on contributions across CoSAI’s Sponsors and partner organizations, including Premier Sponsors EY, Google, IBM, Meta, Microsoft, NVIDIA, PayPal, Snyk, Trend Micro, and Zscaler. Additional CoSAI AI Security Guidance Publications can be found on GitHub.

Technical contributors, researchers, and organizations are welcome to participate in CoSAI’s open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in AI security. Contact join@oasis-open.org for more information.

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide. CoSAI operates under OASIS Open, an international standards and open-source consortium. www.coalitionforsecureai.org

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries: communications@oasis-open.org

The post Coalition for Secure AI Releases Extensive Taxonomy for Model Context Protocol Security appeared first on OASIS Open.

Monday, 26. January 2026

Trust over IP

ToIP EGWG 2026-01-22 Keith Jansa and Digital Interoperability and Mutual Recognition

The post ToIP EGWG 2026-01-22 Keith Jansa and Digital Interoperability and Mutual Recognition appeared first on Trust Over IP.

TOIP EGWG 2025-12-11 Andy Woodruff and Using SSI to Champion Creator Governed Content

Is the creator economy broken? The Open Commercial Media Ecosystem (OCME) is restructuring digital media by shifting from User-Generated Content (UGC) to Creator-Governed Content (CGC).

Watch the full recording on YouTube.

Status: Verification Pending by Presenter

Please note that ToIP used Google NotebookLM to generate the following content and will update the status to “Verified by Presenter” accordingly.

Google NotebookLM Podcast

https://trustoverip.org/wp-content/uploads/OCME_Pays_Creators_60__Gross_Revenue.m4a

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google’s NotebookLM:

A 42,000% Growth Spurt: 3 Radical Ideas from a New Creator-Governed Ecosystem

When a digital media ecosystem reports 42,000% year-over-year growth in viewership, it’s time to pay attention. The Open Commercial Media Ecosystem (OCME), a non-profit, members-based organization founded in 2024, is demonstrating a powerful new model for digital content. Its success isn’t just about numbers; it’s a direct response to a core industry problem. In a global creator market estimated at $205 billion in 2024, the relationship between creators and platforms remains fundamentally broken. Platforms exercise unilateral control over monetization, content policies, and account access, treating creators as users to be monetized, not partners to be empowered.

OCME proposes a new blueprint built on the principle of creator governance. We reviewed Executive Director Andy Woodruff's recent deep-dive to extract the three core principles that make this ecosystem not just successful, but potentially revolutionary.

1. Beyond ‘Users’: Why Governing Your Content is the New Power Play

The first and most fundamental concept OCME introduces is the shift from User-Generated Content (UGC) to Creator-Governed Content (CGC). This isn’t just a change in terminology; it’s a fundamental restructuring of the power dynamics.

In the traditional UGC model:

The platform owns your digital identity.

The platform controls all policies, which can change at any time without your input.

The platform sets revenue rates to benefit its shareholders, not the community.

In OCME’s CGC model:

The user controls their own identity.

Users, as voting members, participate in setting policies.

Users set revenue rates through democratic processes to benefit stakeholders—the people creating the value.

This transition moves the creator from a passive participant, subject to the whims of an algorithm or corporate policy change, to an active governor of their own digital space. The philosophical foundation for this shift is captured perfectly in the principle of self-sovereign identity:

A person’s identity that is neither dependent on nor subject to any other power or state.

2. Rewriting the Rules of Revenue: Inside a Radically Fair Financial Model

OCME’s second game-changing idea is a financial model designed for maximum transparency and efficiency, ensuring more value flows directly to creators. Three innovations stand out:

Clear Revenue Share: Creators receive 60% of gross revenue. This is a critical distinction. While platforms like YouTube and Spotify offer seemingly comparable rates, they are calculated from net revenue, after the platform has deducted its own costs. OCME's top-line split is noticeably higher and far more transparent.

Drastically Lower Fees: By using USDC stablecoins for global payments, the ecosystem slashes transaction fees from the 2-4% charged by traditional rails like Stripe or PayPal to mere "pennies." The impact is substantial: for a creator earning $10,000 a month, avoiding those 2-4% fees means keeping an extra $5,000 a year that would otherwise vanish to middlemen.

Eliminating 'Black Box' Funds: The presentation highlighted a major problem in the record label industry: "black box funds." This is where an estimated one billion dollars in unclaimed royalties goes each year, money that labels often simply keep because they don't know who to pay. OCME solves this by cryptographically linking every piece of content to its creator, ensuring that value can always be traced and paid out to its rightful owner.
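The fee claim is simple arithmetic worth making explicit: at 2-4% per transaction, a creator grossing $10,000 a month pays $2,400-$4,800 a year in rail fees, the top of that range approaching the roughly $5,000 figure cited. A minimal worked sketch:

```python
def annual_fee_cost(monthly_gross: float, fee_rate: float) -> float:
    """Yearly cost of per-transaction fees on a creator's gross payouts."""
    return monthly_gross * 12 * fee_rate

def creator_payout(monthly_gross: float, share: float = 0.60) -> float:
    """OCME's stated split: 60% of gross (not net) revenue."""
    return monthly_gross * share

low = annual_fee_cost(10_000, 0.02)    # $2,400 per year at 2%
high = annual_fee_cost(10_000, 0.04)   # $4,800 per year at 4%
payout = creator_payout(10_000)        # $6,000 per month at the 60% split
```

The contrast with net-revenue splits is the key point: 60% of gross is computed before platform costs, so the two percentages are not directly comparable.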

These innovations are not isolated perks; they form a holistic financial engine. By combining a higher gross revenue share with radically lower transaction costs and guaranteed attribution, OCME ensures that value created by artists is not just recognized, but captured by them.

3. Pragmatism Over Purity: The Counterintuitive Genius of a Unified Ecosystem

In a world trending toward decentralized, interlocking systems, OCME makes a compelling case for a different approach: a single, unified, and hierarchical ecosystem. Instead of building an “ecosystem of ecosystems”—a flat, or “heterarchical,” structure where separate systems require transition layers and bridges—OCME has built one coherent framework containing different content verticals called “colonies” (e.g., music videos, gaming, news).

In an industry often obsessed with pure decentralization, OCME’s choice of a hierarchical model is a pragmatic masterstroke. It makes a deliberate trade-off, sacrificing architectural heterogeneity for a vastly superior creator experience—one identity, one ecosystem, zero friction. A creator joins the ecosystem once, receives one DID (digital identity), and can participate across any colony. Their identity, credentials, and reputation are seamless and portable throughout the entire system, putting the creator’s experience first.
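The "one identity, one ecosystem" claim can be sketched concretely: a member registers once, receives a single DID, and every colony admits that same identifier rather than issuing its own account. The structure below is a loose illustration of that design choice only; the class, the `did:example` method string, and the reputation field are all invented for this sketch.

```python
import uuid

class Ecosystem:
    """Toy unified ecosystem: one member registry shared by all colonies."""

    def __init__(self, colonies):
        self.members = {}                        # did -> profile
        self.colonies = {c: set() for c in colonies}

    def join(self, name: str) -> str:
        # One-time registration issues a single portable identifier.
        did = f"did:example:{uuid.uuid4().hex[:12]}"
        self.members[did] = {"name": name, "reputation": 0}
        return did

    def enter_colony(self, did: str, colony: str) -> None:
        if did not in self.members:
            raise KeyError("unknown DID")
        # Same identity everywhere: no new signup, no bridging layer.
        self.colonies[colony].add(did)

eco = Ecosystem(["music videos", "gaming", "news"])
did = eco.join("Ada")
eco.enter_colony(did, "gaming")
eco.enter_colony(did, "news")
```

The hierarchical trade-off shows up in the code's shape: one registry, no per-colony identity stores, and therefore no transition layers between verticals.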

Conclusion: The Future is Governed by Creators

Together, these three takeaways—creator governance, financial transparency, and a unified architecture—present a cohesive vision for a more equitable and efficient creator economy. This is no longer just a theory. With 600 members having already registered over 6,900 pieces of content, OCME has demonstrated a working, scalable model that is attracting creators at an astonishing rate.

The platform has moved beyond simply giving creators a space to upload content; it’s giving them the tools to build and govern their own digital destiny. This raises a final, exciting question: As creators gain more power to shape their own ecosystems, what new forms of collaboration and content will they build that we can’t even imagine today?

For more details, including the slides, meeting recording, and transcript, please see our wiki: 2025-12-11 Andy Woodruff and Using SSI to Champion Creator Governed Content

https://www.linkedin.com/in/andrew-woodruff-72b70075/

The post TOIP EGWG 2025-12-11 Andy Woodruff and Using SSI to Champion Creator Governed Content appeared first on Trust Over IP.


FIDO Alliance

Payment Industry Intelligence: Agentic Commerce and the quiet return of Guest Checkout


Agentic commerce is steadily rewiring how digital transactions occur. Instead of shoppers manually navigating screens, entering credentials and approving each step, intelligent software agents are beginning to select products, optimise pricing and initiate payment on the user’s behalf.

In that environment, the long-maligned guest checkout flow is gaining fresh relevance—not as a stopgap, but as a structurally efficient payment model.


WSJ: Out With the Old: Is Ending Passwords the Start of Improved Identity Security?


From friction to fluidity: Why passkeys, biometrics, and magic links are poised to end the password era and increase privacy

As cyber threats intensify and user frustration with passwords seemingly grows, enterprises are turning to passwordless authentication for improvement in both security and customer experience. This shift—led by passkeys, biometrics, and magic links—promises not just stronger defenses but simpler, faster, and more imaginative identity journeys.


PCWorld: 1Password review: A password manager designed for the Apple crowd


1Password started as a macOS app, way back in 2006—and you can still feel that influence in its design. Even though the service now works across all major operating systems, the team still leans into a particular approach. This password manager is streamlined and runs smoothly, but users shouldn’t expect to see behind the veil.

1Password allows you to import passwords via CSV from other password managers. If coming from Bitwarden, you can import more securely through an encrypted .json file. 1Password will also support the FIDO Alliance’s Credential Exchange Protocol (CXP) starting in early 2026, which allows secure transfer of passkeys in addition to passwords between apps and services with CXP enabled.


Hyperledger Foundation

Mentorship Spotlight: Building a Confidential Digital Asset Escrow with Hyperledger Fabric

Project Goals and Motivation

The main goal of this mentorship project, Fabric Private Chaincode and CC-Tools for privacy-sensitive applications, was to explore how a confidential digital asset system with programmable escrow can be built on Hyperledger Fabric, without exposing sensitive transaction details.


Velocity Network

Robin Weninger joins the Board of Directors of Velocity Network Foundation

The post Robin Weninger joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Thursday, 22. January 2026

EdgeSecure

EdgeCon Winter 2026

The post EdgeCon Winter 2026 appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Technology Across the Student Lifecycle: Strategy, Process, and Transformation

On January 15, 2026, EdgeCon Winter, hosted in partnership with Princeton University, convened higher education leaders and technology professionals for an intensive exploration of how technology, data analytics, and artificial intelligence are reshaping institutional operations and the student experience. The conference examined the growing impact of technology across every phase of the student lifecycle, from recruitment and enrollment to retention and career outcomes. Through a thought-provoking keynote panel and comprehensive breakout sessions, attendees explored strategic frameworks, practical implementations, and real-world case studies that demonstrated how institutions can leverage technology to enhance educational experiences, improve operational efficiency, and build long-term institutional resilience.

Combating Fraud in the Digital Age

As institutions increasingly face sophisticated fraud attacks in the admissions process, Thomas Edison State University took decisive action to protect their community and maintain institutional integrity. In Protecting Your Institution From Fraudulent Applications with Identity Verification, Christine Carter, Director Graduate Admissions & Recruitment, Enrollment Technology, and Jeff Butera, Lead Analytics Consultant, Voyatek, shared their comprehensive approach to detecting and preventing admission fraud.

The presentation walked attendees through the institution's decision-making process, from recognizing the scale of the problem to selecting and implementing an identity verification solution. Carter and Butera discussed the practical challenges of preparing internal stakeholders and external constituents for new verification processes, as well as the critical support considerations that ensured a successful launch. Attendees gained insight into major considerations for any fraud detection solution, best practices for managing change across departments, and lessons learned that can help other institutions protect themselves from increasingly sophisticated fraud attempts.

Authentic Storytelling for Enrollment and Engagement

Manor College launched an innovative approach to student recruitment and retention through "The Nest," a dynamic podcast showcasing diverse alumni journeys. In From Blue Jays to Bright Futures: Hatching a Podcast for Enrollment & Engagement, Kelly Peiffer, MA, Vice President of Marketing Communications, and Anthony Machcinski, Director of Marketing, Content and Photography, shared the strategic development and behind-the-scenes execution of this storytelling initiative.

The session went beyond theory to offer a practical roadmap for creating impactful audio content. Peiffer and Machcinski covered the podcast's conceptualization, content strategy for identifying compelling alumni stories, production workflow leveraging campus resources and student involvement, and dissemination planning for maximum reach. They also shared key lessons learned, including unexpected challenges and effective solutions, along with preliminary metrics on prospective student engagement and current student feedback. Attendees gained actionable insights into developing storytelling strategies, engaging alumni as advocates, and utilizing podcasting as an innovative communication tool that strengthens recruitment efforts and cultivates lifelong connections with students.

“I appreciate these events and the Edge team that puts them together. They are always welcome opportunities to connect with colleagues and hear about important and innovative work being done across our institutions.”

– Jeff Berliner
Chief Information Officer
Institute for Advanced Study

Understanding the NSP Difference

Network Service Providers and commercial Internet Service Providers may appear similar, but they are fundamentally different in design, governance, and purpose. In Why NSPs Are Not ISPs: Architecture, Intent, and Outcomes, Christopher Henderson, Senior Network Engineer, drew on years of experience designing and operating large-scale enterprise networks to explain these critical distinctions.

Henderson explored how NSPs like EdgeNet are purpose-built to support teaching, research, healthcare, and public mission in ways that traditional ISPs cannot. The session examined NSP architectures that prioritize resilience, predictability, scalability, and long-term institutional outcomes through high-capacity fiber backbones, optical transport, and packet-based services that enable advanced research workflows and large-scale data movement. Attendees gained insight into why these design choices matter from both operational and strategic standpoints, and why understanding the distinction between NSPs and ISPs leads to better decisions about campus connectivity, digital strategy, and future-ready infrastructure.

AI Implementation Roundtable

Artificial intelligence is fundamentally transforming higher education operations and pedagogy, but implementation challenges vary widely across institutions. Implementing AI in Higher Education - A Roundtable brought together technology and academic leaders to explore current applications, emerging challenges, and future directions. Panelists John Bruggeman, Consulting CISO, CBTS; Chris Treib, CIO, Geneva College; Moe Rahman, Associate Vice President/CIO, Community College of Philadelphia; and Patricia Clay, MBA, Associate Vice President and Chief Information Officer, Hudson County Community College, shared diverse perspectives on AI adoption across different institutional contexts.

The roundtable examined how AI is reshaping teaching through personalized instruction and adaptive learning platforms, supporting faculty with analytics that identify learning gaps, and enabling administrators to employ predictive models for improved retention and resource allocation. The discussion highlighted both the opportunities and the complexities of implementing AI at scale, from policy development and training to measuring impact and managing risk. Participants engaged in an open dialogue about preparing students for an increasingly AI-driven world while maintaining the human elements central to higher education.

“It was a great first experience to have with this organization's conference.”

– Michael La Fountaine
Associate Dean
Seton Hall University

From Spend to Strategy: The Value Partnership

As financial pressure and complexity increase across higher education, institutions are reexamining what "value" truly means from their technology partnerships and investments. The keynote panel, From Spend to Strategy: How Institutions and Tech Partners Deliver Measurable Value in Higher Ed, brought together organizational leaders and technology partners to explore how value is created—or lost—across the technology lifecycle.

The discussion examined how expectations around ROI have evolved, what boards and senior leaders look for when assessing technology investments, and how institutional teams and vendors share responsibility for delivering measurable results. Designed for both executives and practitioners, the panel explored organizational and cultural factors that shape outcomes and practical ways front-line managers and staff can influence success. The conversation emphasized that success today is defined not by deployment alone, but by outcomes in student experience, operational capacity, risk reduction, and long-term institutional resilience.

Building AI-Powered Student Support Systems

Meeting students where they are—anytime, anywhere—is a growing challenge for institutions with limited advising resources. In Advancing Artificial Intelligence to Enhance Education and Learning Outcomes, Paul Wang, Director, Chair, and Professor, Morgan State University, introduced iNavigator, an innovative agentic AI application designed to provide 24/7 student advising and support.

The presentation demonstrated how Wang developed this system using Vertex-AI and Google Gemini models to create agents that provide localized departmental resources not available on general generative AI platforms like ChatGPT. Attendees learned how to develop and apply agentic AI models at their institutions using a Retrieval-Augmented Generation (RAG) pipeline to build Small Language Models that address specific departmental, school, or university needs. Wang emphasized that this approach extends beyond higher education to corporate and organizational applications, and generously made the code available on GitHub for free access to all attendees.
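The retrieval step of a RAG pipeline like the one described can be reduced to: score local departmental documents against the question, then prepend the best matches to the prompt so the model answers from institution-specific sources. The sketch below is a deliberately simplistic stand-in, term-overlap scoring in place of the embedding search a real Vertex AI deployment would use, and the documents are invented.

```python
def score(question: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase terms."""
    q_terms = set(question.lower().split())
    return len(q_terms & set(doc.lower().split()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the question."""
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Ground the model by restricting it to retrieved context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Advising appointments for the CS department open every Monday.",
    "The dining hall serves breakfast from 7 to 10.",
    "CS capstone proposals are due to the department chair in March.",
]
prompt = build_prompt("When are CS department advising appointments?", docs)
```

The grounding effect is visible even at this scale: the irrelevant dining-hall document never reaches the model, which is how departmental RAG systems surface local resources a general chatbot cannot.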

Strategic Alignment Before Technology Implementation

Organizations across higher education continually launch complex, high-stakes initiatives, but many fall short of expectations not because of poor execution, but because leadership teams were never fully aligned on why the initiative mattered. In WHY Before HOW: Aligning Strategic Initiatives to What Actually Matters, Dan Miller, AVP EdgeMarket and Solution Strategy, Edge, introduced Business Value Story, an emerging strategy-to-execution alignment approach.

The framework helps organizations define and quantify the business value an initiative must deliver before determining how it will be implemented. Rather than starting with solutions, structures, or technologies, Business Value Story establishes a shared language of business value, translates strategy into specific and measurable business outcomes, and aligns cross-functional leaders around a common definition of success. Using real-world scenarios from higher education, Miller explored how institutions can reduce misalignment, improve decision-making, and accelerate execution across both technical and non-technical strategic efforts.

Conversational AI for Course Discovery

Complex course catalogs and enrollment data can overwhelm students, faculty, and advisors seeking quick answers to scheduling questions. In Ask the Course Catalog: Building a Grounded, Hallucination-Free AI for Course Discovery, Bharathwaj Vijayakumar, AVP Institutional Data & Analytics, and Jaress Loo, Director, Software Development, Rowan University, showcased Section Tally AI, a conversational agent that transforms how users interact with course information.

The presentation demonstrated how student information system data was modeled into a star schema, combining course catalog details, section schedules, enrollment counts, capacity, and instructor metadata into a unified semantic layer. This preparation allows the agent to reliably answer both lookup questions ("Which sections of college composition 1 are still open?") and aggregate questions ("How many courses are full this term in CS department?"). Vijayakumar and Loo explained how they designed the AI layer to remain grounded in data using few-shot prompting, curated examples, and controlled query patterns that prevent hallucinations. The session concluded with lessons learned, adoption strategies, and guidance for institutions seeking to move beyond static catalogs toward reliable, AI-powered course discovery at scale.
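The two query shapes named above, a lookup ("which sections are still open?") and an aggregate ("how many are full?"), both resolve against the same fact table once sections, enrollment, and capacity sit in one semantic layer. The sketch below illustrates that idea with an in-memory stand-in; the field names and data are hypothetical, not Rowan's actual star schema.

```python
# Toy fact table: one row per course section, joining catalog, schedule,
# and enrollment attributes the way the unified semantic layer would.
sections = [
    {"course": "College Composition 1", "dept": "ENGL", "section": "01", "enrolled": 24, "capacity": 24},
    {"course": "College Composition 1", "dept": "ENGL", "section": "02", "enrolled": 18, "capacity": 24},
    {"course": "Data Structures",       "dept": "CS",   "section": "01", "enrolled": 30, "capacity": 30},
    {"course": "Operating Systems",     "dept": "CS",   "section": "01", "enrolled": 22, "capacity": 28},
]

def open_sections(course: str) -> list[str]:
    """Lookup: which sections of a course still have seats?"""
    return [s["section"] for s in sections
            if s["course"] == course and s["enrolled"] < s["capacity"]]

def full_count(dept: str) -> int:
    """Aggregate: how many sections in a department are full this term?"""
    return sum(1 for s in sections
               if s["dept"] == dept and s["enrolled"] >= s["capacity"])
```

Constraining the agent to query patterns like these, rather than free-form generation, is one way to keep its answers grounded in the data.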

Cybersecurity for Everyone

Cyber risk extends far beyond IT departments—faculty, staff, and student-facing offices routinely face phishing, fraud, and increasingly convincing AI-enabled deception. In Cybersecurity Awareness: A Refresher and Best-Practices for Non-Security Personnel, Demetrios Roubos, Ed.D., M.S., CISSP, Information Security Officer, Stockton University, provided practical, university-relevant guidance designed for non-cybersecurity personnel.

This interactive refresher surveyed today's most common threat patterns and simple habits that prevent most incidents. Topics included spotting phishing and strengthening email security, online safety tips for parents and families, recognizing common scams such as elder fraud and fake job offers, detecting AI-generated content and deepfakes, and everyday protective tools like password managers and VPNs. Participants left with actionable checklists, reporting pathways, and a shared baseline of security behaviors that reduce risk across the university community.

“Best ROI of any conference I attend through the year.”

– Scott Huston
Vice President for Information Technology Services & CIO
Stockton University

Scaling Vendor Risk Management with AI

When 75% of Rowan University's VRM student analysts accepted internships with Lockheed Martin, the institution faced a sudden operational challenge. In VRM AI-Driven Compliance Playbooks: Scaling Vendor Risk Reviews with Human-in-the-Loop Assurance, Lou Belsito, Manager, Information Security Risk Management, and Mahmudul Siddiquee, Enterprise Application Architect, Software Development, shared how this challenge inspired the creation of the VRM AI-Driven Compliance Program.

At the heart of this solution is the VRM Unified Compliance Repository, a dynamic catalog of regulations, policies, standards, and procedures. Using AI, the team compares ServiceNow ticket data against this repository to generate customized Due-Diligence Intelligence Playbooks for each technology request. Each playbook includes an initial inherent risk assessment and step-by-step guidance for both student analysts and senior security managers, ensuring consistency and surfacing nuanced requirements often missed in manual workflows. This innovation enabled analysts to work asynchronously without supervision while senior managers could quickly validate outputs, transforming a bottleneck into a streamlined, repeatable process.
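The core matching step, comparing a ticket's text against a compliance repository to assemble an applicable-requirements playbook with an inherent-risk rating, can be sketched as below. This is a hypothetical simplification: the keyword rules, weights, and risk thresholds are invented, and the real program presumably uses far richer signals from ServiceNow ticket data.

```python
# Invented stand-in for a unified compliance repository: each framework
# carries trigger keywords and a coarse risk weight.
repository = {
    "FERPA":   {"keywords": {"student", "grades", "records"}, "weight": 3},
    "HIPAA":   {"keywords": {"health", "patient", "clinic"},  "weight": 3},
    "PCI DSS": {"keywords": {"payment", "card", "billing"},   "weight": 2},
}

def build_playbook(ticket_text: str) -> dict:
    """Match a ticket against the repository; return frameworks and risk."""
    words = set(ticket_text.lower().split())
    applicable = [name for name, rule in repository.items()
                  if rule["keywords"] & words]
    total = sum(repository[name]["weight"] for name in applicable)
    risk = "high" if total >= 3 else ("low" if total == 0 else "moderate")
    return {"applicable": applicable, "inherent_risk": risk}

pb = build_playbook("New vendor tool will store student grades and billing data")
```

The human-in-the-loop part sits on top of output like this: analysts work through the generated steps asynchronously, and senior managers only need to validate the result.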

“The topics were very relevant. Thank you.”

– Charles Wachira
Sr. Director, Teaching & Learning
Johns Hopkins Carey Business School

Business Intelligence for Student Success

Data transparency and literacy can be catalysts for cultural transformation. In From Insight to Impact: How Business Intelligence Transforms Student Success, Moe Rahman, Associate Vice President/CIO, and Vishal Shah, Dean, Math Science & Health Careers, Community College of Philadelphia, explored their institution's successful data-driven evolution within an academic division.

Building on the high-level architecture and strategic deployment methods presented at EdgeCon Autumn 2025, this session offered a real-world use case showcasing the tangible impact of business intelligence solutions on student success. Rahman and Shah jointly presented how they confronted significant demographic shifts by leveraging data transparency to drive measurable improvements in enrollment and retention through faculty empowerment, robust interdepartmental collaboration, student-centric practices, and a strong culture of accountability. The presentation highlighted the alignment and intersections between academic and technology strategies, demonstrating that sustainable cultural change is achievable when institutions commit to putting student outcomes at the center of every decision.

The Challenge of AI Transparency

As higher education institutions adopt AI for financial aid decisions, scholarship allocation, admissions, and advising, a critical question emerges: Can anyone actually explain how these systems make decisions? In The AI Explainability Problem: Why Transparency Is Harder Than You Think, Erica Attoe, Graduate Fellow, Schaefer Center for Public Policy, University of Baltimore, examined this challenge head-on.

Drawing from systematic analysis of over 5,500 academic articles, Attoe shared emerging consensus from scholarship: 71% of research frames explainability as an enabler of AI adoption, not a barrier. Yet despite this consensus, achieving transparency remains an open question. The presentation provided a plain-language walkthrough of current explainability tools like SHAP and LIME, and emerging approaches like blockchain audit trails. Attoe highlighted what she called the "Alice in Wonderland problem": explainability models themselves require explanation, often just shifting complexity rather than resolving it. Attendees left with a realistic understanding of the explainability landscape, what questions to ask vendors, and why this remains an unsolved problem with real consequences for students.
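
For readers unfamiliar with how tools like LIME work, the core idea is small enough to sketch: perturb the input around the instance being explained, weight each perturbation by its proximity to that instance, and fit a weighted linear surrogate whose coefficients act as local feature attributions. The sketch below is not the actual lime library; `lime_explain` and `black_box` are illustrative stand-ins.

```python
import math
import random

def lime_explain(f, x, n_samples=500, width=1.0, seed=0):
    """Fit a proximity-weighted linear surrogate to black-box f around x;
    the surrogate's coefficients serve as local feature attributions."""
    rng = random.Random(seed)
    d = len(x)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, width) for xi in x]          # perturb the instance
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x))
        X.append(z + [1.0])                                 # intercept column
        y.append(f(z))                                      # query the black box
        w.append(math.exp(-dist2 / (2 * width ** 2)))       # proximity kernel
    # Solve the weighted normal equations (X^T W X) b = X^T W y by Gauss-Jordan.
    k = d + 1
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n_samples)) for c in range(k)]
         for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n_samples)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(k):
            if r != col:
                m = A[r][col] / A[col][col]
                A[r] = [a - m * p for a, p in zip(A[r], A[col])]
                b[r] -= m * b[col]
    coef = [b[r] / A[r][r] for r in range(k)]
    return coef[:d]                                         # drop the intercept

# Stand-in "black box"; in practice this is the opaque model under review.
black_box = lambda z: 3.0 * z[0] - 2.0 * z[1]
attributions = lime_explain(black_box, [1.0, 1.0])          # ≈ [3.0, -2.0]
```

This also makes the "Alice in Wonderland problem" concrete: the surrogate's own choices (kernel width, number of samples, linearity) now need explaining too.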

Transforming Academic Operations

As colleges navigate increasing complexity in scheduling, curriculum management, and student support, building an integrated Academic Operations structure has become essential. In Success by Design: Transforming Academic Operations at MCCC, Adelina Marini, Assistant Director of Academic Operations, and Dr. James H. Whitney III, Associate Provost, Mercer County Community College, explored how their institution strategically combined multiple administrative and academic support units into one cohesive department.

The presentation highlighted how the team leveraged Coursedog to streamline scheduling, improve data accuracy, and enhance cross-departmental collaboration, ultimately reducing barriers for students. Marini and Whitney discussed the development of a unified mission rooted in access, efficiency, and student-centered decision-making, as well as the cultural and structural shifts required to bring diverse units together under a shared purpose. Participants left with practical strategies for designing an Academic Operations model that strengthens institutional effectiveness and meaningfully supports student success.

Thank you VIP Sponsors

Thank you Exhibitor Sponsors

Abnormal

Anthology

Blackboard

CBTS

Checkpoint

Coursedog

datto - Vancord

eplus

Form Assembly

Ivy Ocelot Gravyty

NetApp

Nokia

Paloalto

PKA

Purestorage

Ring Central

Sailpoint

SHI

Softdocs

Velocity Tech

Voyatek

Watermark

Thank you Lanyard Sponsor

The post EdgeCon Winter 2026 appeared first on Edge, the Nation's Nonprofit Technology Consortium.


Velocity Network

Glen Cathey is re-elected to third term on the Board of Directors of Velocity Network Foundation

The post Glen Cathey is re-elected to third term on the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Wednesday, 21. January 2026

FIDO Alliance

Passkey Ecosystem Upgrades and Improvements

As passkeys move rapidly from a promising new technology to the clear industry standard for simple and secure authentication, the passkey ecosystem continues to evolve. Read about six new capabilities implementers should know about.

Read the Article

MIXI Promotes a “Safe and Seamless Login Experience” with Passkey Deployment Across Both Consumer and Enterprise Environments

Corporate Overview

MIXI, Inc. (hereafter MIXI) is one of Japan’s leading internet companies, best known for its popular mobile game MONSTER STRIKE, among other entertainment services, with tens of millions of users. The company has also expanded into sports and lifestyle businesses, providing services that enrich the daily lives of a broad range of generations.

The company’s MIXI ID serves as a common account platform enabling users to access multiple services seamlessly. In recent years, it has also been adopted by flagship titles, continuing to grow its user base.

The Business Challenge

From the outset, MIXI ID pursued a passwordless approach, adopting an email-based one-time password (OTP) method. However, this proved insufficient against the rising threat of real-time phishing attacks, while the flow of opening an email app, retrieving a code, and entering it was cumbersome for users. For services that involve payment functions in particular, there was a strong need for a mechanism that could deliver both high authentication strength and excellent user experience.

Internally, the company also faced the challenge of balancing enhanced security with operational efficiency, while accommodating shared PC usage and continuously evolving OS environments.

Decision to deploy Passkeys

To address these challenges, MIXI introduced FIDO2-compliant passkey authentication to MIXI ID in 2024. Leveraging the WebAuthn API offered by web applications and browsers, users can now log in smoothly and password-free using the biometric authentication built into their smartphones and PCs.
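
In a WebAuthn flow like the one described, the server issues challenge options that the browser hands to navigator.credentials.get(). Below is a minimal, stdlib-only sketch of the server-side options; the helper name and the rp_id value are hypothetical, and a production deployment would use a vetted FIDO2 library rather than hand-rolled code.

```python
import base64
import secrets

def make_authentication_options(rp_id: str, timeout_ms: int = 60_000) -> dict:
    """Build PublicKeyCredentialRequestOptions for the browser to pass to
    navigator.credentials.get(). Hypothetical helper for illustration."""
    challenge = secrets.token_bytes(32)   # fresh per attempt; keep a server-side copy
    return {
        "challenge": base64.urlsafe_b64encode(challenge).rstrip(b"=").decode(),
        "rpId": rp_id,                    # the domain that owns the accounts
        "userVerification": "required",   # prompt for biometrics / device PIN
        "timeout": timeout_ms,
        # An empty allowCredentials list permits discoverable credentials,
        # which is what lets Passkey Autofill suggest a stored passkey.
        "allowCredentials": [],
    }

opts = make_authentication_options("id.example.com")
```

On a successful assertion, the server verifies the returned signature against the stored public key and checks that the signed challenge matches the one it issued; that verification step is deliberately omitted here.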

In addition, passkey authentication was made mandatory for administrative tools in the payment system, enabling stronger security operations without reliance on passwords.

MIXI also advanced its internal enterprise security environment by adopting YubiOn Portal, provided by SoftGiken (a FIDO Alliance member), together with YubiKey from Yubico (a FIDO Alliance board member). This strengthened physical security for shared PCs and logon authentication, creating a unified, cloud-managed two-factor authentication environment for both Windows and macOS. As a result, MIXI achieved both stronger authentication for shared terminal logons and greater operational efficiency.

Why FIDO was chosen

While the company also utilizes Apple and Google social logins, there were clear reasons for adopting FIDO authentication as one of its primary methods:

Trust in security and interoperability based on international standards

Smooth and practical user experience enabled by platform-provided Passkey Autofill

Strong security with biometrics combined with the convenience of passwordless login

Impact of adoption

Currently, more than 25% of MIXI ID users have registered a passkey, and adoption is steadily expanding. Helpdesk enquiries caused by issues with OTPs—such as “delays/resending of authentication codes” and “input errors”—have decreased, helping to reduce support costs.

For users, the experience of being able to log in safely and quickly is spreading, further reinforcing trust in MIXI’s authentication infrastructure.

Within the enterprise environment, the introduction of YubiOn Portal enabled a shift from ledger-based authentication management to cloud-based management, ensuring real-time visibility into the latest authentication status. It also supports Windows Remote Desktop usage and has been highly praised by employees.

Overcoming Implementation Challenges

In some early deployments at other companies, confusing error messages such as “Passkey not found” created user difficulties. MIXI avoided this issue by timing its rollout to coincide with the point at which Passkey Autofill had become sufficiently mature across major OS platforms, successfully preventing user confusion.

The adoption of YubiOn Portal required detailed policy settings, but thanks to extensive documentation and flexible configuration features, the IT team was able to implement and operate the system smoothly.

Looking ahead

MIXI expects passkey authentication to become widely adopted across services and evolve from its current optional status into a primary authentication method. The company intends to expand its use across more service areas, contributing to the realization of a passwordless society.

Finally, Ryo Ito of MIXI, who shared insights for this case study, commented:

“FIDO authentication delivers strong phishing resistance and high security, but there are still challenges such as account recovery from environments where passkeys are unavailable. It’s important to correctly recognize these issues and refer to the FIDO Alliance’s published design and implementation guidelines and checklists when adopting FIDO authentication.

As passkey authentication becomes more widespread, we are already seeing its positive impact with MIXI ID. FIDO/Passkeys are a rare technology that can simultaneously provide excellent UX and robust security at low cost. Going forward, we look forward to the evolution of the ecosystem to support an even wider variety of use cases.”

Read the Case Study

MyData

Data compliance support for MyData Global Members

We are happy to announce a new support program to help small businesses and organisations streamline and strengthen their compliance with evolving EU data regulations.  Through a new collaboration with […]

Tuesday, 20. January 2026

OpenID

Notice of Vote to Approve Proposed OpenID Federation 1.0 Final Specification

The two-week voting period will be between Tuesday, February 3, and Tuesday, February 17, 2026, once the 60-day review of the specification has been completed.

The OpenID Connect working group page is https://openid.net/wg/connect/.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/benefits-members/.

The vote will be conducted at https://openid.net/foundation/members/polls/397

Marie Jordan – OpenID Foundation Secretary


About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Notice of Vote to Approve Proposed OpenID Federation 1.0 Final Specification first appeared on OpenID Foundation.


DIF Blog

DIF Newsletter #57

January 2026

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Groups
4. Community Events
5. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Are you in APAC? For the first quarter of 2026, DIF's new Executive Director, Grace Rachmany, will be in the Singapore area. It's an opportunity for us at DIF to get to know (and possibly add) members in this part of the world. Grace is actively looking to meet our members and attend events in the area, so please reach out. Of special interest: content people (think Bollywood and KPOP) and Agentic AI, as well as anyone using DIDcomm.

Membership: Goodbye Office Hours, Hello Hot Takes!

The DIF Office Hours have trailed off, with no attendees having questions for the staff. You may be thinking: "but I still want the chance to meet new cool members of DIF!" We've got you covered. Starting in February, we will be introducing a series where we'll combine updates from the field with networking. This month SC members will be participating in MOSIP Connect and in DID Unconference Africa, and reporting back to us in the following format: 10 minutes from 2 participants on their takeaways from the conference, 10 minutes of back-and-forth (hopefully they don't agree on everything), and then half an hour for you to share your hot takes.

February 17, 9 am UTC: MOSIP Hot Takes with Markus & Juan. Live call: find it on the DIF Calendar.

March: DID Unconference Africa Hot Takes with Eric & Gideon (time to be announced in the February newsletter!)

🛠️ Working Group Updates

Browse our working groups here

Creator Assertions Working Group

CAWG crowned 2025 with Steering Committee approval of the CAWG Identity Assertion specification v1.2. The WG also drafted a User Experience Guidance document, which was approved in its most recent meeting. Trust Registries are an upcoming topic of discussion, with Darrell O'Donnell scheduled to present Ayra's Trust Registry Query Protocol (TRQP) in the coming weeks. The ToIP group exploring interoperability of digital identity containers is disbanding, and has offered to see if CAWG would be interested in adopting their X.509 - Verifiable Credentials Interoperability Draft specification. In the coming weeks, three additional task force workstreams will be launching for General Purpose VCs, ACDC vLEIs, and Relationship Matrix vLEIs.

👉 Learn more and get involved

Hospitality and Travel Working Group

The Hospitality and Travel Working Group will be issuing a survey to different types of hospitality, travel, and entertainment providers to understand their identity needs. The data will be collected using Google surveys under the DIF domain with a DIF working group email. The 2FA requirements on Google made everyone involved wish that Google would implement more decentralized authentication protocols for multi-user accounts.

The WG is working on an article for Phocuswright, and a schema that is planned for draft review by February 1. The WG has been engaging in deeper discussions on details of the schema given the complexity and variation in the kinds of information they expect to collect.
👉 Learn more and get involved

Trusted AI Agents Working Group

The Trusted AI Agents Working Group is launching a Delegatable Authorization Task Force, and has begun by creating a content outline to define the output of a report that will be created by the Task Force. Quite a bit of analysis, prior art, and academic work has already been collected by members of the WG, and the document will get a jumpstart based on collating those free-form conversations into a grounding overview. More hands-on work prototyping the targeted use-cases is also starting to get underway, using existing protocols and frameworks.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group meeting focused on reviewing the status of various DID methods and their progress through the recommendation process. Currently, 11 DID methods are at different stages of approval, but only 2 are in active review, so other DID method champions are welcome to schedule "deep dives" and parallelize the review. On slow weeks, time is balanced between issue review (on the process itself) and the open PRs for active-review methods with ongoing participation.

👉 Learn more and get involved

Identifiers and Discovery Working Group

The Identifiers and Discovery Working Group enjoyed the holidays.

👉 Learn more and get involved

🪪 Claims & Credentials Working Group

The Claims and Credentials WG has been taking a deep-dive into creating a "dogfooding" implementation, issuing DIF membership credentials. The group is considering the FPP approach and the Verifiable Community approach. The incoming ED clarified that the working groups should be the authorities on active membership credentials, rather than centralizing that power in the ED, who doesn't have a real sense of how to assess active contributors. The group also discussed concerns expressed by international participants around going to conferences in the United States, due to evolving entry requirements.

👉 Learn more and get involved

Applied Crypto Working Group

The BBS+ Work Item of the Applied Crypto WG discussed the progress of an advanced, yet little-known method called did:webplus, which is being finalized to incorporate both Blake and SHA-3 keys and signatures (on verified updates). The current draft RFC for BBS+ signatures at the CFRG WG at IETF has been reviewed and discussed by the WG chairs.

👉 Learn more and get involved

DIDComm User Group

The DIDComm User Group hosted a presentation by Entidad CTO Jorge Flores, demoing a working implementation of DIDComm for a decentralized fintech platform, using DIDComm for group chat and private messaging between their web and mobile Unmio clients. DIF is actively looking for more DIDComm implementations we can demonstrate to show the viability of DIDComm and its place in the wider landscape of international standards, so please reach out to the chairs if you have a demoable project at any stage of development.

👉 Learn more and get involved

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.

🌎 DIF Special Interest Group Updates

With the holidays this was a slow month. We'll update on the SIGs in February. Browse our special interest groups here


DIF Hospitality & Travel SIG

👉 Learn more and get involved

DIF China SIG

👉 Learn more and get involved

APAC/ASEAN Discussion Group

👉 Learn more and get involved

DIF Africa SIG

👉 Learn more and get involved

DIF Japan SIG

👉 Learn more and get involved

DIF Korea SIG

👉 Learn more and get involved

📖 DIF User Group Updates
DIDComm User Group

👉 Learn more and get involved

Veramo User Group

👉 Learn more and get involved

📢 Upcoming Events

Will you be attending any upcoming Identity events? Let us know so other DIF members can find you!

MOSIP Connect 11-13 February, 2026 (Morocco)

Two-day agenda and one-day unconference for the MOSIP community. Expect to see Steering Committee member Markus Sabadello (DanubeTech) and DIF staffer Juan Caballero participating on stage at the conference, as well as in the day-long unconference facilitated by DIF member Kaliya Young. On February 17th, Markus and Juan will be giving their Hot Takes on MOSIP; see the DIF calendar for details.

DID Unconference Africa 24-26 February, 2026 (South Africa)

DID:UNCONF AFRICA brings together local and international innovators, leaders, and activists to reshape the future of digital identity. This event fosters innovation, collaboration, and interoperability, making a significant impact on the inclusive development of digital identity in Africa. For the second year running, DIF will be sponsoring the event. Expect to see Steering Committee Member and CAWG Co-chair Eric Scouten in attendance. Eric and Africa SIG Chair Gideon Lobard will be giving us their Hot Takes in March. Watch the February newsletter and DIF calendar for exact time and date.

ITB Berlin, 3-5 March, 2026 (Berlin)

DIF Member Alex Bainbridge (Autoura) will be speaking about identity at the world's largest travel conference.

IETF 125 Shenzhen, 14-20 March, 2026 (Shenzhen)

Our new Executive Director, Grace Rachmany, will be attending IETF 125 this year in APAC.

Internet Identity Workshop IIWXLII #42

📅 April 28–30, 2026
📍 Mountain View, CA
Registration and details

Agentic Internet Workshop #2

📅 May 1, 2026
📍 Mountain View, CA
Learn more

Identiverse 2026

📅 June 15–18, 2026
📍 Las Vegas, NV
Conference details

Identity Week Europe 2026

📅 June 9–10, 2026
📍 Amsterdam
Event information

Call for Co-organizers: GDC 2026

The 2026 Global Digital Collaboration Conference has been announced for September 1-2, 2026, in Geneva. Entities who wish to participate as co-organizers to co-create the agenda can apply here.

Authenticate Conference 2026

📅 October 19–21, 2026
📍 Carlsbad, CA
Details coming soon

🗓️ DIF Members

📻 DIF Labs Co-Chair Daniel Thompson-Yvetot gave a great overview of the new EU Cyber Resilience Act and how much of a burden it is or isn't for various types of software manufacturers on an episode of the Open Source Security podcast.

👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website. DIF membership is free for individuals and companies with up to 1000 employees.

To get updates about DIF, follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

🔍 Read the DIF blog


Digital ID for Canadians

Spotlight on Dabadu.ai

1. What is the mission and vision of Dabadu.ai?

Digital identity will transform the global economy by enabling instant verification, reducing friction, strengthening fraud prevention, and supporting cross-border trust. In Canada, this transformation is particularly important given the country’s leadership in privacy protection, interoperability, and standards-based frameworks.

Dabadu embeds digital identity directly into dealership and lender workflows rather than treating it as a separate step. Our ID Verification and TrustShield services operate within the sales and finance process, helping ensure every transaction meets lender-grade compliance requirements while safeguarding consumer data. By integrating identity, compliance, and workflow automation, we help automotive organizations reduce risk, eliminate manual verification, and scale digital operations with confidence.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Trustworthy digital identity is a cornerstone of any digital economy. It enables organizations to confidently verify who they are interacting with, reduces fraud, and builds consumer confidence in digital transactions.

In industries like automotive retail and financing, where high-value transactions, sensitive personal data, and regulatory obligations intersect, identity assurance is not optional. Without trusted digital identity, digital transformation cannot scale safely or sustainably. At Dabadu, we view identity as foundational infrastructure: the connective layer that enables secure, compliant, and efficient digital experiences across the entire customer journey.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital identity will transform the global economy by enabling instant verification, reducing friction, strengthening fraud prevention, and supporting cross-border trust. In Canada, this transformation is particularly important given the country’s leadership in privacy protection, interoperability, and standards-based frameworks.

Dabadu embeds digital identity directly into dealership and lender workflows rather than treating it as a separate step. Our ID Verification and TrustShield services operate within the sales and finance process, helping ensure every transaction meets lender-grade compliance requirements while safeguarding consumer data. By integrating identity, compliance, and workflow automation, we help automotive organizations reduce risk, eliminate manual verification, and scale digital operations with confidence.

4. What role does Canada have to play as a leader in this space?

Canada is uniquely positioned to lead globally in digital trust by demonstrating how innovation can coexist with strong privacy protections, transparency, and user control. Through initiatives like the Pan-Canadian Trust Framework and the leadership of DIACC, Canada is establishing a practical, interoperable model for digital identity.

As a Canadian company, Dabadu is committed to advancing this leadership by translating these principles into real-world industry adoption. We embed trust, privacy, and compliance directly into automotive workflows, ensuring that digital identity is not theoretical but operational, measurable, and impactful.

5. Why did your organization join the DIACC?

We joined DIACC because we share a common vision: a Canada where digital interactions are secure, privacy-preserving, and trusted by design. DIACC’s collaborative approach, bringing together government, industry, and innovators, is essential to building a resilient digital identity ecosystem.

For Dabadu, membership represents both a responsibility and an opportunity: to align our technology with nationally recognized trust frameworks, to contribute industry insight from the automotive sector, and to help accelerate adoption of trusted digital identity across high-impact commercial use cases.

6. What else should we know about your organization?

Dabadu.ai is an automotive technology ecosystem that unifies CRM, digital retailing, identity verification, and lender connectivity within a single platform. By reducing reliance on fragmented systems, Dabadu enables dealerships and lenders to operate from a shared, trusted source of data, from lead intake and identity assurance through credit submission and funding.

Our platform is designed to ensure that every step of the customer journey is secure, compliant, and data-driven. With built-in identity verification, fraud prevention, and lender integrations, Dabadu is helping modernize automotive retail through a privacy-first, trust-centric approach. As a DIACC member, we are proud to contribute to Canada’s leadership in practical, interoperable digital identity adoption.

Quote from Pulkit Arora, Founder & CEO, Dabadu.ai

“You can’t build trust on disconnected systems. We built Dabadu to unify identity, compliance, and credit into a single trusted automotive ecosystem, one that works for businesses, lenders, and consumers alike.”


Velocity Network

Mark Baglia joins the Board of Directors of Velocity Network Foundation

The post Mark Baglia joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

MyData

MyData, on MyTerms

We all know the importance of empowering people with their data. And the benefits that come from that. That’s why we support the MyData organisation, declaration and principles. We also […]

2025: From Principles to Power

As we move into 2026, one thing stands out about the year that just closed. 2025 was the year when we could no longer pretend that good principles alone would […]

Monday, 19. January 2026

Velocity Network

Joan Beets joins the Board of Directors of Velocity Network Foundation

The post Joan Beets joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Friday, 16. January 2026

FIDO Alliance

Security Boulevard: Driving Passwordless Adoption with FIDO and Biometric Authentication

The Passwordless Imperative

For decades, passwords have been the default mechanism for securing digital access. They are deeply embedded in enterprise systems and workflows, yet they were never designed to withstand today’s threat landscape.

Passwords are easy to steal, easy to reuse, and costly to manage at scale. Despite years of awareness training and layered defenses, credential-based attacks remain one of the most common causes of security breaches. At the same time, password resets continue to consume a disproportionate share of IT support resources, slowing productivity across the organization.


Biometric Update: Maker builds FIDO2-compliant LionKey USB dongle for passwordless security

With their fiddly and indirect nature, one-time passwords (OTPs) are a curse of modern life. They’re a security risk and outdated. Frustrated, a maker has built a physical security key that’s compliant with FIDO2.


Cybersecurity Market: Bitwarden Doubles Down on Identity Security as Passwords Finally Start to Lose Their Grip

Bitwarden’s latest round of product updates reads less like a feature dump and more like a quiet assertion that identity security is finally maturing into something operational, measurable, and—crucially—fixable. Long positioned as an open, zero-knowledge alternative in the password manager market, Bitwarden is now pushing beyond storage and toward decision-making: seeing credential risk clearly, prioritizing it intelligently, and nudging humans toward action without turning security into another productivity tax.

That shift matters. Credential abuse remains the front door for most breaches, yet remediation still drags, stalled by poor visibility and employee friction. Bitwarden Access Intelligence, now generally available, tackles that gap head-on by mapping weak, reused, or exposed credentials directly to business-critical applications, then guiding users through the correct update flows. Nine days to fix a known credential issue is an eternity in attacker time; collapsing that window is less glamorous than AI SOC slogans, but far more consequential.

Even at the individual level, vault health alerts and password coaching quietly reinforce better hygiene where it actually happens—inside browsers and apps—addressing the stubborn reality that awareness alone doesn’t stop reuse, especially among younger users who already know the risks but still fall back on convenience. We’ve all been there, honestly.

Thursday, 15. January 2026

Velocity Network

Kymberly Lavigne-Hinkley joins the Board of Directors of Velocity Network Foundation

The post Kymberly Lavigne-Hinkley joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Wednesday, 14. January 2026

Internet Safety Labs (Me2B)

AI Agent, AI Spy – Signal Talk from the 39th Chaos Communication Congress

Once again, great minds at Signal strike at the heart of impending catastrophic collapse of privacy.

I love this talk from the 39th Chaos Communication Congress (December 2025) by Meredith Whittaker and Udbhav Tiwari so much. Here are my favorite things:

It highlights the “down the stack” progression of unavoidable surveillance functionality into OS and hardware. The closer to the metal, the greater the data purview and potential risk: surveillance at the hardware layer can surveil every user of the device, as well as everything those users do on the machine. This is why governance needs to apply different duties to different types of digital products and components.

I also really like how Whittaker dives into what it is to be an “agent”, and agentic AI’s insatiable need for context. If the task scope is narrow, the context is narrow, but in the world of “robot butlers”, as Whittaker calls them, the context is broad, thus requiring “everything about me” in order to perform a wide variety of tasks. Herein lies the need for unfettered surveillance. It’s staggering that we might consider ceding “everything about me” to commercial tech makers who have <checks notes> never acted in a trustworthy fashion and never will so long as digital product safety remains unregulated. Capitalism favors the manufacturer and exploits natural resources and humans, as both customers and laborers.

We could and perhaps should rename “AI” to “amplified intensity” of digital product safety risks to humans, because it amplifies those risks with alarming alacrity. In my Enigma talk last year, I described it as pouring gasoline on a privacy dumpster fire. This CCC talk concretizes just a few of the risks, with a special focus on the even more amplifying risk effect of the Model Context Protocol (MCP)—i.e. the lingua franca (or maybe more accurately, lingua francas[1]) for AI agents to talk to each other. I’m reminded of that old Faberge Organics shampoo commercial. What could possibly go wrong with unending autonomous communication between unknown third parties?

They highlight prompt injection attacks, noting that MCP “standardizes the exfiltration path for attackers.” Nifty.

Whittaker clarifies the difference between deterministic software and probabilistic software, demonstrated in her explanation of “The Mathematics of Failure”. When each step in a technology process chain behaves at even 95% accuracy, the downstream 30-step outcome is not 95% overall accuracy but a horrifying 21.4% likelihood of success (remember multiplying fractions?). Nearly every agentic task created so we can enjoy our “robot butlers” will have at least thirty steps. Who on earth would back a product with such a poor accuracy outlook?

Which leads us to the overinvestment/AI-hype situation we find ourselves in. With trillions upon trillions of dollars being invested into this technology (because apparently we’re too feeble to actually do Things; or we’re so amazing that our time needs to be spent on perfuming the world with our own special brand of greatness, pick your poison), there is literally no break-even point on the horizon. Once again, the Amplifying Intensity and impact of AI: too big to fail on steroids.

They emphasize that there is not an obvious root fix, but they offer three “band-aids”:

Stop reckless deployment. (I cannot believe we’re still in the move-fast-and-break-things epoch. Capitalism knows no shame.)

Privacy by default. They phrase it as inverting the permission model from opt-out to opt-in [to surveillance]. Unfortunately, we have reified opt-out in law (CPRA, I’m looking at you). They’re right, of course. At Internet Safety Labs (ISL) we have made privacy by default a core principle for a digital product to be regarded as “safe”.

Transparency. In the talk, they’re mainly focused on transparency of agent behavior, and of course that’s necessary. Heck, our entire mission is built on the premise that transparency drives safer technology and manufacturer accountability.
But I have two concerns about this particular transparency: (1) we know quite a lot about transparency at ISL (given our production of safety labels https://appmicroscope.org), and it seems that we might be careening inexorably towards a transparency deluge, the likes of which will make current privacy policies seem like, well, AI generated summaries. (2) Transparency isn’t going to be overly helpful in a world of unbounded, probabilistically behaving software agents.

When it comes to software: Complexity + Time + Probabilistic Behavior = Increasingly unknowable, unpredictable chaos
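The “Mathematics of Failure” arithmetic is easy to check for yourself; this is just a quick sketch of the compounding, not anything from the talk itself:

```python
def chain_success(per_step: float, steps: int) -> float:
    """End-to-end success probability of a pipeline whose steps
    each succeed independently with probability `per_step`."""
    return per_step ** steps

# 30 steps at 95% per-step accuracy leaves roughly a 21% chance
# that the whole chain succeeds -- the figure cited from the talk.
print(round(chain_success(0.95, 30), 3))  # → 0.215
```

The multiplication is merciless: even bumping per-step accuracy to 99% only gets a 30-step chain to about 74%.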

We heard Facebook engineers admit five years ago that data flow was already unknowable for them—it was deterministic, but it wasn’t a closed system, ergo, unpredictable.

Which isn’t to say these band-aids aren’t valuable. They are. And there are other things we could do if we were serious about privacy, such as ban the selling or sharing for consideration of personally identifiable information. A person can dream.

Meanwhile, I count the world lucky to have people like Meredith and Udbhav calling out “AI” truths in a powerful, accurate, and highly understandable way.

[1] Francae? Plural. Because there is no world where a single one wins out. I hope.

The post AI Agent, AI Spy – Signal Talk from the 39th Chaos Communication Congress appeared first on Internet Safety Labs.


Velocity Network

Colin Strasburg joins the Board of Directors of Velocity Network Foundation

The post Colin Strasburg joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Next Level Supply Chain Podcast with GS1

Looking Beyond 2026: The Next Big Breakthroughs in Supply Chains

In this episode, Reid Jackson and Liz Sertl sit down with Bob Czechowicz and Nick Latwis from the GS1 US Innovation Team. They discuss the most promising trends, the challenges businesses face when navigating new tech, and the critical role of pilots in testing the viability of these innovations. Gain actionable insights into how companies can successfully experiment with new technologies and drive meaningful change in their supply chains.

In this episode, you'll learn:

Key emerging technologies in supply chains

How to use AI as a creative partner for innovation

The indicators teams use to decide when a pilot is ready to scale

Things to listen for: (00:00) Introducing Next Level Supply Chain (02:54) Emerging technologies in the next three to five years (06:48) Using data to decide which trends matter (16:00) How to design and conduct a pilot test (19:52) Determining market readiness for new technology (27:38) Nick & Bob's favorite tech

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn. Register now for this year's GS1 Connect and get an early bird discount of 10% when you register by March 31 at connect.gs1us.org.

Connect with the guests: Nick Latwis on LinkedIn | Bob Czechowicz on LinkedIn


Blockchain Commons

Blockchain Commons 2025 Overview

What happened at Blockchain Commons in 2025? It turned out to be a very busy year, with us advancing several totally new technologies, as well as continuing on with some of our biggest ongoing priorities.

Here’s the year in review!

Community Support

As we wrote in 2024, our goal is “the creation of open, interoperable, secure & compassionate digital infrastructure”, but we can’t do that on our own.

That’s why Gordian Developer meetings are an important part of our regular schedule. They offer us the opportunity to talk with the community, discover its priorities, and adjust our own work to fit those needs.

We’re even more thrilled when community members begin actively supporting and expanding our technologies, and we saw a lot of that in 2025. That community work included Typescript libraries for our full stack, a UR Playground, and a full IDE for Blockchain Commons. Thanks, folks, we love working with you!

For more, see our Meetings developer page and subscribe to our Gordian Developer announcements, either through our mailing list or Signal group.

ZeWIF

Early in the year, we also became involved with a totally new community: the Zcash developers community. Based on a Zcash Community Grant, we designed and developed ZeWIF, an interchange format for Zcash wallets.

This sort of interoperability is very important to Blockchain Commons, because it ensures the independence of users and the openness of the ecosystem, and those are both Gordian Principles.

But, we were particularly thrilled by our work with ZeWIF because it allowed us to work with a new digital-asset ecosystem. Traditionally, Blockchain Commons has been focused on Bitcoin, but we’re well aware that there are lots of other digital assets that could make good use of our specifications, and so we were happy to begin this expansion of our work.

The idea of supporting other digital-asset ecosystems has already continued into 2026, when our first Gordian Developer meeting of the year included discussion of how to expand Known Values to better support assets other than Bitcoin. (We’re also hoping to continue work with the Zcash community, which we met while working on ZeWIF, to help them integrate other Blockchain Commons specifications.)

For more, see our ZeWIF developer page.

FROST

Our biggest and most important work of the year was probably with FROST, a threshold signing system built on Schnorr signatures.

This year, we moved our FROST work from its supportive role of 2023-2024, which saw us hosting a variety of FROST meetings, to a more hands-on approach, which allowed us to produce some capstone work on the topic, including software expansions, demos, and a whole (short) course.

That work focused on ZF FROST, a FROST library and CLI created by the Zcash community. We wanted to show its general applicability, which we did in a number of ways.

First, we held a pair of meetings to demonstrate those wider capabilities. We showed that it can be used for more than just Zcash by using it to sign Bitcoin transactions (meeting links). Then, we showed how signing can be done using one of our new technologies, Hubert, even when a reliable network doesn’t exist (meeting links).

We also put together a short course that introduces FROST and demonstrates how it works with hands-on examples: it’s called “Learning FROST from the Command Line” and is a parallel to our (out-of-date but popular) “Learning Bitcoin from the Command Line” course.

Finally, we built a number of tools to support ZF FROST and its various capabilities, all of which are documented in the “Learning FROST” course. That includes a standalone app, the frost-verify tool, which can verify FROST signatures (as long as everything matches the format used by ZF FROST).

Thank you as ever to HRF, who has supported all of our FROST work. We hope we’ve been able to create a strong library of work to help people understand and utilize FROST.

For more see our FROST developer page, which has links to that library of work.

Gordian Clubs

Onward to new technologies, which we hope will allow Blockchain Commons to continue its innovation into 2026 and beyond …

The first tech that Blockchain Commons premiered in 2025 was the Gordian Club, which is an autonomous cryptographic object (ACO). That means it’s self-contained, with access determined by mathematical means. The protected and self-contained ACO envelope is then a carrier of information, whether that be identity details, a credential, news, or information about a gathering.

Because it’s autonomous, the Gordian Club doesn’t require infrastructure. If the ‘net is down due to a disaster, or if you are being censored, you can still use a Gordian Club to transmit information. And that’s really the point: to preserve agency when infrastructure fails.
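The core idea — access gated purely by key possession, with no server in the loop — can be illustrated with a deliberately tiny toy. This is not the Gordian Club format (which builds on Gordian Envelope); the hash-based XOR stream here is purely illustrative:

```python
import hashlib
import os

# Toy illustration of an "autonomous cryptographic object": a sealed
# blob that anyone can carry, but only a key-holder can open -- no
# network, no server, no infrastructure required.
def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the key (SHA-256 in counter mode)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

unseal = seal  # XOR stream: sealing and unsealing are the same operation

key = os.urandom(32)
club = seal(key, b"meeting moved to thursday")
# The blob alone reveals nothing; the key opens it entirely offline.
assert unseal(key, club) == b"meeting moved to thursday"
```

The point of the sketch is the access model, not the cipher: whoever holds the key can open the object anywhere, even when the net is down.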

We’ve written a Musings on Gordian Clubs, released a CLI and a Rust library, and demoed how it worked at our October Gordian meeting.

For more see our Gordian Clubs developer page.

Hubert

So how do you transmit a Club (or other data) when there’s no reliable infrastructure? There are lots of solutions. You could use Bluetooth. You could give someone an NFC token or a thumb drive. You could even publish a QR code in a newspaper. But we often need solutions that allow much faster back and forth and that can be automated. That’s where our next technological advance of 2025 comes in: Hubert, the Dead-Drop Hub.

Hubert takes advantage of existing distributed storage systems (BitTorrent, IPFS), combined with Gordian Envelope features including GSTP, to allow secure and private communication that can’t be censored or spied upon by a centralized server.

We’ve produced a CLI for using Hubert (demo) as well as a CLI specifically for conducting FROST ceremonies with Hubert (demo). Check out our BCR research paper for more on the system!

For more, see our Hubert developer page.

Provenance Marks

Provenance Marks are a concept that Blockchain Commons Lead Researcher Wolf McNally has been playing with for a few years, but brought to Blockchain Commons at the start of 2025.

A provenance mark is a forward-commitment hash chain: each mark uses a cryptographic hash to commit to key material that will appear in the next publication bearing a mark in the chain. When that next publication reveals the key corresponding to the hash, you know it is the authentic and authorized next link in the chain. This is all done without a blockchain or other external reference, ensuring the independence of the publication.
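The forward-commitment idea can be sketched in a few lines. This is a hypothetical simplification, not the actual Provenance Mark wire format: each mark reveals its own key and commits, via a hash, to the key the *next* publication will reveal.

```python
import hashlib
import os

# Hypothetical sketch of a forward-commitment hash chain.
def commit(key: bytes) -> bytes:
    """Commitment to a key is simply its SHA-256 hash."""
    return hashlib.sha256(key).digest()

key1, key2, key3 = (os.urandom(32) for _ in range(3))

mark1 = {"key": key1, "next": commit(key2)}  # publication 1
mark2 = {"key": key2, "next": commit(key3)}  # publication 2

# A verifier links publication 2 back to publication 1 by checking
# that the newly revealed key matches the earlier commitment --
# no blockchain or external reference needed.
assert commit(mark2["key"]) == mark1["next"]
print("link verified")
```

Because only the original author knows the next key before it is published, a forger cannot produce a valid next link in the sequence.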

Provenance marks are useful because they can prove the authenticity of a sequence of works. You want to know art is all by a specific artist? That writings or newsletters are all from a specific creator or group? A provenance mark can verify that, providing truth in a world of AI and deepfakes.

Provenance marks can also be used with Gordian Clubs. Clubs have a mechanism allowing for updates of their information over time. Provenance marks tell you that the updates were from the original author (or some designated party).

We’ve produced a CLI for provenance marks and gave a presentation. Also see our research paper.

For more, see our provenance mark developer page.

XID

We actually introduced XIDs in December 2024, but we were able to better feature the new technology in 2025. A XID is, quite simply, an “extensible identifier”. It’s a specific format for documenting a stable decentralized identifier that can be self-sovereign.

We felt there was a need for XIDs in part because of the failure of the self-sovereign identity community but also because we wanted to offer the foundation for a decentralized digital identity that had redaction capabilities. That comes courtesy of Gordian Envelope. You can fill a XID Document with identity information, but then you can choose who gets to see specific details by using Envelope’s elision capabilities.

We demoed XIDs last year, just before we released our research paper. This year, we added considerable XID functionality to our envelope-cli tool. Just type “envelope xid -h” for a list of options. We’ve also been working on and off on a XID Quickstart, which includes overviews of much of our technology and tutorials for XID, but it’s still in process. (Fundamentally, we haven’t been able to give it much priority due to the lack of a sponsor for XIDs. If you think XIDs might be a great fit for your company, talk to us about a partnership that would allow us to prioritize the work!)

For more see our XID developer page.

Revisiting SSI

Finally, Blockchain Commons kicked off a new initiative at the end of 2025, one that represented a long-term interest: Revisiting SSI.

We’ve been involved with self-sovereign identity (SSI) from the start. Christopher Allen chose and popularized the term in his 2016 article, “The Path to Self-Sovereign Identity”, then he shepherded its growth through his Rebooting the Web of Trust workshops.

Blockchain Commons has never had a sponsor to support our self-sovereign identity work, but we’ve nonetheless increasingly dabbled in it in recent years. Besides our work with XIDs, Christopher also gave interviews to SSI Podcast and HackerNoon last year and presented to Switzerland on Swiss e-ID and at TabConf 7. A lot of that included advocacy for better self-sovereign identity, which also led to Blockchain Commons signing the No Phone Home initiative. Finally, Christopher wrote some related Musings last year, on Fair Witnessing, on topics related to GDC25, and on the anchors for preserving sovereignty and autonomy that we suggested to Switzerland.

Our new Revisiting SSI project will be even more expansive than all of that 2025 work. The goal is to look at the principles that Christopher laid out in his 2016 article, and to see what’s worked, what hasn’t, and how we can redefine them to best evolve SSI over the next decade. The first meetings were held on December 2 & 9, and we’re continuing that work through 2026, which is the 10th anniversary of SSI.

For more see the Revisiting SSI website.

What’s Next?

We can’t imagine that we’ll have nearly as much new technology in 2026: though we laid solid foundations for our newest initiatives last year, now we need to help engineers turn them into reality! (If you’re interested in any of our new techs, please let us know.)

Beyond that, we have a number of other topics that we expect to continue into the new year:

The heart of our Revisiting SSI workshopping will occur from January through April, leading up to the 10th anniversary on April 26. Afterward, we hope to generate some papers, much as was done at Christopher’s Rebooting the Web of Trust workshops.

The XID Quickstart will likely finally get done this year, complete with tutorials and an introductory look at all of Blockchain Commons’ core concepts.

We’re considering an update of the Developer Web Pages to try and bring some order to the rather large set of specifications we now have. (We did that a few years ago, but the ordering we chose at the time hasn’t stood up to the introduction of new technologies.)

And there will definitely be new stuff too, as we talk with the community and seek out new grant opportunities.

Unfortunately, Blockchain Commons is currently running in the red due to a loss of patrons during the crypto-winter. We’re always seeking sponsors, but even more, we’d love to work with you if you are considering adopting Blockchain Commons specifications. Talk to us if you’re interested!

Gordian Developer Meetings (2025): Post-Quantum (March), Interoperability (May), Provenance Marks (June), FROST-CLI (August), Gordian Clubs (October), Exodus Protocols (November), FROST & Hubert (December). ZeWIF Meetings (Early 2025): Meeting #1 (January), Zmigrate Demo (February), Meeting #3 (March), Meeting #4 (April). Revisiting SSI Meetings (Late 2025): Kickoff #1 (December), Kickoff #2 (December).

Tuesday, 13. January 2026

OpenID

2026 OpenID Foundation Board of Directors Election Results

I want to kindly thank all OpenID Foundation members who voted in the 2026 elections for representatives to the OpenID Foundation Board of Directors.

Per the Foundation’s Bylaws and as of December 1, 2025, there was one Corporate Representative seat and two Community Representative seats up for election in 2026. 

Each year Corporate members of the Foundation elect up to two members to represent them on the Board for one-year terms, with all Corporate members in good standing eligible to nominate and vote for candidates. Unfortunately, we are shrinking to one Corporate Representative seat in 2026 due to the number of Corporate members as of December 1st.

I am pleased to welcome Mark Verstege back to the board of directors again in 2026 as the Corporate Representative. In addition to his professional roles in Australia, Mark has supported the Foundation in many ways. To name a few, Mark is a founding co-chair of the Ecosystem Support Community Group, Board representative to the Bank for International Settlements “Project Aperta,” volunteer leader of the FAPI 2.0 submission to ISO as a Publicly Available Specification (which passed the ISO ballot in December 2025), and an active contributor to the Strategic Task Force and the Australian Digital Trust CG. I am sure the Board will benefit greatly from his leadership again this year.

I want to thank Atul Tulshibagwale for his many contributions to the Board of Directors as a Corporate Representative in 2025, including stepping up to participate in a number of board subcommittees such as the Strategic Task Force and the Mission/Vision Subcommittee. Atul continues to co-chair the Shared Signals WG, is a founding co-chair of the AIIM CG, and was recently elected co-chair of the AuthZEN WG. He has also taken on considerable work for the Foundation by leading multiple interop events for the Shared Signals WG, work which supported the first three Shared Signals specifications going to final in 2025. I am sure Atul’s positive impacts will carry forward in his leadership of these three WGs/CGs in 2026.

Four individual members represent the membership and the community at large on the board as Community Representatives, with offset-terms. Nat Sakimura and John Bradley have one year remaining on their two-year terms and I look forward to their continued contributions in 2026.

I am delighted to welcome back Dima Postnikov and Mike Jones, who were elected to two-year terms as Community Representatives starting in 2026: Mike was re-elected to another two-year term and Dima to a new one. I want to thank Dima for his ongoing support as Vice Chairman of the Board, a role he has committed to serving in 2026 subject to board confirmation. Dima has actively embraced his Vice-Chair role, representing the Foundation at World Bank, Western Balkan, and SIDI Hub events, and speaking at other OIDF events, while also taking on the mantle of FAPI WG co-chair, Australian Digital Trust co-chair, and Ecosystem Support CG co-chair. Similarly, I want to thank Mike for his long-standing support and contributions to the Foundation as a board member, WG co-chair, mentor to the Secretary, Board representative to the Certification team, and in many other roles as AB/Connect co-chair (including hosting the Federation interop last spring). Both Dima and Mike have also been active contributors to the Executive Committee and the Strategic Task Force. I look forward to continuing work with Dima and Mike in 2026.

This election does come with a significant loss in that George Fletcher was not re-elected as Community Representative. George far predates my time at the Foundation, so I cannot do his many contributions to the Board and to the WGs the justice they deserve, but I do know that they will have a lasting impact. George has always brought a unique perspective to Board deliberations, and he is the embodiment of the OIDF culture of consensus building and integrity. I want to thank George for his many years of service to the Foundation, especially as a consistent thought leader on the Board. I know George plans to remain very active in several OIDF WGs/CGs like DADE, AIIM, and eKYC & IDA, and with luck he will also bring his experience and leadership to special projects in the months ahead.

Please join me in thanking Atul and George for their commendable service and contributions to the Foundation.

And join me in thanking Dima, Mike, and Mark, as well as the entire board, for their service and support of the Foundation and the community at large. Here’s to a successful 2026!

 

Gail Hodges
Executive Director
OpenID Foundation

 

About the OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post 2026 OpenID Foundation Board of Directors Election Results first appeared on OpenID Foundation.


EdgeSecure

Edge Names Christopher R. Markham President and CEO Following Nationwide Search


NEWARK, NEW JERSEY, January 13, 2026 – Edge, a leading member-owned nonprofit provider of high-performance optical fiber networking and advanced technology solutions, announced today that Christopher R. Markham, Ph.D. (c), has been selected to serve as President and Chief Executive Officer, effective immediately.

Markham has served as Interim President and CEO since October 1, 2025, following the retirement of Samuel Conn, Ph.D. His selection follows a comprehensive nationwide search conducted by Edge’s Board of Trustees to identify a leader capable of advancing the organization’s mission while building on its strong foundation of innovation and member service.

On Thursday, January 8, the Chair of Edge’s Board of Trustees, Dr. Stephen Rose, formally notified Markham that the Board had reached unanimous agreement to confirm his leadership as President and Chief Executive Officer. The decision reflects the Board’s confidence in Markham’s strategic direction, operational leadership, and stewardship of the organization during a period of transition and growth.

“I am deeply grateful to President Rose, the Executive Committee, and the full Board of Trustees for the trust they have placed in me,” said Markham. “Edge is a mission-driven organization with a strong legacy and an even stronger future. I approach this role with humility, a deep sense of responsibility, and a commitment to thoughtful stewardship in service of our members, our community, and the long-term health of the network.”

Markham brings more than 25 years of executive leadership experience across higher education, research networks, government, and the private sector. Since joining Edge in 2018, and previously serving as Executive Vice President of Operations and Chief Economic Development Officer, he has played a central role in advancing Edge’s evolution into one of the nation’s most respected and expansive research and education networks.

Working in close collaboration with Edge’s leadership team, Markham has led and supported the team's expansion of multi-state GigaPOP connectivity, anchored at Princeton University and Rutgers University, with critical hubs in Philadelphia and Manhattan. These investments have strengthened high-performance research networking for R1 universities, medical centers, and federal research partners, while positioning Edge as a nationally trusted leader in digital infrastructure.

Markham’s career reflects a consistent focus on aligning advanced technology with institutional research priorities, economic development, and mission-driven outcomes. He emphasizes comprehensive operational strategy, financial stewardship, digital transformation, and infrastructure modernization—capabilities essential to supporting Edge’s diverse and growing membership.

A scholar and educator, Markham has been actively engaged in academically rigorous, peer-reviewed research dissemination since 2013, with a focus on economic policy, technology, and institutional transformation. He is currently completing his doctoral dissertation titled “Policy Sequencing in the Age of Artificial Intelligence: How GPT Diffusion Shapes the Timing of Redistributive Interventions.” He has also served as an adjunct professor of economics, teaching at institutions ranging from community colleges to research universities.

From 2000 to 2021, Markham served in both the U.S. Army Active Duty and Reserve components, progressing from an enlisted technology engineer to a commissioned officer in military intelligence and battalion signal units. Over two decades of service, he led multi-state operations encompassing fiscal planning, logistics, and organizational readiness. This experience shaped a mission-oriented, resilient, and collaborative leadership style that continues to inform his executive approach.

Outside of his professional responsibilities, Markham takes great pride in his family and often emphasizes that nothing is more important to him than leaving a meaningful legacy, creating lasting impact, and being fully present in the lives of his two sons. That same sense of long-term responsibility and stewardship informs his leadership at Edge.

About Edge

Edge is a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2 access, and a broad portfolio of best-in-class technology solutions, including cybersecurity, educational technologies, cloud computing, and professional managed services. Edge serves colleges and universities, K–12 school districts, government entities, healthcare networks, and nonprofit organizations nationwide. Guided by a common-good mission, Edge empowers its members through affordable, reliable, and purpose-built digital infrastructure that enables innovation and digital transformation.

The post Edge Names Christopher R. Markham President and CEO Following Nationwide Search appeared first on Edge, the Nation's Nonprofit Technology Consortium.


Project VRM

The Only Way to Get Privacy Online

No regulation to make organizations respect personal privacy will work. We’ve had cookie laws since the ’00s, the GDPR since the ’10s, and the CCPA since 2020. None of them has worked. All those regulations are aimed at reducing the power of organizations to violate personal privacy. None is to empower people. That’s why, under […]

No regulation to make organizations respect personal privacy will work.

We’ve had cookie laws since the ’00s, the GDPR since the ’10s, and the CCPA since 2020. None of them has worked.

All those regulations are aimed at reducing the power of organizations to violate personal privacy. None of them empowers people. That’s why, under those regulations, all we can do is agree to the terms organizations provide. We have no independent agency. All we have is what they promise, and their promises aren’t worth the pixels they’re printed on.

The only way we will get privacy is with contracts, which are laws that two parties make for themselves.

And the only way to make contracts work, at scale, is if we are the ones proffering those terms as first parties, and organizations agree to them as second parties. This flips the script on business-as-usual online.

By the old script, privacy is a grace of corporate obedience to selections in cookie notices, many of which provide no choice at all. There is “Accept,” and that’s it. In that case, all you’re accepting is a corporate privacy policy, which is typically just a fig leaf over the company’s hard-on for personal data.

Regardless of what you do with a cookie notice, chances are the company still tracks you like a marked animal. See here and here. You also have no easy way of auditing compliance, because you keep no record of your “choices.” And we have that system because the incentives are worse than misaligned: they are completely broken.

See, if you are a typical website, you get paid for allowing third parties to harvest visitors’ personal data and use it to aim personalized advertising at their eyeballs. This is morally wrong on its face, but easily rationalized because it pays.

In the natural world, a store would never plant tracking beacons on every shopper, or require those shoppers to “choose” privacy protections by stripping naked and then selecting the purposes to which their personal tracking beacons will be put. Shoppers would avoid that store like the plague.

However, on the Net and the Web, we haven’t yet invented privacy, just as we hadn’t in the natural world before we invented clothing and shelter. So, on the Net and the Web, we are still naked as fish. As a result, a plague of near-ubiquitous surveillance has been raging online for decades. It is nearly impossible to avoid getting infected.

Most of that surveillance is for the $742 billion surveillance-fed fecosystem* called adtech. And the only way we can obsolesce it is with a business ecosystem that works for everyone: customers and companies alike, and together.

We can do that now, with MyTerms.

MyTerms is the nickname for IEEE P7012 Standard for Machine Readable Personal Privacy Terms, which will be published next week after eight years in the works. (I chair the working group.)

It describes a protocol in the diplomatic sense: a way to reach and record agreements. Here is a diagram that shows how it works:

It is also the ultimate product of ProjectVRM, which began in 2006 with a mission: to prove that free customers are more valuable than captive ones—to companies, to markets, and to themselves. It was to ProjectVRM’s nonprofit spinoff, Customer Commons, that the IEEE came in 2017 with the challenge to create the MyTerms standard.

Of course, every agreement needs to be good for both sides. Right now we have five draft agreements for that. SD-BASE says “Service Delivery only.” This one requires that the site or service provide the visitor only what the visitor came for, and not to share personal data with third parties. This will make the site or service more inviting. (Customer Commons also plans to offer a trustmark to sites and services that sign MyTerms Agreements.) Lots of other mutually respectful agreements can also be built on top of SD-BASE: agreements that respect personal agency as well as privacy.

Other initial MyTerms agreements cover data portability, intentcasting, data-for-good, and AI training.
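To make this concrete, here is a minimal illustrative sketch of what recording such an agreement might look like in code. The field names, the use of “SD-BASE” as a terms identifier, and the hashed receipt are assumptions made for illustration only, not the published P7012 schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_agreement_record(first_party: str, second_party: str, terms_id: str) -> dict:
    """Build a hypothetical machine-readable record of a MyTerms-style
    agreement: the individual (first party) proffers a standard terms
    identifier (e.g. "SD-BASE") and the site (second party) accepts it."""
    record = {
        "first_party": first_party,    # the person proffering the terms
        "second_party": second_party,  # the site or service agreeing to them
        "terms": terms_id,             # identifier of the standard agreement
        "agreed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Both sides keep a hash of the canonical record as a tamper-evident
    # receipt, giving the individual an auditable trail of their "choices".
    canonical = json.dumps(record, sort_keys=True).encode()
    record["receipt"] = hashlib.sha256(canonical).hexdigest()
    return record

record = make_agreement_record("alice.example", "news.example.com", "SD-BASE")
print(record["terms"])  # SD-BASE
```

The point of the sketch is the record itself: unlike a cookie notice, both parties walk away with an auditable receipt of what was agreed.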

MyTerms will foster businesses and business methods that the surveillance fecosystem prevents. We describe how that will work, and some of the businesses MyTerms will create and improve, in The Cluetrain Will Run from Customers to Companies.

Of course, we need to develop tools and services for making that cluetrain run.  Please tell us what you’ve got or plan.

The place to list those is in a new section of our Developments page. We also need to re-write and condense our privacy manifesto, and welcome help with both.

We also need to thank our many teams over the past two decades for jobs well done, even if many of those jobs didn’t go anywhere, mostly because they were too early.

Now is the time, because the world is fed up with surveillance—and it is easier than ever to develop tools and services using AI.

MyTerms will be announced on 28 January at this event in the Imperial Business School and online. Please come.

*The word fecosystem is apropos, kinda like Cory Doctorow’s enshittification. Spread both words.


Blockchain Commons

Blockchain Commons Receives 2026 Learning Bitcoin Grant from HRF

In previous years, Blockchain Commons has won grants from the Human Rights Foundation (HRF) to support our internship program and our work on FROST. We were delighted to win another grant in 2026 to support a large-scale revamp of Learning Bitcoin from the Command Line. Learning Bitcoin from the Command Line is one of Blockchain Commons’ oldest initiatives, predating the formation of the organizati

In previous years, Blockchain Commons has won grants from the Human Rights Foundation (HRF) to support our internship program and our work on FROST. We were delighted to win another grant in 2026 to support a large-scale revamp of Learning Bitcoin from the Command Line.

Learning Bitcoin from the Command Line is one of Blockchain Commons’ oldest initiatives, predating the formation of the organization. The foundational work on the project was supported by Blockstream, where Christopher Allen was working as Principal Architect at the time. The earliest parts of Learning Bitcoin from the Command Line simply showed how to get Bitcoin installed and running on your local machine, an idea that we’ve returned to a few times, in our Bitcoin Standup Scripts (which are also being updated, making them fully usable again) and Gordian Server.

However, Learning Bitcoin from the Command Line quickly grew from that foundation to become a full course on Bitcoin, explaining how it worked, and demonstrating all of its intricacies. The command-line part of the course, which largely focused on bitcoin-cli, offered hands-on examples so that a learner could follow along, increasing the impact of the learning. The ultimate goal of this expanded course was to help bring new developers into the Bitcoin ecosystem.

We’ve been very proud of the course over the years because of its obvious impact. Hundreds of forks and thousands of stars on GitHub told us that the course had gained traction. But it was really when we started to meet developers (including some of our interns) who had gotten into Bitcoin programming due to the course that we knew that our work had been successful. We’d accomplished our goal, but a bigger challenge lay ahead …

The problem was that Bitcoin was constantly evolving. Each new release of Bitcoin Core, of which there tend to be a few a year, offered new features, deprecated old features, or brought in whole new paradigms that had been decided upon by the Bitcoin community. That meant the course was constantly falling out of date. Shortly after we moved the course over to Blockchain Commons under lead tech writer Shannon Appelcline, we released a full 2.0 edition. Then, with the support of interns, we released an updated 2.1 edition in 2021, followed by translations into Portuguese and Spanish later in the year.

Since 2021, Bitcoin has incorporated major updates such as Signet, Taproot, and descriptor wallets, while Segwit (just coming into wider usage when we last wrote) has become the default address type. Learning Bitcoin was not updated for these advancements because our patrons were interested in other work, from Animated QRs to post-quantum cryptography. The course remained a terrific resource, but slowly grew more out of date.

That’s why we’re so thrilled by HRF’s 2026 grant. It gives us the opportunity to bring one of Blockchain Commons’ most prestigious projects up to date. We’re planning it as a year-long effort (intermingled with our other work). Our TODO details all the work: we’re going to be updating the course for major new features, rechecking all of the code, and then (as time allows) improving the pedagogy with some new visual learning. If you want to follow along with the work, you can find it in the lbtcftcl-v3.0 branch (but be aware, it’s literally a work in progress!).

Updating Learning Bitcoin from the Command Line is an exciting project, and we hope the course will once more be a gateway to bring new Bitcoin developers into our ecosystem when it’s done. We thank HRF for the opportunity!

Monday, 12. January 2026

FIDO Alliance

HID Global Blog: Understanding FIDO Alliance: Backbone of Passwordless Authentication

In today’s digital-first world, passwords are no longer enough. As phishing attacks and credential theft increase, enterprises require a secure, scalable and user-friendly method for authenticating users. That’s where the FIDO […]

In today’s digital-first world, passwords are no longer enough. As phishing attacks and credential theft increase, enterprises require a secure, scalable and user-friendly method for authenticating users. That’s where the FIDO Alliance — a global consortium shaping the future of passwordless authentication — comes in.


Corbado: Passkeys Japan: An Overview

In 2025, Japan accelerated passkey adoption in response to evolving security challenges. Following a rise in unauthorized access incidents across the financial sector, regulators emphasized that “ID/password-only authentication and even email/SMS one-time passwords […]

In 2025, Japan accelerated passkey adoption in response to evolving security challenges. Following a rise in unauthorized access incidents across the financial sector, regulators emphasized that “ID/password-only authentication and even email/SMS one-time passwords are not sufficient” and that stronger authentication methods like passkeys should be prioritized for high-risk financial actions.


New Scientist: Passwords will be on the way out in 2026 as passkeys take over

Can you remember all your passwords off the top of your head? If so, you probably have too few of them – or, heaven forbid, only one that you use […]

Can you remember all your passwords off the top of your head? If so, you probably have too few of them – or, heaven forbid, only one that you use everywhere. But that problem could become a thing of the past in 2026.

Passwords are a cybersecurity nightmare, with hackers trading stolen sign-in credentials on illicit markets every day. That’s because the overwhelming majority of passwords are too hackable, according to an analysis by Verizon, with just 3 per cent complex enough to withstand hackers.


OpenID

Authorization API 1.0 Final Specification Approved

The OpenID Foundation membership has approved the following as an OpenID Final Specification:   Authorization API 1.0: https://openid.net/specs/authorization-api-1_0.html    A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID AuthZEN Wo
The OpenID Foundation membership has approved the following as an OpenID Final Specification:

Authorization API 1.0: https://openid.net/specs/authorization-api-1_0.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID AuthZEN Working Group.

The voting results were:

Approve – 81 votes
Object – 1 vote
Abstain – 25 votes

Total votes: 107 (out of 378 members = 28.3% > 20% quorum requirement)
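For readers new to AuthZEN: the Authorization API standardizes how a policy enforcement point asks a policy decision point (PDP) whether a subject may perform an action on a resource. Below is a hedged sketch of that evaluation exchange; the payload shape follows the spec’s subject/resource/action request model as we understand it, and the toy decision function merely stands in for a real PDP behind the evaluation endpoint:

```python
import json

# An AuthZEN-style access evaluation request: who (subject) wants to do
# what (action) to which thing (resource). Values are illustrative.
request = {
    "subject": {"type": "user", "id": "alice@acmecorp.example"},
    "resource": {"type": "account", "id": "123"},
    "action": {"name": "can_read"},
}

def evaluate(req: dict) -> dict:
    """Toy policy decision point: allow human users to read accounts.
    A real PDP would consult policy; this only illustrates the
    request-in, decision-out shape of the Authorization API."""
    allowed = (
        req["subject"]["type"] == "user"
        and req["action"]["name"] == "can_read"
    )
    # The response body carries a boolean decision.
    return {"decision": allowed}

response = evaluate(request)
print(json.dumps(response))  # {"decision": true}
```

Because both sides of the exchange are plain JSON, any PDP and any enforcement point that speak this shape can interoperate, which is the point of standardizing the API.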

 

Marie Jordan – OpenID Foundation Secretary


About The OpenID Foundation (OIDF)

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Authorization API 1.0 Final Specification Approved first appeared on OpenID Foundation.


Velocity Network

Kelly Hoyland joins the Board of Directors of Velocity Network Foundation

The post Kelly Hoyland joins the Board of Directors of Velocity Network Foundation appeared first on Velocity.

Friday, 09. January 2026

OpenID

Public Review Period for Proposed OpenID Connect Relying Party Metadata Choices 1.0 Final Specification

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Final Specification: OpenID Connect Relying Party Metadata Choices 1.0 A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification draft in

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Final Specification:

OpenID Connect Relying Party Metadata Choices 1.0

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a fourteen-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Final Specification.

The relevant dates are:

Final Specification public review period: Friday, January 9, 2026 to Tuesday, March 10, 2026 (60 days)
Final Specification vote announcement: Wednesday, February 25, 2026 (14 days prior to voting)
Final Specification voting period: Wednesday, March 11, 2026 to Wednesday, March 25, 2026 (14 days)

The OpenID Connect working group page is https://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB/Connect” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.

Marie Jordan – OpenID Foundation Board Secretary

 




The post Public Review Period for Proposed OpenID Connect Relying Party Metadata Choices 1.0 Final Specification first appeared on OpenID Foundation.

Thursday, 08. January 2026

Velocity Network

Zach Daigle is re-elected to third term on the Board of Directors of Velocity Network Foundation

The post Zach Daigle is re-elected to third term on the Board of Directors of Velocity Network Foundation appeared first on Velocity.

The Engine Room

Join our team! We’re looking for an Associate for Engagement and Support in Latin America [CLOSED]

The Engine Room is currently seeking an Associate for Engagement and Support in Latin America. This role will help us to strengthen and expand  our ability to provide context-appropriate tech and data support to partners in the region as well as collaborate with our broader team on South-to-South initiatives. The post Join our team! We’re looking for an Associate for Engagement and Suppor

The Engine Room is currently seeking an Associate for Engagement and Support in Latin America. This role will help us to strengthen and expand  our ability to provide context-appropriate tech and data support to partners in the region as well as collaborate with our broader team on South-to-South initiatives.

The post Join our team! We’re looking for an Associate for Engagement and Support in Latin America [CLOSED] appeared first on The Engine Room.

Wednesday, 07. January 2026

Velocity Network

Velocity Network Foundation Welcomes Rachel Gordon to its Board of Directors

The post Velocity Network Foundation Welcomes Rachel Gordon to its Board of Directors appeared first on Velocity.

Monday, 05. January 2026

OpenID

OIDF presents at European Commission’s EUDI Wallets Launchpad 2025

The OpenID Foundation was invited to present at the recent EUDI Wallets Launchpad 2025, a landmark three day community event organized by the European Commission on December 10-12 in Brussels, Belgium. The EUDI Wallets Launchpad 2025 represents a critical milestone for digital identity implementation across the European Union. As the first multi-day testing event of […] The post OIDF presents at

The OpenID Foundation was invited to present at the recent EUDI Wallets Launchpad 2025, a landmark three day community event organized by the European Commission on December 10-12 in Brussels, Belgium.

The EUDI Wallets Launchpad 2025 represents a critical milestone for digital identity implementation across the European Union. As the first multi-day testing event of its kind for the EUDIW community, the invitation-only gathering was designed to establish the EUDI Wallets Implementers Community and accelerate the adoption of wallets and their use cases across all EU Member States.

Aligning technical teams, policy objectives, and user experience across Europe requires coordination and trust. This was the goal of the Launchpad: bringing implementers together to test, learn, and set next steps collectively. This collaborative approach is essential for ensuring interoperability across borders and building a cohesive digital identity framework for Europe.

Photos capturing the event can be found here.

OpenID Foundation presentations

The OpenID Foundation participated in two talks. The first featured Executive Director Gail Hodges on “Outlook of Wallet Technical Specifications”, where she presented updates on the progress of OIDF specs selected by the EU and on collaboration with the European Commission and peer international standardisation organisations, such as ISO and ETSI, to align technical specifications supporting the EUDI Wallet.

The following chart summarizes the specifications selected and required for the EUDIW, how they map to ETSI profiles of the specs, and the work on test cases, open-source tests, and the 1.1 versions of the specifications now underway by the Digital Credentials Protocols WG and the Certification team.


The second session by Technical Director Mark Haine on the Wallet Conformance Test Plan introduced functional conformance testing for EUDI Wallets and outlined what conformance deliverables OIDF has underway to support the EUDIW community. These deliverables included improvements to testing tools, new tests, and enhancements to the governance processes around test requirement definition and publishing.

 

Supporting the EUDI Wallet Community

During the event, Gail unveiled a dedicated new EUDIW Resource Hub on the OpenID Foundation’s website to support the EUDIW community. The page is now live and provides implementers with immediate access to essential specifications and tools useful to the EUDIW community.

Implementers working on EUDI Wallet deployments are encouraged to utilize these resources and engage with the OpenID Foundation for technical support and guidance as they progress their implementations.

The Foundation also outlined several initiatives launching in early 2026 to further strengthen the ecosystem, including self-certification, interoperability events, and formal accreditation programmes.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the FAPI standard for interoperable, high security, OAuth2 has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

To learn more about conformance testing and self-certification, please visit the OpenID Foundation’s FAQ section.

The post OIDF presents at European Commission’s EUDI Wallets Launchpad 2025 first appeared on OpenID Foundation.


Digital ID for Canadians

Spotlight on Dealertrack Canada

1. What is the mission and vision of Dealertrack? To make identity verification simple, trusted, and built into every automotive deal. In partnership with Interac Corp.…

1. What is the mission and vision of Dealertrack?

To make identity verification simple, trusted, and built into every automotive deal.
In partnership with Interac Corp. and Equifax Canada, our vision is a seamless dealer workflow with no added complexity.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Because high-value transactions require confidence in who you’re dealing with. Verified identity protects dealers and consumers while keeping deals moving efficiently.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital trust works best when it’s integrated, not layered on. Dealertrack Canada combines Equifax’s identity orchestration with Interac® document verification to deliver a trusted, real-time ID verification, built directly into dealer-to-lender workflows.

4. What role does Canada have to play as a leader in this space?

Canada can lead by proving that secure identity doesn’t have to be complex.
Practical, connected solutions will drive adoption across regulated industries. By aligning industry, regulators, and technology providers, Canada can set a global standard for trusted digital identity.

5. Why did your organization join the DIACC?

To help advance shared standards for embedded digital identity. We believe trust scales fastest when identity solutions work across systems – not in silos.

6. What else should we know about your organization?

Dealertrack Canada’s goal is to make a meaningful impact on the industry’s effort to mitigate fraud. Together, we connect better, protect smarter, and perform stronger.

Sunday, 04. January 2026

Velocity Network

Results of 4th Annual Elections to the Board of Directors of the Velocity Network Foundation

The post Results of 4th Annual Elections to the Board of Directors of the Velocity Network Foundation appeared first on Velocity.

Monday, 29. December 2025

FIDO Alliance

Biometric Update: NIST announces new mDL use case, resources to support financial sector adoption

A webinar on mobile driver’s licenses (mDLs), presented by the National Cybersecurity Center of Excellence (NCCoE) at the National Institute of Standards and Technology (NIST), introduces new resources to help […]

A webinar on mobile driver’s licenses (mDLs), presented by the National Cybersecurity Center of Excellence (NCCoE) at the National Institute of Standards and Technology (NIST), introduces new resources to help financial institutions implement support for mDLs.

Off the top, hosts Bill Fisher and Ryan Galuzzo of NIST provide a walk-through of NCCoE’s mDL privacy risk assessment, to help parties gauge what’s at stake in implementing mDLs. Its data flow diagram, an “abbreviated version of the NIST Privacy Risk Assessment Methodology”  written from the perspective of a financial institution, includes five questions that cover goals, potential problems and potential solutions.


Cyber Insider: Telegram adds passkey support for secure frictionless logins

Telegram has introduced support for passkeys in its latest update, marking a significant shift away from SMS-based login systems in favor of modern, phishing-resistant authentication methods. The move to support passkeys brings […]

Telegram has introduced support for passkeys in its latest update, marking a significant shift away from SMS-based login systems in favor of modern, phishing-resistant authentication methods.

The move to support passkeys brings Telegram in line with a growing number of platforms embracing the FIDO2 standard, a cryptographic login method backed by the FIDO Alliance and major industry players including Apple, Google, and Microsoft. With passkeys, Telegram users can now authenticate into their accounts using biometric data like Face ID or fingerprints, or a device PIN, instead of waiting for SMS codes that may be delayed or intercepted.


ZDNet: The coming AI agent crisis: Why Okta’s new security standard is a must-have for your business

Counting Google, Amazon, and Microsoft among its early adopters, the new standard will provide organizations with more visibility and control over external applications. Here’s how it works.

Counting Google, Amazon, and Microsoft among its early adopters, the new standard will provide organizations with more visibility and control over external applications. Here’s how it works.


Tech HQ: FIDO Alliance encourages adoption of digital credentials

The FIDO (Fast IDentity Online) Alliance has announced a new initiative designed to accelerate the adoption of verifiable digital credentials and identity wallets. Its undertaking hopes to let technology organisations […]

The FIDO (Fast IDentity Online) Alliance has announced a new initiative designed to accelerate the adoption of verifiable digital credentials and identity wallets. Its undertaking hopes to let technology organisations build a trust-based ecosystem for digital identities, helping move the industry beyond the fragmented and sometimes incompatible solutions currently prevalent. Its initiative will provide a framework for best practice.

The initiative arrives at a time when governments and large businesses worldwide are focused on providing (and increasingly, insisting on) digital identities, with growing momentum behind the European Digital Identity Wallet, which EU businesses and those trading with the EU will be required to support for doing business online next year. The need for secure and interoperable digital credentials is therefore apparent, driven by demands for greater convenience, better security, and the ability to access services (especially from public sector providers) and verify identity online.

“The FIDO Alliance united the industry to solve the password problem, and the world is now embracing the simplicity and security of passkeys – with billions of accounts now benefiting from this significant shift in user authentication,” said Andrew Shikiar, CEO of FIDO Alliance.


American Banker: BankThink Banks need to adopt passkeys as a safer alternative to passwords

By FIDO Alliance’s Andrew Shikiar The password is dying. If not in theory, certainly in practice. After years of technical development and cross-platform alignment, passkeys have reached a state of […]

By FIDO Alliance’s Andrew Shikiar

The password is dying. If not in theory, certainly in practice. After years of technical development and cross-platform alignment, passkeys have reached a state of real-world maturity. The user experience is seamless. The infrastructure is robust. Compliance is no longer a barrier. And, most importantly, passkeys are working at scale for both consumers and the companies serving them.


OpenID

OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0 Final Specification Approved

The OpenID Foundation membership has approved the following as an OpenID Final Specification: OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0: https://openid.net/specs/openid4vc-high-assurance-interoperability-profile-1_0-final.html  A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This F

The OpenID Foundation membership has approved the following as an OpenID Final Specification:

OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0: https://openid.net/specs/openid4vc-high-assurance-interoperability-profile-1_0-final.html 

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This Final Specification is the product of the OpenID Digital Credentials Protocols (DCP) Working Group.

The voting results were:

Approve – 87 votes
Object – 1 vote
Abstain – 19 votes

Total votes: 107 (out of 434 members = 24.7% > 20% quorum requirement)

 

Marie Jordan – OpenID Foundation Secretary



The post OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0 Final Specification Approved first appeared on OpenID Foundation.

Saturday, 27. December 2025

Project VRM

Writings on the Failings of Notice & Consent


This notice actually appeared on the front door of my house for a while.

As with the notice above, notice & consent online is worse than a fail. It’s absurd.  But it helps to have sources that explain how ceremonies promising privacy online will always fail when those running the ceremonies are also incentivised to violate their privacy commitments (or not to make them in the first place). I’m including coverage of adjacent and dependent topics (e.g. adtech and CRM/CX).  Of course, this is all toward setting the stage for MyTerms. Feel free to add your own.

A list of scholarly (or simply serious) sources:

Automated Large-Scale Analysis of Cookie Notice Compliance
Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising
Do Not Track Initiatives: Regaining the Lost User Control
Usability and Enforceability of Global Privacy Control
Websites’ Global Privacy Control Compliance at Scale and Over Time
SoK: Advances and Open Problems in Web Tracking
Do Cookie Banners Respect my Choice? Measuring Legal Compliance of Banners from IAB Europe’s Transparency and Consent Framework
Can I Opt Out Yet? GDPR and the Global Illusion of Cookie Control
Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence
Cookie Disclaimers: Dark Patterns and Lack of Transparency
Analyzing Cookies Compliance with the GDPR
Automating Cookie Consent and GDPR Violation Detection
Navigating Cookie Consent Violations Across the Globe
Opted Out, Yet Tracked: Are Regulations Enough to Protect Your Privacy?
Why Johnny Can’t Opt Out (popular summaries and related work)
A Fait Accompli? An Empirical Study into the Absence of Consent to Third-Party Tracking in Android Apps
To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising
Active Consent Transparency Violation Fine Counter

Don Marti’s writings:

surveillance pricing in the news
Targeted Advertising Considered Harmful
Perfectly targeted advertising would be perfectly worthless
Can privacy tech save advertising?
Don’t punch the monkey. Embrace the Badger.
A fresh start for advertising and the web?
5 five-minute steps up
Interest dashboard?
Who’s taking all the online ad money? (it’s not me)
Surveillance marketing meets sales norms
Unpacking privacy
Adversariality and web ads
We’re All Gun Nuts Now, So We Had Better Get Good At It
Newspaper dollars, Facebook dimes

Iain Henderson’s writings:

The ‘My’ Protocol Stack
MyTerms and the Great Online Privacy Re-boot
MyTerms as an Independence Movement
MyTerms is for our Children
Ten Ways in Which People Benefit from Data Portability
We need a new approach to Data Portability for it to work at the scale required for Growth
We All Need a Personal Private Digital Space of Our Own
An Emerging Framework for Personal Fiduciary Agents
How many AI Agents will we each have? and why…
“Of course, it’s all about the data” (…and where do I find the Magical Data Quality Fairy?)
Putting that R back in practice – “MeRM”?
Addressing Two of the Web’s Fundamental Problems with IEEE7012 and FedID
My Terms Overview from IIW 40
Iain Henderson – The Personal Data Eco-System
The Case for Personal Information Empowerment – The Rise of the Personal Data Store (Mydex white paper)

My own writings:

People vs. Adtech (which includes many of the items below)
Let’s use the “No Track” button we already have
How adtech, not ad blocking, breaks the social contract
Wanted: Online Pubs Doing Real (and therefore GDPR-compliant) Advertising
Toward no longer running naked through the digital world
We’ve seen this movie before
How the cookie poisoned the Web
Apple vs (or plus) Adtech, Part I
A Cure for Corporate Addiction to Personal Data
Are you in charge of what you buy, or is it vice versa?
Freedom vs. Tracking
A Way to Peace in the Adblock War
Why #NoStalking is a good deal for publishers
Why personal agency matters more than personal data
The Wurst of the Web
Personal scale
Time for THEM to agree to OUR terms
Choosing Your Terms
Solutions: Choose Your Agreement and #NoStalking
#NoStalking (P2B1) agreement
MyTerms agreements overview
A Way off the Ranch
Help Us Cure Online Publishing of Its Addiction to Personal Data
Cookies That Go the Other Way
Privacy Is Still Personal
Advertising 3.0
A Brand Advertising Restoration Project
Privacy is personal. Let’s start there.
The Data Bubble
Beyond the Web
The real waste is adtech — but waste isn’t a strong enough word
How True Advertising Can Save Journalism From Drowning in a Sea of Content
MyTerms (has many listings)

Also Terms and Conditions May Apply, a 2013 documentary by Cullen Hoback.

Tuesday, 23. December 2025

Origin Trail

5 Trends to drive the AI ROI in 2026: Trust is Capital


Executive Summary: After years of experimentation, business leaders are entering 2026 with a clear mandate: make AI investments pay off, but do it in a way that stakeholders can trust. In enterprise settings, artificial intelligence is no longer a speculative pilot project; it’s a business-critical asset whose success or failure hinges on trust, transparency, and accountability.

Recent industry analyses show a striking gap between AI ambition and actual returns — only 14% of CFOs report measurable ROI from AI to date, even though 66% expect significant impact within two years. This optimism comes with a sobering realization: without verifiability and integrity at every level, AI projects risk underdelivering or even backfiring. An MIT study reveals that up to 95% of firms investing in AI have yet to see tangible returns, often because of hidden flaws, opaque models, or poor data foundations. In response, companies are pivoting from hype to hard results — “after years of pilots, firms are shifting focus to monetization” in AI initiatives.

Share of S&P 500 companies disclosing AI-related risks, 2023 vs. 2025. In 2025, 72% of S&P 500 warned investors about material AI risks (up from just 12% in 2023), reflecting growing concerns about AI’s impact on security, fairness, and reputation (full study).

The result is a strategic shift: trustworthy AI infrastructure is becoming a business advantage rather than a compliance burden.

This article outlines five key AI trends for 2026, each mapped to a layer of the I-DIKW framework (Integrity, Data, Information, Knowledge, Wisdom). These trends show how aligning AI efforts with integrity at every level enables organizations to unlock ROI amid regulatory scrutiny and competitive pressure.

In traditional systems, the DIKW pyramid (Data → Information → Knowledge → Wisdom) was linear and siloed. OriginTrail reshapes this entirely. By merging blockchain, knowledge graphs, and AI agents, it transforms DIKW into a networked, self-reinforcing trust flywheel, adding Integrity as the foundational layer, evolving into the I-DIKW model.

Trend 1: Integrity Layer — Trustworthy AI Infrastructure by Design

Integrity is the foundation of the I-DIKW framework: it’s about building AI systems that are trustworthy and verifiable from the ground up. In 2026, leading firms will treat AI integrity (security, ethics, and transparency) as a first-class requirement. This means baking in cryptographic provenance, audit trails, and robust governance controls into AI platforms. For example, new architectures use immutable provenance chains and digital signatures to ensure every AI input and output can be traced and verified. Such measures give executives and regulators high confidence in the integrity of AI outputs.
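The idea of an immutable provenance chain can be sketched in a few lines. The example below is a hypothetical illustration, not OriginTrail's actual implementation: each log entry embeds the hash of the previous entry (a hash chain), and an HMAC stands in for a real digital signature (a production system would use asymmetric keys such as Ed25519 so verifiers never hold the signing key).

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real private signing key

def _entry_digest(entry: dict) -> str:
    # Canonical JSON serialization so the digest is reproducible.
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(log: list, record: dict) -> None:
    # Each entry commits to its predecessor via prev_hash, forming a chain.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = _entry_digest(body)
    body["sig"] = hmac.new(SECRET_KEY, body["hash"].encode(), "sha256").hexdigest()
    log.append(body)

def verify_log(log: list) -> bool:
    # Recompute every hash and signature; any tampering breaks the chain.
    prev_hash = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        if entry["prev_hash"] != prev_hash or entry["hash"] != _entry_digest(body):
            return False
        expected = hmac.new(SECRET_KEY, entry["hash"].encode(), "sha256").hexdigest()
        if not hmac.compare_digest(entry["sig"], expected):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"model": "demo-v1", "input": "q1", "output": "a1"})
append_entry(log, {"model": "demo-v1", "input": "q2", "output": "a2"})
assert verify_log(log)

# Altering any recorded output invalidates the entire chain from that point.
log[0]["record"]["output"] = "forged"
assert not verify_log(log)
```

This is the essence of "audit trails by design": the verification cost is a hash recomputation, while forging a record requires either the signing key or a SHA-256 collision.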

The business payoff is significant: integrity by design reduces the risk of AI failures, bias incidents, or data leaks that can derail ROI. Companies that invested early in trust infrastructure are finding their AI projects scale faster and face fewer roadblocks from compliance or public concern. Conversely, a lack of integrity can be a deal-breaker. Case in point: the government of Switzerland rejected a prominent AI platform (Palantir) after finding it posed “unacceptable risks” to data security and sovereignty. Swiss evaluators concluded the system couldn’t guarantee full control or transparency, raising alarms about dependence on a foreign black-box solution.

The lesson for CIOs and CEOs is clear: if an AI system can’t prove its integrity and accountability, savvy clients (and regulators) will walk away. In 2026, trustworthy AI by design will be a strategic imperative, enabling organizations to deploy AI confidently and at scale, turning trust into a competitive advantage rather than a cost.

Trend 2: Data Layer — Sovereign Data and Quality Foundations

Moving up the hierarchy, Data is the raw material for AI — and its quality and governance determine whether AI initiatives thrive or falter. It’s well known that garbage in leads to garbage out, yet many organizations still underestimate how data issues sabotage AI ROI. Executives may invest millions in AI tools, only to find that the tools can’t deliver value because the underlying data is incomplete, biased, or untrustworthy. A recent survey of CFOs found that poor data trust is the single greatest inhibitor of AI success — 35% of finance chiefs cite lack of trusted data as the top barrier to AI ROI. It’s no wonder only 14% have seen meaningful AI value so far.

Data sovereignty is a particularly hot issue. Companies and governments alike want assurance that critical data remains under their control. This is driving a trend toward “sovereign AI” solutions — those that allow data to be kept locally or in trusted environments, rather than forcing lock-in to a vendor’s cloud. Europe’s upcoming regulations emphasize data localization and digital sovereignty, reinforcing this shift. The stakes became evident when Switzerland’s defense authorities rejected Palantir’s AI software after a risk assessment warned it could leave Swiss data vulnerable to U.S. jurisdiction. In the evaluators’ words, “No foreign software should compromise our ability to control and protect sensitive national information.”

For businesses, the takeaway is that control over data = trust. In 2026, leading enterprises will choose AI platforms that offer transparent data handling, open standards, and interoperability so they aren’t handcuffed to a single provider. By building sovereign data ecosystems — for instance, using decentralized data networks — organizations ensure data integrity and privacy, which in turn unlocks AI value. When your data is high-quality, compliant, and under clear ownership, AI initiatives can progress without the hidden friction that often stalls pilots. In short, trusted data is the fuel for AI ROI.

Trend 3: Information Layer — Explainable and Verifiable AI Insights

Turning raw data into actionable Information is the next layer — and in 2026, the key word is “explainable”. As AI systems generate reports, recommendations, and content, organizations are realizing that if the people using that information don’t trust it, the AI investment is wasted. Thus, a major trend is the adoption of explainable AI (XAI) and verifiable AI outputs. Business leaders want AI that not only does the analysis but can show its work — revealing the logic, source data, or confidence behind an output.

This trend is fueled by both internal needs (e.g. a manager trusting an AI-generated forecast) and external pressure. Regulators are stepping in: the EU’s AI Act, for example, includes transparency obligations requiring that users be informed when they interact with AI or encounter AI-generated content. Draft European guidelines even call for marking and labeling AI-generated media to curb misinformation. Likewise, in the U.S., authorities have encouraged AI developers to implement watermarking for synthetic content. The message is clear — 2026 is the year when “black box” AI won’t cut it in many business applications.

Companies are responding by building trust layers around AI information. One approach is integrating cryptographic provenance: for instance, embedding invisible signatures in AI-generated content or logs that allow anyone to verify where it came from and whether it’s been altered. Another approach is to leverage verifiable credentials for information sources, ensuring that data feeding AI models (or experts providing oversight) is authenticated and reputable. Forward-looking firms are also deploying AI explainability tools — from simple model scorecards that highlight key factors in an AI decision, to advanced techniques that trace an AI recommendation back to the supporting facts.
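A minimal version of such a provenance check can be shown with a detached tag published alongside generated content. This is a sketch under simplifying assumptions: a real deployment would use asymmetric signatures and a content-provenance standard such as C2PA rather than a shared HMAC key, and the summary text below is invented.

```python
import hmac

SIGNING_KEY = b"provider-signing-key"  # demo only; real systems use key pairs

def tag_content(text: str) -> str:
    """Return a provenance tag to publish alongside the generated content."""
    return hmac.new(SIGNING_KEY, text.encode(), "sha256").hexdigest()

def verify_content(text: str, tag: str) -> bool:
    """Check the content is unaltered since the producer tagged it."""
    return hmac.compare_digest(tag_content(text), tag)

summary = "Q3 revenue grew 4% on stable margins."
tag = tag_content(summary)

assert verify_content(summary, tag)            # intact content verifies
assert not verify_content(summary + "!", tag)  # any alteration is detected
```

The point for the information layer is the asymmetry: consumers can cheaply confirm that an AI-generated report is exactly what the producer published, which is what makes the output "self-verifying."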

A practical example is in financial services: banks deploying AI credit scoring are using explainable models and audit trails so that each loan decision can be explained to a regulator or customer, building trust and avoiding compliance roadblocks. In the realm of generative AI, companies are pairing large language models with knowledge bases and fact-checking mechanisms to prevent hallucinations from reaching end-users. In essence, information generated by AI is becoming self-documenting and self-verifying. By making AI’s information outputs transparent, explainable, and traceable, businesses not only mitigate risk but also encourage greater adoption — employees and customers are far more likely to use AI-driven insights when they can trust the why behind the answer. The result is faster decision cycles and more impactful AI use, directly boosting ROI.

Trend 4: Knowledge Layer — Decentralized Knowledge Networks and Collaboration

The Knowledge layer elevates information into shared organizational intelligence. In 2026, a standout trend will be the rise of decentralized and verifiable knowledge networks as the backbone of AI-powered enterprises. Organizations have learned that AI projects in isolation often hit a wall — the real value emerges when insights are captured, linked, and reused across the company (and even with partners). To enable this, companies are turning to knowledge graphs and collaborative AI platforms that break down silos. Crucially, these knowledge systems are being built with trust and verification in mind. Every contribution to a modern enterprise knowledge graph can be accompanied by metadata: who added this insight, from what source, and with what evidence?

A powerful enabler here is the convergence of blockchain (decentralization) and AI. By combining blockchains’ distributed trust with AI-driven knowledge graphs, organizations create shared knowledge ecosystems that no single party solely controls — yet everyone can trust. For example, in supply chain and manufacturing, partners are beginning to contribute to decentralized knowledge graphs in which data on product quality and provenance are cryptographically signed at each step.

One notable case: Switzerland’s national rail company (SBB) uses a decentralized knowledge graph for real-time traceability of equipment data, ensuring all stakeholders see a single source of truth with integrity. In such networks, verifiable credentials play a role too — only authorized contributors (with digital credentials) can add or modify knowledge, preventing bad data from polluting the system. The benefit to ROI is clear: when knowledge is integrated and trusted, AI can draw on a much richer context to solve problems, and organizations avoid the costly mistakes of inconsistent information.

Moreover, a decentralized approach reduces vendor lock-in and increases resilience — knowledge isn’t trapped in one platform, it’s part of a federated infrastructure the company owns. Leaders are also finding that trusted knowledge sharing accelerates innovation: teams reuse each other’s AI-derived insights instead of reinventing the wheel. As Dr. Robert Metcalfe (inventor of Ethernet) observed, knowledge graphs can “improve the fidelity of artificial intelligence” by grounding AI in verified facts. In 2026, companies that master this knowledge layer — creating a living, vetted memory for the organization — will reap compounding returns from each new AI deployment, as each project makes the next one smarter and faster.

Trend 5: Wisdom Layer — AI Governance and Strategic Alignment for Sustainable ROI

At the top of the I-DIKW stack is Wisdom — the ability to make prudent, big-picture decisions. For enterprises, this translates to strong AI governance and strategic alignment at the leadership level. The trend for 2026 is that AI is no longer just the domain of IT departments or innovation labs; it’s a C-suite and boardroom priority to ensure AI is used wisely, ethically, and in line with the company’s goals. One telling sign: nearly 61% of CEOs say they are under more pressure to show returns on AI investments than they were a year ago. This pressure is forcing a new alignment between tech teams and business leaders. We see the emergence of Chief AI Officers and cross-functional AI steering committees to govern AI initiatives with a balance of innovation and risk management. In practice, companies are establishing AI governance frameworks — formal policies and oversight processes to supervise AI model development, deployment, and performance.

According to recent research, about 69% of large firms report having advanced AI risk governance in place, though many others are still catching up. In 2026, closing this governance gap will be crucial. Effective AI governance ensures that there is “wisdom” in how AI is applied: systems are tested for fairness, AI-driven decisions are subject to human review when needed, and AI strategies align with business values and compliance requirements.

This strategic alignment of AI yields tangible ROI by preventing missteps and unlocking faster adoption. Companies with mature governance can deploy AI in customer-facing processes or critical operations with confidence that they won’t run afoul of regulations or ethics scandals. In contrast, firms that push AI without guardrails often face costly setbacks — whether it’s a PR crisis over biased AI results or a regulator halting a project.

Moreover, organizations are starting to augment their internal governance with collaborative, cross-industry safety nets. For instance, Umanitek has introduced a decentralized “Guardian” agent to coordinate AI safety across platforms. Guardian can fingerprint and cross-check content against a shared knowledge graph of known illicit or deceptive media, blocking harmful deepfakes or flagged materials in real time. Crucially, this approach preserves privacy and data ownership for all participants: each contributor’s data stays private while the agent exchanges trust signals via a permissioned decentralized network. By leveraging such cross-industry trust infrastructure, enterprises effectively extend their AI governance beyond their own walls, aligning multiple AI agents and stakeholders to uphold common integrity standards. This kind of collaborative safeguard strengthens the wisdom layer by ensuring that as AI systems interact across the web, they do so under a unified, verifiable set of ethical guardrails.

Trust, once again, is a differentiator at the wisdom level. A reputation for trustworthy AI can become a selling point: for example, enterprise clients may choose a software provider not just for its AI features, but because it can prove those features are fair and compliant. We’re effectively seeing trust as a brand asset. Internally, strong governance also brings the wisdom of knowing where AI truly adds value. Leading organizations have learned to “lead with the problem, not with AI”, ensuring that each AI project is tied to a clear business outcome (revenue growth, cost reduction, customer experience) rather than AI for AI’s sake. This focus on value alignment is paying off. In fact, research on AI leaders (the Fortune 50 “AIQ” companies) shows they excel not by spending the most, but by integrating AI deeply into strategy and operations to drive measurable results.

Looking at the competitive landscape, those who invest in wisdom-layer capabilities, like company-wide AI literacy, scenario planning for AI risks, and continuous training to fill AI skill gaps, are pulling ahead. CFOs note that strengthening “the systems, data, and talent” around AI is key to turning AI’s promise into performance.

That is wisdom in action: recognizing that ROI comes not just from technology, but from enabling people and processes to harness that technology effectively. As regulatory regimes (from the EU AI Act to industry-specific AI guidelines) come into effect, having a solid governance foundation will mean fewer disruptions and fines and more freedom to innovate.

In sum, the Wisdom trend for 2026 is about treating AI not as a magic black box, but as a strategic enterprise capability that must be nurtured, overseen, and aligned with human judgment. Businesses that do so will find that trust breeds agility — they can push the envelope on AI usage because they have the wisdom to manage the risks. That translates directly into higher ROI and sustained competitive advantage.

Conclusion: Trust-Powered AI as the Blueprint for Leadership

As we head into 2026, one theme resonates across all five layers of I-DIKW: trust is the through-line that turns AI from a gamble into a solid investment. By strengthening Integrity (the technical and ethical bedrock), mastering Data quality and sovereignty, insisting on Information transparency, cultivating verifiable Knowledge networks, and enforcing wise Governance at the top, organizations create a virtuous cycle. Each layer reinforces the others — trustworthy data leads to more reliable AI information, which feeds organizational knowledge, enabling wiser decisions, which in turn guide further data strategy, and so on. Companies that embrace this holistic approach are positioning themselves as leaders in the AI economy. They are better prepared for tightening regulations and rising customer expectations, turning those into opportunities rather than obstacles. Not least, they are demonstrating to investors and boards that AI dollars are well spent: projects don’t stall in pilot purgatory, but scale with confidence because the infrastructure of trust is in place.

In a business climate where 61% of CEOs feel the heat to prove AI is delivering value, aligning with the I-DIKW framework provides a clear roadmap. It ensures that AI efforts are built on integrity and purpose at every step, rather than chased as shiny objects. The experience of firms at the forefront underscores this: those who treated trust as a core principle of their AI strategy are now reaping tangible returns — whether through increased automation efficiencies, new revenue streams from AI-driven products, or stronger customer loyalty thanks to ethically sound AI practices. On the other hand, organizations that neglected these layers are encountering what one might call “AI growing pains,” from data compliance headaches to lackluster ROI, and even public backlash.

The strategic reflection for executives is this: AI leadership in 2026 will belong to those who marry innovation with verification. By investing in trustworthy infrastructure — be it cryptographic provenance for data, explainability modules for AI, or robust governance councils — you not only de-risk your AI investments, but you amplify their reward. Trust is more than a compliance checkbox; it’s a performance multiplier. In the coming AI-driven economy, build trust, and the ROI will follow.

5 Trends to drive the AI ROI in 2026: Trust is Capital was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 22. December 2025

Project VRM

When Branding Means Relating


What is your best friend’s personal brand? How about your spouse’s?

Those questions came to mind as I read through The Death of Merchandising in an Online World, by  Dana Blankenhorn, who is reliably wise. In that post, Dana correctly observes that brand value is declining as merchandising shifts from stores to online services, and to influencers who are also stores.

I think there’s also something else going on at the same time: the shift in media from real advertising to the online equivalent of junk mail, which is what you see with nearly every ad you encounter on your browsers and apps. To marketers, browsers and apps are boxes for junk mail, which at its most ideal is personalized by surveillance. As I put it in Separating Advertising’s Wheat and Chaff, “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”

I wrote that a decade ago. With AI today, that alien replica is the real thing. Madison Avenue is now AM radio, with a whip antenna and tail fins.

Brand advertising worked best when “the media” were mostly print and broadcast. Sources of both were so few that they all fit on a newsstand and the dials of radios and TVs. To operate a source of either, you needed a printing plant or transmitting towers. Publishers and broadcasters are still around, but now their goods are mostly distributed over the Internet and consumed through glowing rectangles. And they’re competing in a world where the abundance of other sources of content is incalculably vast. In that world, the only way you can still reliably create and maintain brands is by sponsoring live events. Especially sports. That’s why I know fifteen minutes will save me fifteen percent with Geico, even though Geico stopped saying that years ago. I also know that you only pay for what you need with Liberty Mutual. And I’ll never get the Schaefer Beer jingle out of my mind.

On the whole, however, branding has finished running the same course as the broadcasting it paid for.

It helps to remember that the words brand and branding were borrowed from ranching. They applied especially well when people had few choices of media, and few if any ways to avoid ads meant to burn the names of companies and products onto mental hides.

What we really (or at least should) mean by brand today is reputation. How a business obtains that in our still-new Digital Age (now with AI!) is an open question.

I believe the answer will come from the natural world, where markets have been working far longer than we’ve had digital media, broadcasting, or print. It was in the natural world that two very different people—one an atheist and the other a pastor—separately explained to me, not long after The Cluetrain Manifesto came out, that markets are not just about transactions and (as Cluetrain insisted) conversations. They are about relationships.

Marketing prevents those. Or shortcuts them. Especially as it continues to devolve into funnels at the bottom end of which are transactions alone, or entrapment in a company’s “loyalty” system.

The Internet and the Web were both designed to support maximum agency and independence for every entity using them. We can have far better markets and marketing if demand and supply both work with maximized agency, and scale in ways that are good for both. That’s the idea behind market intelligence that flows both ways.

Making and maintaining those kinds of relationships will be the work of VRM+CRM. Together they will make wholes that exceed the sum of either part.


Oasis Open

Signal in the Noise: An Industry-Wide Perspective on the State of VEX


Abstract: Software security has always been a race between complexity and clarity. The Vulnerability Exploitability eXchange (VEX) aims to bring clarity to that race. It’s a structured, machine-readable way for software producers to tell the world whether a vulnerability truly affects their products. That clarity has the potential to cut through noise, eliminate false positives, and reduce the human toil involved in vulnerability management. But for now, adoption remains inconsistent and uncertain. This report collects the stories and insights of leading players in the software industry—Amazon, Anchore, Aquasec, Chainguard, Cisco, Debian, Ericsson, Freexian, Google, Microsoft, Red Hat, and OpenSUSE. Together, they form a mosaic of progress, frustration, and hope. What follows is not a technical manual. It’s an honest account of how VEX is evolving, what’s holding it back, and how we can build a future where vulnerability data empowers security teams instead of overwhelming them.

Introduction

Every day, a security team receives a list of vulnerabilities that looks terrifying. Most of those entries will turn out to be harmless. The challenge is that no one knows which ones matter without a lot of digging. VEX was created to make that process faster and smarter. The Vulnerability Exploitability eXchange is a framework for software producers to publish clear, structured statements about which vulnerabilities do and do not apply to their products. It’s a way to replace endless guesswork with precise, verifiable data.

And yet, the reality today is that VEX feels more like a promise than a practice. Across the industry, there’s agreement on what VEX could be, but less on how to get there. Formats like CSAF, OpenVEX and CycloneDX offer different visions for VEX documents. SPDX, specifically the 3.0 specification, takes a relationship-based approach. While it can function as a standalone document, its architecture is designed to encode vulnerability relationships directly into the broader software supply chain graph, capable of ingesting and mapping information from other formats like CSAF or OpenVEX. While organizations wrestle with tooling, policy, and regulation, the VEX Industry Collaboration Working Group brought together experts from across the ecosystem to compare notes and chart a shared path forward.
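To make "structured statements" concrete, the sketch below builds a minimal OpenVEX-style document in Python. Field names follow the OpenVEX v0.2.0 specification to the best of my recollection, and the CVE identifier, document ID, author, and package URL are all invented for illustration; consult the spec before relying on any of them.

```python
import json

# A hypothetical minimal OpenVEX document: one statement asserting that a
# known CVE does not affect a product, with a machine-readable justification.
vex_doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/2026-0001",        # invented document ID
    "author": "Example Corp Product Security",          # invented author
    "timestamp": "2026-02-18T00:00:00Z",
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2026-0001"},            # invented CVE
            "products": [{"@id": "pkg:generic/example-app@1.2.3"}],  # invented purl
            # "not_affected" statements are what suppress false positives:
            # the vulnerable code ships with the product but is never executed.
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

# A scanner consuming this document can automatically drop the finding
# for the listed product instead of opening a support ticket.
statement = vex_doc["statements"][0]
assert statement["status"] == "not_affected"
assert json.loads(json.dumps(vex_doc)) == vex_doc  # round-trips as plain JSON
```

The same assertion could be expressed in CSAF or CycloneDX; the formats differ in envelope and distribution model, not in the basic vulnerability-product-status triple.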

Methodology

This paper draws from months of structured interviews and discussions with major software producers, open-source maintainers, and vulnerability management vendors. We listened, compared, and synthesized their experiences into a unified view of VEX adoption. We focused on five big questions:

What motivates companies to adopt VEX? Which formats are being used, and why? How are VEX documents distributed and trusted? What tools exist—and what’s missing? How are regulations shaping these decisions?

These conversations were complemented by a comparison of popular existing standards (such as CSAF, OpenVEX, and CycloneDX), as well as regulatory frameworks like the EU Cyber Resilience Act and the U.S. Executive Order on Software Supply Chain Security.

Note: The insights in this paper largely reflect the perspectives of the enterprise and vendor interviewees listed in the acknowledgments. While valuable, this does not represent an exhaustive audit of all VEX implementations or the wider open-source ecosystem.

Current State of VEX

Why Companies Care

VEX adoption is slowly gathering momentum, pulled forward by regulation, customer expectations, and a simple desire to reduce noise.

Reducing False Positives: Microsoft reports that common vulnerabilities in libraries like curl generate hundreds of unnecessary support tickets. VEX could stop those calls before they happen.

Enabling Automation at Scale: With nearly 40,000 new CVEs published annually, communication about vulnerabilities along complex software supply chains can only be handled through automation. Machine-readable VEX is essential for this.

Meeting Compliance Requirements: The EU’s Cyber Resilience Act makes effective vulnerability handling a legal requirement for anyone doing business in Europe, and CSAF-based VEX will be a key enabler for efficient compliance.

Customer Pressure: Enterprises are asking for VEX data. Cisco now requires it from suppliers through its contractual terms.

Who’s Doing What

Established Implementers: Red Hat, OpenSUSE, and Microsoft are ahead, publishing CSAF VEX documents and building infrastructure to manage them.

Emerging Players: Cisco exposes VEX through APIs and is transitioning to CSAF exports.

OpenVEX Ecosystem: Chainguard maintains an OpenVEX feed for its secured libraries. The Go toolchain, Kubescape, and Edgebit have integrated OpenVEX for native data generation and reachability analysis.

Investigating Participants: Amazon and Debian are experimenting, learning, and planning for broader adoption.

Standardization Drivers: Companies like Microsoft, Cisco, and Ericsson are actively evolving the CSAF VEX standard within OASIS to address current and future use cases.

The Four Flavors of VEX

Currently, four primary formats exist for implementing VEX, each with distinct characteristics:

CSAF (Common Security Advisory Framework): A comprehensive standard used heavily by major vendors and aligned with regulatory requirements (like the EU CRA). It is powerful and expressive but can be complex to generate and validate without specialized tooling.

OpenVEX: Designed for simplicity, interoperability, and integration into open-source workflows. It supports cryptographic attestation (via in-toto) and is supported by tools like the Go toolchain and Docker. It prioritizes “boring” reliability and ease of use over complex advisory features.

CycloneDX: A bill-of-materials (SBOM) standard that includes VEX capabilities. While it allows for standalone vulnerability reports, its unique value proposition is embedding vulnerability status directly within the SBOM. However, this can create challenges when SBOM generation lifecycles (build-time artifacts) differ from VEX lifecycles (continuous security updates).

SPDX (Software Package Data Exchange): The SPDX 3.0 specification includes a full VEX implementation. It is designed to be fully compatible with the CISA VEX “spec of specs,” capable of round-tripping data to and from other formats.

Challenges and Gaps

The following challenges reflect the specific pain points identified by our interviewees, particularly those focused on enterprise CSAF implementations.

Discovery and Distribution

Finding the right VEX document is harder than it should be. There’s no common lookup system, and trust is uneven. Today, every organization distributes VEX differently: some through APIs, others through static repositories or experimental hubs. Functionally, VEX sits between SBOMs (or attestations) and vulnerability database information. While VEX and SBOMs share the same method of referencing software components, the shape of their distribution problems is not the same. Unlike static build artifacts, VEX documents require frequent and dynamic updates, creating a unique hurdle for automation.

Verification and Trust

Verifying the source of a VEX document remains a complex problem for many implementers. While standards like OpenVEX were designed with attestation in mind (e.g., in-toto predicates), widespread industry consensus on a shared standard for signing and verifying all VEX formats is still evolving. Beyond the mechanics of verification, there is the added difficulty of defining the policy itself—determining whose VEX statement to trust (e.g., the software vendor, a distro maintainer, or a third-party scanner). For many enterprise consumers, trust is currently based on where the file is hosted rather than cryptographic proof, which limits the utility of aggregated hubs.
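As an illustration of the attestation pattern mentioned above, a VEX payload can be wrapped in an in-toto Statement so that a later signing step (e.g. DSSE) covers both the subject artifact's digest and the claim itself. This is a sketch under stated assumptions: the subject name, digest input, and the `predicateType` URI shown are illustrative, not normative values:

```python
import hashlib
import json

# Hypothetical VEX payload to be attested (same shape as the formats above).
vex_payload = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [
        {
            "vulnerability": {"name": "CVE-2024-0000"},
            "products": [{"@id": "pkg:npm/example-app@1.2.3"}],
            "status": "not_affected",
        }
    ],
}

# Wrap the payload in an in-toto Statement: the signature added later
# binds the VEX claim to a specific artifact digest, not just a filename.
artifact_digest = hashlib.sha256(b"example artifact bytes").hexdigest()
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "pkg:npm/example-app@1.2.3", "digest": {"sha256": artifact_digest}}
    ],
    # Assumed predicate type URI, for illustration only.
    "predicateType": "https://openvex.dev/ns",
    "predicate": vex_payload,
}

print(json.dumps(statement, indent=2))
```

Even with this mechanism in place, the policy question remains: a valid signature proves who said it, not whether that party's statement should override, say, a distro maintainer's.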

Tooling Maturity

The maturity of tooling remains a significant variable in the ecosystem. Our research specifically highlighted challenges regarding CSAF: while the format is powerful, its tooling can be complex for many users. Some official checkers were reported to miss logical errors, and smaller companies often struggle with the custom development required to manage the documents effectively. This gap has forced adopters like OpenSUSE and Debian to build their own internal tools rather than relying on standard implementations. Conversely, OpenVEX has prioritized a “tooling-first” approach, resulting in stable generation and validation libraries in major ecosystems like Go, which lowers the barrier to entry for smaller teams, even if it lacks the full expressive power of CSAF.

Mismatched Lifecycles

A VEX document changes whenever new vulnerability status information appears. An SBOM, typically, is a snapshot of build artifacts. While VEX best practices suggest keeping these lifecycles distinct to avoid confusion, theory and practice often diverge. Formats like CycloneDX and SPDX allow for embedding VEX information directly within an SBOM (e.g., via CycloneDX VDR or BOV profiles and SPDX Security profile). This approach has valid use cases—such as providing a summary of known vulnerability status at the exact moment of a software release—but our research suggests adoption is limited. Documentation for these specific CycloneDX use cases is often scarce, and the data model for handling complex vulnerability status statements within the SBOM is perceived by some as less mature than standalone VEX implementations.

Software Identifier Confusion

Every ecosystem has its own way of naming things (PURLs, CPEs, hashes). Without shared conventions and trusted authorities to map these identifiers, automation breaks down. This is a fundamental metadata problem that affects VEX but is not inherent to the VEX format itself.
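A naive sketch makes the problem visible. The toy normalizer below matches a PURL against a CPE by name and version only; the identifiers are illustrative examples, and real mappings need curated data precisely because vendor, namespace, and ecosystem fields do not line up this cleanly:

```python
# Illustrative only: a deliberately naive PURL/CPE matcher showing why
# identifier mapping is non-trivial. Not a substitute for curated mapping data.

def parse_purl(purl: str) -> dict:
    # pkg:<type>/<name>@<version>  (simplified; real purls have more parts)
    body = purl.removeprefix("pkg:")
    _type, _, rest = body.partition("/")
    name, _, version = rest.rpartition("@")
    return {"name": name.split("/")[-1].lower(), "version": version}

def parse_cpe(cpe: str) -> dict:
    # cpe:2.3:<part>:<vendor>:<product>:<version>:...
    fields = cpe.split(":")
    return {"name": fields[4].lower(), "version": fields[5]}

def same_component(purl: str, cpe: str) -> bool:
    # Name + version equality is the best a naive mapping can do; vendor
    # strings, namespaces, and ecosystem qualifiers are where it breaks down.
    return parse_purl(purl) == parse_cpe(cpe)

print(same_component(
    "pkg:npm/lodash@4.17.21",
    "cpe:2.3:a:lodash:lodash:4.17.21:*:*:*:*:*:*:*",
))
```

This happens to work when the npm package name and the CPE product field coincide; it fails the moment a vendor renames a product, an ecosystem uses namespaces, or a CPE entry was never assigned, which is why shared mapping authorities matter.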

Education and Incentives

Most open-source maintainers don’t see a reason to publish VEX statements. Some vendors treat VEX as a premium feature, not a baseline responsibility. Adoption isn’t just a technical challenge; it’s cultural.

Role Clarity

VEX should describe exploitability, not serve as a policy tool for ignoring issues. Blurring those lines makes it harder to trust the data.

Recommendations

Build a Common Distribution System

The community should fund and design a VEX Discovery and Distribution Protocol. This could be hosted under OpenSSF, enabling anyone to discover, verify (via digital signatures or OCI-based attestation), and use trusted VEX data in a consistent way. This effort should leverage existing contributions, such as the potential donation of the Aqua VEX Hub or existing OpenVEX archives, to accelerate the creation of a neutral, federated index.

Invest in Better Tools

Product teams have made it clear: they cannot adopt these standards without friction-free integration. Regardless of the specific format, the ecosystem urgently needs robust, maintained libraries to generate VEX documents. Bridging the gap between policy requirements and engineering reality requires meeting developers where they are, with reliable tooling in languages like Go, Python and Java.

Align on Standards Without Excluding Others

Enterprises should consider CSAF as the target standard for high-fidelity production due to its expressiveness and regulatory alignment. However, the industry must acknowledge the implementation friction reported by product teams. We should support CSAF adoption where necessary without invalidating the use of lighter-weight formats that effectively serve engineering needs.

Clarify VEX’s Purpose

The industry should agree on what VEX is—and what it isn’t. That means clear definitions of exploitability reporting, fix availability, lifecycle status (EOL), and legal considerations. A shared guide or reference paper could help bring this clarity.

Fix the Identifier Problem

Projects like OpenSSF GUAC can lead the way by defining shared identifier-matching libraries that unify PURLs, CPEs, and hashes. Reliable identifiers are the foundation of reliable automation.

Strengthen Cross-Industry Collaboration

The OpenVEX SIG under OpenSSF currently serves as a home for this collaboration. However, driving generic VEX improvements across the ecosystem may require a shift in structure or branding. To effectively signal a format-neutral mission, the industry needs a forum explicitly dedicated to broader VEX interoperability (focusing on transport and discovery protocols) rather than one operating under a specific specification’s banner.

Lead by Doing

The fastest way to make VEX real is to use it.

Large vendors should start publishing VEX for their own products.

Consumers should ask vendors for VEX data.

Open-source maintainers should engage with the community to find the tools and support they need.

Future Directions

The next chapter of VEX will be written not in standards bodies but in the build systems, scanners, and repositories that people use every day.

CSAF 2.1 Adoption: In the coming year, we should focus on implementing the CSAF 2.1 specification to leverage its flexible identifiers and integration with modern scoring systems like the Exploit Prediction Scoring System (EPSS), ensuring these features translate into actual risk reduction.

Federated Discovery: OCI registries and projects like Aquasec’s VEX Hub point toward a future of distributed but trusted VEX sharing.

Integration with CI/CD: The industry objective should be to normalize VEX generation within standard release pipelines. We should move beyond isolated success stories to a state where automated VEX production is a default capability in major build systems, independent of the specific format used.

Regulatory Momentum: The Cyber Resilience Act and similar efforts will turn VEX from a “nice to have” into a key enabler for compliance.

Acknowledgements

This work is the result of many conversations, generous expertise, and the steady patience of people who care deeply about making software safer for everyone. The authors extend our sincere thanks to the individuals who contributed their time, insight, critiques, and lived experience. Their perspectives shaped this paper in ways both visible and quiet.

Authors (listed alphabetically): Aubrey Olandt (Red Hat), Brandon Lum (Google), Charl de Nysschen (Google), Christoph Plutte (Ericsson), Georg Kunz (Ericsson), Jonathan Douglas (Microsoft), Jautau “Jay” White (Microsoft), Martin Prpič (Red Hat), Rao Lakkakula (Microsoft)

Contributors (listed alphabetically): Adolfo Garcia Veytia (Carabiner Systems), Alex Goodman (Anchore), Brad Bock (Chainguard), Dario Ciccarone (Cisco), Itay Shakury (Aquasec), James Fuller (Red Hat), Jens Reimann (Red Hat), Johannes Segitz (OpenSUSE), Lisa Olson (Microsoft), Lucas Kanashiro (Freexian), Marcus Meissner (OpenSUSE), Omar Santos (Cisco), Philippe Deslauriers (Chainguard), Rex Pan (Google), Samuel Henrique (Debian), Santiago Ruano Rincón (Freexian), Suresh Goacher (Amazon), Teppei Fukuda (Aquasec), Thomas Schmidt (German BSI), Yousef Alowayed (Google).

The post Signal in the Noise: An Industry-Wide Perspective on the State of VEX appeared first on OASIS Open.

Thursday, 18. December 2025

Digital Identity NZ

Meri Kirihimete me te Tau Hou Hari

As we approach the business end of the year, the hard mahi is ramping up more than anything. The strength of the fabric of our DINZ community can be seen right across the ecosystem from the Reference Architecture working groups expertly facilitated by Christopher Goh, to the heavy lifting being undertaken by so many government and industry practitioners.

Kia ora,

What a year! As 2025 draws to a close, we want to extend our heartfelt thank you to every member of the Digital Identity NZ community, our government partners, industry supporters, and especially our hosts Air New Zealand and friends at IATA who helped us celebrate all we’ve achieved together last week in Tāmaki Makaurau. 

Your mahi has powered real momentum — from trusted frameworks and inclusive working groups to practical progress in credentialing use cases across both public and private sectors. This collective effort is what’s driving Aotearoa towards a future where digital identity enabled by verifiable attestations isn’t just a concept, but a tool people use every day with confidence.

Thanks to everyone who joined us for our Annual Meeting on 4 December. The presentation is available here, including an introduction to our new Executive Council.

Highlights from the past month

Government app & digital wallet progress
The Government Chief Digital Officer and Department of Internal Affairs have been advancing a new all-of-government app that will securely host digital credentials — signalling a major step toward seamless digital interaction with public services. Initial functionality and user research are underway, with staged rollout expected into 2026. Learn more at Digital.govt.nz

Infinite possibilities for credentials
This year saw ongoing trust framework work, partners preparing real world credential solutions, and momentum building ahead of broad ecosystem adoption. With updated Digital Identity Services Trust Framework rules and expanding credential support, the foundation for digital transformation is a work in progress, as highlighted by community feedback on the recent research released by the DIA.

From a Te Ao Māori perspective, digital credential ecosystems must be built on trust, choice, and genuine Māori governance. Without co-design, respect for data sovereignty, and alignment with tikanga Māori, digital identity risks reinforcing exclusion rather than enabling rangatiratanga.

— Dr Karaitiana Taiuru, Māori data and digital governance specialist

Partnerships that matter
We celebrated with industry peers and honoured collaboration across sectors at our end-of-year event on 11 December — particularly with Air New Zealand and the International Air Transport Association (IATA), underscoring the global importance of digital identity standards and interoperability. 

Thank you to all our speakers — especially Janelle Riki-Waaka, who MC’ed the event with grace, warmth, and a deep sense of manaakitanga.

We were indeed fortunate to hear from Dr Samir Saran who highlighted the business case for digital public infrastructure from a Digital India perspective, including accelerated GDP and asset value growth (from 7 to 17% per annum). Samir also shared the progress of the Gates Foundation backed Modular Open Source Identity Platform (MOSIP) — an open-source foundational digital identity platform. The MOSIP centralised identity has been adopted by 160 million global citizens outside India, demonstrating the export potential of our open and decentralised approach to trust infrastructure.

Government digital leadership & future structure
The public service continues its digital transformation with structural evolution under the Public Service Commission, including integration of Government Chief Digital Officer functions into a new Government Digital Delivery Agency — all designed to deliver smarter, simpler services for all New Zealanders. Learn more here.

Looking ahead — the year of the credential (2026)

2026 is shaping up to be a defining year. With foundational work complete and adoption readiness accelerating, we’ll be turning plans into real, everyday experiences — from digital driver licences to secure wallets and beyond. Your continued engagement, innovation, and leadership will be vital.

Take a break, recharge & return ready

As calendars turn and we head into the holidays, we hope you find time to rest, reconnect, and recharge with whānau and friends. Come back refreshed — because 2026 is going to be our most exciting year yet.

Ngā mihi nui — thank you for your partnership, your passion, and your vision for an open, trusted digital future for Aotearoa.

Meri Kirihimete!

Andy Higgs
Executive Director
Digital Identity NZ

Read the full news here: Meri Kirihimete me te Tau Hou Hari

SUBSCRIBE FOR MORE

The post Meri Kirihimete me te Tau Hou Hari appeared first on Digital Identity New Zealand.

Wednesday, 17. December 2025

DIF Blog

DIF Newsletter #56


December 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Group Updates
4. User Group Updates
5. Announcements
6. Community Events
7. Get involved! Join DIF

🏛️ Decentralized Identity Foundation News From the Executive Director

This is my final DIF newsletter as Executive Director, and I want to take a moment to reflect on what this community accomplished over the course of 2025.

This year marked a shift in how DIF showed up in the digital identity ecosystem. Across multiple working groups, we increasingly focused on clarifying, validating, and applying identity infrastructure in contexts where people, organizations, devices, and -- increasingly -- AI systems intersect. That focus showed up in different ways: requirements for fine-grained, revocable delegation; secure messaging in constrained environments; greater rigor around DID methods and their operational properties; and domain-focused work in content authenticity and travel, where identity systems must operate under demanding constraints.

What mattered most to me was the shared commitment across our groups to deployability without giving up on principles. Privacy, user control, and interoperability stayed central to our technical decisions, even when tradeoffs were involved. That balance is difficult to maintain, and this community approached it thoughtfully and with care.

This year also highlighted the importance of principled technical voices in broader identity discussions. Through efforts such as the No Phone Home campaign, DIF helped surface concrete privacy and architectural concerns in emerging digital identity systems, contributing a technical perspective that aligned with the work of organizations such as the ACLU and EFF. Our role was helping ensure that questions of user control, data minimization, and unintended centralization remain part of the conversation.

I’m deeply grateful to the Steering Committee, Technical Steering Committee, group chairs, spec editors, implementers, and contributors who made this work possible, often quietly and without fanfare. DIF is in a stronger and more focused place than when I started, with a clearer sense of where it can lead and where it can add the most value.

As I pass leadership to Grace, I’m excited for what comes next and grateful for the trust you’ve placed in me over the years. I’ll be cheering DIF on from the sidelines.

Welcoming Grace Rachmany as DIF's New Executive Director

We're thrilled to announce that Grace Rachmany has joined the Decentralized Identity Foundation as our new Executive Director.

Grace brings deep experience in decentralized governance and community building, with a track record of helping technical ecosystems clarify their purpose and make impact. She joins DIF at a moment when governments, enterprises, and identity visionaries are all making different bets on the future of digital identity. Grace’s ability to navigate that complexity, while keeping communities aligned around shared principles, makes her a strong fit for this next chapter.

Please join us in welcoming Grace to the DIF community. You can connect with her on LinkedIn or reach out via the DIF Slack workspace.

🛠️ Working Group Updates

Browse our working groups here.

Below are highlights from November 2025 working group activity.

Trusted AI Agents Working Group

In November, the Trusted AI Agents Working Group continued refining the Agentic Authority Use Cases work item, with discussions centered on how authority, delegation, and accountability can be expressed when agents act on behalf of people or organizations.

Recent conversations focused on concrete scenarios — exploring how agents might authenticate, present credentials, and operate within clearly scoped boundaries. The group also discussed where existing DID and VC building blocks are sufficient, and where new patterns may be needed to support agent-to-agent interactions without eroding human control.

👉 Learn more and get involved

Hospitality & Travel Working Group

November meetings in the Hospitality & Travel WG focused on traveler profile schemas and the operational realities of deploying them across different regions and systems.

The group continued work on structured, consent-driven profiles — covering preferences, accessibility needs, and multilingual considerations — while examining how these profiles can be used by both human-facing systems and AI-assisted services.

👉 Learn more and get involved

Identifiers and Discovery Working Group

In November, the Identifiers and Discovery WG continued advancing the DID Traits work item, focusing on how traits can help implementers and relying parties reason about DID method properties in a consistent way.

👉 Learn more and get involved

DID Methods Working Group

In a recent email, Chair Jonathan Rayback reminded the group of its significant accomplishments this year:

Drafted a charter for the DID Methods Working Group at W3C

Defined the DIF Recommended DID Method process

Formally recommended the first DID method

The formal review period for did:webplus began on 3 December 2025. Community review during the current evaluation period is strongly encouraged.

👉 Learn more and get involved

DIDComm Working Group

November discussions in the DIDComm WG focused on practical deployment considerations, including routing models, mediation, and interoperability challenges observed in real deployments.

👉 Learn more and get involved

Creator Assertions Working Group

In November, the Creator Assertions Working Group continued work on content authenticity and provenance, including how assertions may be consumed by automated systems and agents in the future. The CAWG group reached WG approval status for two of its specs, which are nearing DIF Ratified status.

👉 Learn more and get involved

🌎 Special Interest Group Updates

Browse our special interest groups here.

Hospitality & Travel SIG

In November, the Hospitality & Travel SIG continued to serve as a forum for cross-industry knowledge sharing, reinforcing themes such as multilingual traveler profiles and accessibility considerations.

👉 Learn more and get involved

📖 User Group Updates

DIDComm User Group

In November, the DIDComm User Group continued its focus on practical implementation experience, sharing lessons from deployments and interoperability testing.

👉 Learn more and get involved

📢 Announcements

Year-End Meeting Schedule

Many DIF working groups adjust their schedules in late December and early January. Please check your group’s calendar and mailing list for details.

Call for Participation: Early 2026 Work Items

Trusted AI Agents WG: additional agent-related use cases

Claims & Credentials WG: continued community schema work

Identifiers and Discovery WG: DID Traits implementation and testing

🗓️ Community Events

Internet Identity Workshop IIWXLII #42

📅 April 28–30, 2026
📍 Mountain View, CA
Registration and details

Agentic Internet Workshop #2

📅 May 1, 2026
📍 Mountain View, CA
Learn more

Identiverse 2026

📅 June 15–18, 2026
📍 Las Vegas, NV
Conference details

Identity Week Europe 2026

📅 June 9–10, 2026
📍 Amsterdam
Event information

Authenticate Conference 2026

📅 October 19–21, 2026
📍 Carlsbad, CA
Details coming soon

🚀 Get involved!

Join DIF

The Decentralized Identity Foundation is a community-driven organization. Join a working group, contribute to open source, attend events, or become a member to help shape the future of decentralized identity.

Visit identity.foundation/join to learn more.


Hyperledger Foundation

Building the foundations for a decentralized world: A decade of community, code, and market development at the Linux Foundation


Ten years ago today, the Linux Foundation launched its first blockchain-related project, the Hyperledger Project, a collaborative effort to “advance popular blockchain technology.” At the time, blockchain was largely viewed in the enterprise as an experimental technology. It was  promising in theory, but unproven at scale, raising questions about security, performance, governance, and regulatory fit. It was often also conflated with cryptocurrency, which many institutions associated with volatility, regulatory uncertainty, and speculative use cases. 


Next Level Supply Chain Podcast with GS1

How Better Metrics Help Small Businesses Operate Like Pros


Resilience isn't just for small startups—it's vital for businesses of all sizes.

In this episode, Jonathan Biddle, author of Supply Chain for Startups, joins Reid Jackson to discuss how companies can build resilient supply chains using key metrics and smart strategies. Jonathan explains how early decisions about structure, visibility, and supplier engagement can set you up for long-term success.

This conversation offers a clear view of what it takes to build supply chain operations that can adapt as the business grows.

In this episode, you'll learn:

How process mapping uncovers weak points that limit supply chain stability

Why consistent supplier communication strengthens visibility and reduces risk

The early operational signals that indicate it's time to upgrade systems

Things to listen for:

(00:00) Introducing Next Level Supply Chain
(04:07) Building resilience for early-stage supply chains
(06:37) Why supplier insight matters for managing risk
(09:41) Daily and weekly habits that improve operations
(14:36) Signals that current processes are no longer scalable
(20:06) Tools that support growing operations
(25:11) How AI can help small supply chain teams
(30:21) Jonathan's favorite tech

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn

This episode is brought to you by:

Aarongraphics and Wholechain

If you're interested in becoming or working with a GS1 US solution partner, please connect with us on LinkedIn or on our website.

Connect with the guests: Jonathan Biddle on LinkedIn | Check out Jonathan's book, Supply Chain for Startups

Tuesday, 16. December 2025

Hyperledger Foundation

2025 Community Awards: Recognizing Contributions from Across the LF Decentralized Trust Community

As part of our celebration of a decade of building better together, this year we are giving out 10 community recognition awards. The winners have contributed in a variety of different but important ways. 


Friday, 12. December 2025

FIDO Alliance

Mobile ID World: FIDO Alliance Sharpens Passkey Trust With New Metadata Service Rules


The FIDO Alliance is tightening how relying parties evaluate passkeys and other FIDO authenticators, rolling out new versions of its Metadata Service (MDS) and a streamlined Convenience Metadata Service aimed at making it easier to separate trustworthy authenticators from outdated or non-compliant devices. The update is pitched as a way to raise assurance levels for passkey deployments without sacrificing user experience across mobile and desktop platforms.


Biometric Update: FIDO Alliance broadens scope with new digital credentials work, deployments


The FIDO Alliance is leveling up. Several announcements show the passwordless-focused organization evolving, as it expands beyond its initial push for passkeys to engage with the wider identity ecosystem.

After dropping hints on the Biometric Update Podcast, the FIDO Alliance today announced the launch of a new digital credentials initiative, to be carried out by a new Digital Credentials Working Group (DCWG). The company’s announcement calls it an expansion of the FIDO Alliance’s mission to accelerate the adoption of verifiable digital credentials and identity wallets. Work will focus on three foundational workstreams: wallet certification, specification development and usability and relying party enablement.


The Register: Death to one-time text codes: Passkeys are the new hotness in MFA


Whether you’re logging into your bank, health insurance, or even your email, most services today do not live by passwords alone. Now commonplace, multifactor authentication (MFA) requires users to enter a second or third proof of identity. However, not all forms of MFA are created equal, and the one-time passwords orgs send to your phone have holes so big you could drive a truck through them.


Digital ID for Canadians

Statement on Canada-EU Digital Credentials and Trust Services MOU: International Alignment Benefits From Domestic Coordination


December 12, 2025

DIACC welcomes Canada and the European Union’s commitment to collaborate on digital credentials and trust services, formalized through the December 8 memorandum of understanding. International alignment matters deeply—for Canadian economic competitiveness, for secure cross-border transactions, and for ensuring our citizens can participate fully in the global digital economy.

This announcement comes after more than a decade of DIACC advocacy for precisely this kind of strategic partnership. The urgent question now is how quickly this international momentum can catalyze the domestic coordination needed to put economic growth and Canadian competitiveness at the centre, while ensuring privacy and security are foundational to design. 

Canadians and their businesses need interoperable digital public and private infrastructure working for them at home now. Every day of delay costs our economy opportunity, competitiveness, and the trust dividend that secure, privacy-respecting verification systems deliver.

Domestic Coordination Enables International Opportunity

Mutual Recognition at Home
Canadians must experience mutual recognition across our own borders with urgency. A business credential recognized in Ontario must work in British Columbia. A professional verification issued in Quebec must be valid for Alberta workers. A digital credential from Nova Scotia must enable service access in Saskatchewan.

Quebec’s recent adoption of Bill 82 demonstrates provincial leadership in digital identity legislation. British Columbia’s Connected Services initiative, built on the Service Card, demonstrates jurisdictional innovation in action. These achievements are significant, and they underscore the urgent need for interprovincial mutual recognition that respects jurisdictional sovereignty while enabling seamless digital trust across Canada.

Economic Imperative Spans All Sectors
Digital credentials must work seamlessly across both public and private sectors, respecting both jurisdictional authority and market needs. Canadian businesses, especially small and medium enterprises, need trusted digital verification capabilities that reduce friction, prevent fraud, and enable growth regardless of which jurisdiction issues or validates credentials.

Implementation must explicitly address how banking, telecommunications, healthcare, professional credentialing, and supply chain sectors can participate. These are multi-jurisdictional challenges requiring coordinated solutions, not top-down mandates.

Federal Collaboration
The federal government has specific authorities in international trade, border management, federal services, and specific regulatory domains. Within this scope, federal action matters enormously, particularly in negotiating mutual recognition agreements that open international markets for Canadian credentials and businesses.

Equally important: the federal government can convene, facilitate, and invest in tools that enable coordination without dictating implementation. The DIACC’s public and private sector Pan-Canadian Trust Framework offers exactly this approach—a consensus-based framework that provinces, territories, Indigenous governments, and private sector participants can adopt voluntarily now while maintaining their respective authorities.

What Canadians and Their Businesses Need Now

For international alignment to deliver tangible benefits, Canada’s jurisdictions and sectors must demonstrate:

Interprovincial and sectoral interoperability commitments that make credentials portable across Canadian borders
Multi-stakeholder governance models where provinces, territories, Indigenous governments, industry sectors, and federal authorities coordinate as partners, not hierarchies
Standards adoption that leverages existing frameworks like the PCTF to reduce regulatory fragmentation while respecting jurisdictional sovereignty
Economic impact focus showing how mutual recognition, domestic and international, creates opportunities for Canadian businesses, workers, and communities
Transparency and concrete implementation timelines with clear accountability distributed across appropriate authorities

Impactful Progress Happens Through Coordination

Since 2012, DIACC has advocated for a digital trust infrastructure that prioritizes economic growth and respects Canada’s federal structure while enabling seamless verification capabilities across jurisdictions and sectors. Progress is happening: Quebec’s new legislation, BC’s service transformation, Ontario’s legal sector achievements with 700,000+ digital verifications, and growing private sector adoption all demonstrate momentum.

What’s needed now is coordination mechanisms that connect these provincial initiatives, enable interprovincial recognition, align with Indigenous data sovereignty principles, and position Canadian credentials for international mutual recognition. The federal government’s international agreements, like this MOU, create valuable opportunities. Domestic coordination makes those opportunities accessible to Canadians everywhere.

This MOU represents progress toward international alignment. The more complex work remains: achieving the domestic interoperability that makes international mutual recognition practically valuable. Every jurisdiction has a role. Every sector has expertise to contribute. Every delay in coordination represents lost economic opportunity and continued inefficiency across both government and commercial services.

Canada has world-class expertise, proven frameworks like the PCTF, provincial leadership in implementation, and strong private-sector innovation in digital trust services. We have the components needed for success. What we need is a sustained, coordinated commitment across jurisdictions and sectors to make these components interoperable and to ensure all Canadians and businesses can benefit.

DIACC stands ready to support coordination, as Canada’s longest-standing, largest, and most diverse forum focused solely on digital trust and verification. We will continue to contribute through our expertise, our membership ecosystem spanning public and private sectors across all regions, and our commitment to advancing digital trust that serves Canadians in all aspects of their lives—public, private, and economic.

Joni Brennan
President, DIACC


EdgeSecure

Dr. Forough Ghahramani Delivers Talk Dedicated to Empowering Research and Education through Advanced & Emerging Technologies at “New Horizons for AI and Data Science Symposium” at Rutgers University


NEWARK, NEW JERSEY, December 08, 2025 – Rutgers University convened academic leaders, along with members from industry and government, on December 8, 2025, for "New Horizons for AI and Data Science," a comprehensive symposium exploring the future of artificial intelligence and data science innovation at the university. The event, held at Express Newark on the Rutgers-Newark campus, showcased four key Rutgers University Roadmap Initiatives: the Center for Biomedical Informatics and Health Artificial Intelligence (BMIHAI) at Rutgers Health, the Rutgers Artificial Intelligence and Data Science (RAD) Collaboratory at Rutgers-New Brunswick, the Institute for Data, Research, and Innovation Sciences (IDRIS) at Rutgers-Newark, and Prevention Science at Rutgers-Camden.

Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs for Edge and Vice President for Technology at the New Jersey Big Data Alliance, was an invited speaker for the Stakeholder talk titled "Empowering Research and Education through Advanced & Emerging Technologies: A New Jersey Perspective."

"I was delighted to be invited to present at this important symposium and share insights on the growing momentum around advanced technologies, including AI, quantum computing, high-performance computing, and the national resources that are broadening participation in these transformative technologies," said Dr. Ghahramani. Dr. Ghahramani emphasized the important role of organizations such as Edge, New Jersey Big Data Alliance, and NJ AI Hub in facilitating collaborations and shared infrastructure to support cutting-edge research and education initiatives across New Jersey.

The day-long event featured United States Senator Andy Kim, who delivered an inspiring speech focused on Building the Einstein Corridor in New Jersey. Multiple panel discussions explored Rutgers’ AI and data science ecosystem innovation strategy. Thank you to Dr. Stephen K. Burley, Director of the Rutgers Artificial Intelligence and Data Science (RAD) Collaboratory and President of the New Jersey Big Data Alliance, Dr. Fay Cobb Payton, Executive Director of the Institute of Data Research and Innovation Science (IDRIS), and Dr. Leslie Lenert, Director of the Center for Biomedical Informatics and Health Artificial Intelligence (BMIHAI), who served as event organizers and hosts.

Provosts and research leaders from Rutgers-Camden, Rutgers-Newark, Rutgers-New Brunswick, and Rutgers Health participated in roundtable discussions examining opportunities for cross-campus synergies and collective impact. The symposium also featured a panel of postdoctoral fellows and Ph.D. candidates showcasing emerging research talent across the university.

The breakout session brought together community stakeholders to address critical questions about building a robust AI and data science ecosystem, including infrastructure needs, sustainability, and the role of interdisciplinary collaboration.

The Roadmaps to Collective Academic Excellence initiative is supported jointly by the Office of the Executive Vice President for Academic Affairs and by the four Rutgers Chancellors.

About Edge

Edge is a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge's membership spans across the nation, serving colleges and universities, K-12 school districts, government entities, hospital networks, and nonprofit business entities. Edge's common good mission ensures success by empowering members for digital transformation with affordable, reliable, and thought-leading purpose-built advanced connectivity, technologies, and services.

For more information, visit www.njedge.net.

The post Dr. Forough Ghahramani Delivers Talk Dedicated to Empowering Research and Education through Advanced & Emerging Technologies at “New Horizons for AI and Data Science Symposium” at Rutgers University appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Thursday, 11. December 2025

Digital ID for Canadians

The DIACC 2025 Annual Report

2025 marked a turning point: digital trust moved from concept to operational reality. Three developments proved DIACC’s value as a neutral convener:

1. Scalable Evidence

Canada’s legal sector processed 700,000+ client IDV transactions, proving digital trust works at enterprise scale in highly regulated environments. This isn’t pilot data – it’s production, and it’s positioned Canada as a global early adopter.

2. Certification Maturity

Treefort and FCT achieved PCTF Verified Person certification. Outlier became Canada’s first DIACC-accredited auditor, expanding ecosystem capacity. The PCTF Legal Professionals Profile transformed practices into auditable standards.

3. Policy Influence

We submitted comprehensive Federal Budget recommendations, shaped AI Strategy consultations, and influenced Canada-EU Digital Trade discussions. Quebec’s Bill 82 and BC’s Connected Services initiative demonstrate provincial leadership aligned with DIACC principles.

The Urgency is Clear

AI-generated fraud and misinformation threaten economic stability. DIACC’s work has never been more critical. Our new Canadian Digital Trust Adoption Dashboard provides unprecedented transparency into provincial programs. Our partnership with the SIROS Foundation positions Canada to leverage credentials for cross-border labour mobility.

These accomplishments were possible through your expertise, dedication, and collaborative spirit. As we look ahead, the gap between leading jurisdictions and emerging markets creates pathways for growth. The global identity verification market is growing at a 16.7% CAGR.

Canadian providers are positioned to capture this opportunity—if we maintain momentum.

Download the report here.

DIACC-Annual-Impact-Report-2025

Hyperledger Foundation

ToIP Announces Public Review 02 of the Trust Registry Query Protocol (TRQP) Specification V2.0

The Trust Registry Task Force (TRTF) at Trust Over IP (ToIP) is pleased to announce Public Review 02 (PR02) of the Trust Registry Query Protocol Specification V2.0.


FIDO Alliance

Recap: FIDO Taipei Seminar 2025 – Welcome to Passkey World

On December 2nd, 2025, the digital identity community gathered in Taipei for the FIDO Taipei Seminar 2025. Under the theme “Welcome to Passkey World,” the event brought together around 300 CISOs, business leaders, government officials, and identity architects to discuss the accelerating global shift away from passwords and the rapid adoption of phishing-resistant authentication across the Asia-Pacific region.

Setting the Stage: Global Momentum, Local Leadership

The seminar kicked off with a strong message on the state of the industry. Karen Chang, Chair of the FIDO Taiwan Regional Engagement Forum, and Andrew Shikiar, CEO & Executive Director of the FIDO Alliance, opened the day by framing the global success of passkeys.

Andrew Shikiar shared updated metrics on the global adoption of FIDO standards—noting that billions of user accounts are now secured by passkeys—while emphasizing that the technology has moved from “early adoption” to “mainstream deployment.” Karen Chang highlighted the region’s critical role in this ecosystem, detailing how local industries and government bodies are integrating these standards to build a more resilient digital infrastructure.

Keynote: AI, Identity, and Digital Trust

No technology conversation in 2025 is complete without addressing Artificial Intelligence. Dr. Yennun Huang, Distinguished Research Fellow at Academia Sinica and former Minister of Digital Affairs, delivered a compelling keynote titled “AI, Identity, and Digital Trust.”

Dr. Huang bridged the gap between policy and technology, warning that as AI tools reshape the threat landscape, traditional authentication methods are becoming obsolete. He argued that phishing-resistant authentication is no longer just a security feature but a foundational requirement for establishing trust in the AI era.

From the Trenches: Deployments, Strategies, and Future Tech

The sessions then shifted focus to execution, featuring a chronological lineup of industry leaders sharing insights on platforms, deployments, and certification.

Google: The Google team, represented by Niharika Arora and Eiji Kitamura, demonstrated the latest platform enhancements designed to smooth the implementation path for developers.

Keypasco: As the Host Sponsor of the event, Hsin-Yi Lin, General Manager, spoke on “The Passkey Era: Embrace Passwordless Transformation,” offering enterprises a roadmap to go passwordless without disrupting existing workflows.

Mercari: Naohisa Ichihara, CISO of Mercari, provided a view into the e-commerce sector, explaining how FIDO standards are helping the platform reduce fraud rates while keeping checkout flows seamless.

OneSpan: Koh Gim Leng explored “Augmenting Passkey for Different Use Cases,” discussing how to tailor authentication experiences to fit diverse security requirements and user behaviors.

FIME: James Daniels highlighted the “Value of FIDO Certification,” emphasizing how rigorous testing and certification are essential for ensuring global interoperability and trust in authentication products.

HID: Edwardcher Monreal presented “The Passkey Playbook,” outlining a phased approach that allows organizations to transition from legacy credentials to passkeys at a pace that suits their infrastructure.

TikTok: Yan Cao, Engineering Leader at TikTok, shared a fascinating case study on rolling out passkeys to hundreds of millions of users globally, proving that robust security does not have to come at the expense of user experience.

Jmem Technology: Shifting the focus to hardware, John Chang discussed “Building Secure Chips for the Quantum Era,” highlighting the intersection of Post-Quantum Cryptography (PQC) and trusted edge AIoT integration.

Innovation at the Edge: IoT and Zero Trust

The seminar concluded its technical tracks by exploring how authentication standards are securing the Internet of Things (IoT) and edge computing.

A standout moment was the presentation by Simon Trac Do, CEO & Founder of VinCSS. He introduced a “creative combination” of FIDO authentication and FIDO Device Onboard (FDO) standards, demonstrating how fusing these technologies creates a comprehensive Zero Trust Network Access (ZTNA) solution that secures both user identity and device integrity in the IoT era.

Meanwhile, Doris Liu from ASRock Industrial shifted the focus to the hardware foundation of intelligent systems. In her session on pioneering secure Edge AI, she outlined how ASRock is leveraging FDO deployment to build trusted devices, offering a robust, one-stop solution for the burgeoning Edge AI market.

Panel Discussion: The Road Ahead

The day concluded with a dynamic panel discussion moderated by Megan Shamas, CMO of the FIDO Alliance. Panelists, including Koichi Moriyama (NTT DOCOMO, FIDO Executive Council Member, FJWG Chair), Paul Liu (Keypasco), Jiunn-Shiow Lin (Ministry of Digital Affairs), Da-Yu Kao (National Chengchi University), and Niharika Arora (FIDO India Working Group Chair), explored the future of identity.

The conversation reinforced a clear consensus: the standards are mature, the technology is ready, and the focus must now shift to optimizing usability and broadening adoption across all sectors.

Looking Forward

The FIDO Taipei Seminar 2025 was a testament to the strength and collaboration of the APAC identity community. As we move into 2026, the partnership between government, industry, and standards bodies will be the key to finally eliminating the password for good.

A special acknowledgment goes to our Host Sponsor, Keypasco, and other sponsors for their generous support in making this event possible, as well as to all our speakers and attendees. We look forward to seeing you at our next event!

Wednesday, 10. December 2025

Digital ID for Canadians

Spotlight on EEZE

1. What is the mission and vision of EEZE?

Mission:
EEZE is dedicated to helping the automotive industry prevent and deter fraud and identity theft, protecting dealerships, lenders, and their valued customers.

Vision:
Our vision is to create a trusted automotive ecosystem where secure, worry-free transactions are the standard, empowering businesses and consumers alike.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Digital trust and identity verification are critical because identity fraud is becoming increasingly sophisticated and pervasive. Criminals are now leveraging state-of-the-art technologies, including AI, deepfakes, and automated bots, to manipulate personal data, create synthetic identities, and bypass traditional security measures. In both existing and emerging markets, this threatens consumers, businesses, and financial institutions by facilitating fraud, financial loss, and erosion of trust. Robust digital identity verification is essential to ensure that individuals and organizations can transact securely, prevent fraud, and maintain confidence in the digital economy. Moreover, verification platforms should give consumers clear control and trust over how their data is shared and stored.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital trust and identity verification are transforming the Canadian and global economy by enabling secure, transparent transactions and reducing fraud. In an ever-evolving world, where fraudsters have access to the latest AI and sophisticated technologies, it is critical to have the right checks and balances. Organizations like EEZE must continuously adapt to emerging threats, ensuring robust verification and protection for consumers, dealers, and lenders alike.

EEZE tackles this by continuously enhancing our system with additional layers of identity verification, validating individuals, vehicles, and transactions to protect consumers, dealers, and lenders from fraud and identity theft. Customers using EEZE have clear control over how their data is shared and stored. By building trust into every transaction, we help create a safer, more efficient digital economy.

4. What role does Canada have to play as a leader in this space?

Canada can lead in digital trust and identity verification by setting high standards for security, privacy, and innovation. By supporting organizations like EEZE and promoting robust verification practices, Canada can reduce fraud, build global confidence in digital transactions, and serve as a model for secure digital identity solutions worldwide.

5. Why did your organization join the DIACC?

EEZE joined DIACC because a governing organization like DIACC provides a platform for collaboration across the industry. By bringing together vendors, competitors, and stakeholders as a collective braintrust, DIACC fosters the development of innovative solutions that enhance security, strengthen digital trust, and combat fraud on an industry-wide scale.

6. What else should we know about your organization?

EEZE is a hyper-focused, customizable platform tailored for the automotive industry. We envision a solution where all vendors in this space can collaborate to stay ahead of identity theft and fraud by securely sharing information through a centralized “Citadel.” This is a project we aim to launch in late 2026, and we believe DIACC and its members could greatly benefit from participating.


FIDO Alliance

Passkeys Week 2025: The Resources, Talks, and Success Stories

In November we took part in Passkeys Week, an industry-wide campaign to accelerate the adoption of passkeys and encourage developers to build passkey support into their apps, websites, and authentication products.

Throughout the week, we released early selections of talks and presentations from our flagship Authenticate 2025 event, shared resources, highlighted passkey success stories from industry leaders, and hosted a live AMA webinar.

In case you missed any of the action on social media, we’ve rounded up everything we shared to help promote the work of those leading the way with passkey deployments and to support everyone on their passkey journey.

Early Access: Authenticate 2025 Presentations

We released early access to select presentations from Authenticate 2025, our flagship conference held in October. These presentations showcase how leading organizations are deploying passkeys at scale and achieving measurable results. These talks are all available to watch on our YouTube channel.

Apple: Ricky Mondello shared insights on how to “Get the Most out of Passkeys.”
Google: Chirag Desai and Rohey Livine discussed “The Future of the User Account Lifecycle.”
TikTok: Cherise Cen, Patrick Liao, and Yingran Xu presented on “Shipping passkeys for hundreds of millions” and shared what they learned during the process.
Roblox: Yuki Bian and Dylan Siegler gave a fascinating look at “Bringing passkeys to all ages,” addressing deployment across a younger demographic.
Uber: Ryan O’Laughlin discussed “Realizing the Full Potential of Passkeys at Uber.”
PayPal: Mahendar Madhavan, Mohit Ganotra, and Walmik Deshpande shared “Learnings and best practices” from their deployment.
DocuSign: Yuheng Huang and Dina Zheng presented on “Modernizing Authentication with True Passwordless.”
Dashlane: We released two sessions from Dashlane. First, Tina Zhuo spoke on “Leveling up phishing resistance” using passkeys, confidential computing, and AI. Second, Rew Islam gave a talk titled “What’s wrong with passkeys?”

Success Stories

We also shone a spotlight on companies that have made progress on their Passkey Pledge – a call to action for organizations to accelerate passkey adoption. Here are just a few of the success stories we shared:

Atlancube: The pledge accelerated their certification timelines, helping them prepare to launch a certified hardware security key.
Dashlane: Integrated FIDO2 security keys to replace the master password with a hardware-backed secret.
First Credit Union: After rolling out passkeys to their 60,000+ members, 54.5% of all authentications now use passkeys.
Glide Identity: Achieved FIDO certification for new products to serve organizations seeking interoperable solutions.
HYPR: Deployed passkeys at scale to Fortune 500 enterprises, including two of the four largest US banks.
LY Corporation: Improved passkey sign-in rates to 41% and reduced SMS transmission costs by replacing OTPs.
NTT DOCOMO: Confident of increasing passkey usage by 10% this year by refining user messaging on enrollment pages.
Secfense: Enabled passkey sign-ins across banking and insurance sectors without modifying legacy applications.
Thales: Extensively promoted the benefits of passwordless to customers through workshops and webinars.

You can read more about these success stories on our website. It’s not too late to take the Pledge; you can find out more here.

Resources

Throughout the week, we pointed to key resources to help those implementing passkeys, including:

Design Guidelines: For consumer use cases, visit PasskeyCentral.org to access the FIDO Alliance Design Guidelines.
Developer Hub: For technical resources brought to you by the W3C WebAuthn Community Adoption Group and FIDO Alliance, visit passkeys.dev.
UX Research: Read our blog, “Beyond the Protocol,” co-authored by Patryk Les (Yubico) and Philip Corriveau (RSA), which highlights the human-centered shift defining the future of workforce security.

New Data

We shared new research from our Passkey Index, a confidential survey of nine FIDO Alliance member organizations—Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target, and TikTok—that have deployed passkeys for 1 to 3 years on eight utilization and performance areas. It shows the adoption and business impact of passkeys from leading service providers. The data reveals that:

93% of accounts are now eligible for passkeys.
36% of accounts are enrolled with a passkey.
26% of all sign-ins now leverage passkeys.

Read the full Index here.

We also highlighted Dashlane’s new report, which offers a one-of-a-kind look at the apps leading the move to passwordless across consumer and enterprise environments globally. You can read the report here.

The Passkeys AMA

To wrap up the educational aspect of the week, we hosted a live, interactive Ask Us Anything (AMA) session. With speakers from Dashlane, FIDO Alliance, Google, and Okta, this webinar was the perfect chance to bring questions about passkey implementation, UX, security, standards, and ecosystem adoption directly to the experts shaping the industry. If you missed the live session, you can still watch it here.

Tuesday, 09. December 2025

FIDO Alliance

Dark Reading: Enterprise FIDO Authentication: An Easy, 3-Step Plan

Enterprise passkey adoption has reached a tipping point. According to new data from HID and the FIDO Alliance, two-thirds of executives believe that passkey deployment is a high or critical priority, and 87% have either successfully deployed or are currently deploying passkeys.

The challenge? Often, it’s the very first step. 


Energy Web

Alpha Launch: Liquid Staking and Verified Compute Cloud on Energy Web X

Today’s Alpha Launch marks the first live deployment of Energy Web’s Verified Compute Cloud (VCC) on Energy Web X, leveraging the blockchain platform’s advanced capabilities, including the newly introduced EWX liquid staking. This integral solution enables EWT holders, participating as stakers and/or VCC compute node operators, to be compensated for serving sustainability markets via protocol-level VCC solution service fees. The first commercial VCC pilot on EWX facilitates a decentralised, multiparty validation of public Sustainable Aviation Fuel Certificate (SAFc) data on the SAFc Registry which can be tracked in the explorer.

Liquid staking is one of the key enablers of this digital service, allowing EWT holders to stake their tokens without removing them from circulation. They deposit EWT into a pooled nominator and receive stEWT, a liquid representation of their stake. This stEWT can simultaneously contribute to EWX network security and be re-staked to back the accountability requirements of a VCC solution. Importantly, pooling removes the barrier for smaller actors who do not have the capacity or do not wish to manage infrastructure to engage in on-chain network activity. Moreover, any received rewards are automatically restaked, increasing the stEWT:EWT exchange rate over time without minting new tokens (exchange-rate changes reflect protocol mechanics and service-fee distribution, not guaranteed growth or financial return). This simplifies the staking process for users, removes the need to manage delegations and restake rewards while compounding utility, safeguarding network security and avoiding excessive concentration of stake. Slashing is an important part of this process as it prevents malperformance.
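The exchange-rate mechanic described above can be sketched as a toy pool model. This is a hypothetical simplification for illustration only, not Energy Web's actual contract logic: names, arithmetic, and the absence of fees/slashing are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class LiquidStakingPool:
    """Toy liquid-staking pool (hypothetical; omits fees, slashing, and unbonding)."""
    total_ewt: float = 0.0    # EWT held by the pool
    total_stewt: float = 0.0  # liquid stEWT tokens issued

    def exchange_rate(self) -> float:
        # EWT per stEWT; starts at 1:1 when the pool is empty
        return self.total_ewt / self.total_stewt if self.total_stewt else 1.0

    def deposit(self, ewt: float) -> float:
        # Mint stEWT at the current rate so existing holders are not diluted
        minted = ewt / self.exchange_rate()
        self.total_ewt += ewt
        self.total_stewt += minted
        return minted

    def accrue_rewards(self, ewt_rewards: float) -> None:
        # Rewards are restaked into the pool: no new stEWT is minted,
        # so the stEWT:EWT exchange rate rises instead
        self.total_ewt += ewt_rewards

pool = LiquidStakingPool()
alice_stewt = pool.deposit(1000.0)  # 1000 stEWT at the initial 1:1 rate
pool.accrue_rewards(100.0)          # rate rises to 1.1 EWT per stEWT
bob_stewt = pool.deposit(1100.0)    # ~1000 stEWT minted at the 1.1 rate
```

The key design point the post describes is visible here: rewards raise `total_ewt` without touching `total_stewt`, so every holder's claim appreciates through the exchange rate rather than through new token issuance.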

Verified Compute Cloud EWX Pilot Application in Partnership with SAFc Registry

Verified Compute Cloud is Energy Web’s innovative off‑chain business logic computation service with on-chain finality, supporting verification, automation and auditability for sustainable, mission-critical enterprise solutions. The VCC Alpha Launch introduces the first live Verified Compute Cloud pilot on the EWX network, conducted in partnership with the SAFc Registry. The SAFc Registry, founded by three clean tech organisations, the Rocky Mountain Institute, the Environmental Defense Fund and SABA, has been operated by Energy Web since its launch at the December 2023 COP28 climate conference in Dubai. With the VCC model innovation introduced in December 2025, the SAFc Registry’s principal workflows will be continuously audited, embedding data input and process outcome authenticity. EWX network participants engaged in this VCC solution delivery will validate public SAFc Registry data for each retired (officially issued) certificate.

In this process, independent distributed nodes (VCC operators) verify emissions-reduction calculations, check whether certificates were previously claimed, and corroborate whether each retirement meets SAFc classification rules on beneficiary type, claim year, and production and blending dates. Energy Web’s VCC service solidifies confidence in the integrity and accuracy of each SAFc retirement, delivering a higher-quality and more reliable level of oversight than is possible with today’s largely manual, scope- and time-restricted audit processes.

For EWX network stakers, this pilot represents the first opportunity to deploy stEWT to support a live Verified Compute Cloud solution. By staking into the SAFc Verified Compute Cloud solution group, participating EWT holders contribute directly to securing an important sustainability process validation. SAFc VCC solution payments (service fees) are routed on-chain to compensate performant EWX network participants based on their operational and staking contributions, with payments executed in stablecoin (USDC) pursuant to a solution compensation contract over the three-month technical alpha launch period. VCC service fees are expected to increase gradually throughout the period as more certificates are purchased on the SAFc platform, increasing the total available compensation pool. Service fees will also be variable, since the number of certificates purchased via the SAFc Registry varies from month to month. No specific level of compensation is guaranteed, with relevant parameters adjusting based on participation levels, network performance and any future upgrades or modifications related to network operations. Any USDC-denominated service fees received can be held by users in their wallet on the EWX network, or transferred to Polkadot Asset Hub via wallet providers (such as SubWallet) or via extrinsic calls through Polkadot.JS for any further action or exchange.
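
The three checks the article attributes to VCC operators can be sketched as a single verification routine. This is a hypothetical illustration, not the VCC protocol: the field names, the tolerance, and the classification rules (allowed beneficiary types, date ordering) are invented assumptions standing in for the real SAFc rules.

```python
# Hypothetical sketch of the three operator checks described above:
# recompute the emissions reduction, detect double-claiming, and apply
# classification rules. Field names, tolerance, and rules are invented.
from dataclasses import dataclass

@dataclass
class Retirement:
    cert_id: str
    claimed_reduction_t: float   # tonnes CO2e claimed
    fuel_volume_l: float
    reduction_factor: float      # tonnes CO2e avoided per litre (assumed)
    beneficiary_type: str
    claim_year: int
    production_year: int

ALREADY_CLAIMED = set()                                # certificate IDs seen before
ALLOWED_BENEFICIARIES = {"airline", "corporate_buyer"} # hypothetical rule

def verify(r):
    """Return a list of failed checks (empty list = valid)."""
    failures = []
    # 1. Recompute the emissions-reduction calculation.
    expected = r.fuel_volume_l * r.reduction_factor
    if abs(expected - r.claimed_reduction_t) > 1e-6:
        failures.append("reduction mismatch")
    # 2. Check the certificate was not previously claimed.
    if r.cert_id in ALREADY_CLAIMED:
        failures.append("double claim")
    # 3. Apply classification rules (beneficiary type, date ordering).
    if r.beneficiary_type not in ALLOWED_BENEFICIARIES:
        failures.append("beneficiary not allowed")
    if r.production_year > r.claim_year:
        failures.append("claimed before production")
    return failures
```

In a distributed setting, each operator would run checks like these independently and submit its result to the on-chain voting round.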

How EWT Holders Can Prepare and Participate

Through the ecosystem of applications on EWX, users will be able to participate in the SAFc Verified Compute Cloud solution by acquiring EWT (or moving their tokens to EWX), liquid staking, and then committing their stEWT to the relevant Verified Compute Cloud Solution Group (VCG). The first step is to ensure a sufficient EWT token holding on EWX; based on current mechanics, a minimum of 3,500 stEWT is required to subscribe to the Alpha Launch VCC SAFc solution group. If EWT holders store tokens on the legacy Energy Web Chain, or have already bridged tokens to Ethereum, they can use the Energy Web Bridge interface and this guide to move them to EWX. Next, participants should liquid stake their EWT tokens on EWX through the EWX Staking interface using this guide, receiving stEWT in return. Finally, the EWX Marketplace interface can be used to select the SAFc solution group, complete the KYC process and contribute stEWT to complete the subscription. Guides for completing the KYC process and subscribing to solution groups can be found here.
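
The final eligibility gate above can be condensed into a small sketch. The 3,500 stEWT minimum comes from the article; the function and parameter names are hypothetical, and the real subscription flow runs through the EWX Marketplace, not this code.

```python
# Illustrative eligibility check for the alpha-launch solution group.
# The 3,500 stEWT minimum is from the article; names are hypothetical.
MIN_STAKE_STEWT = 3_500

def can_subscribe(stewt_balance: float, kyc_passed: bool) -> bool:
    """Both conditions from the article: enough stEWT and completed KYC."""
    return kyc_passed and stewt_balance >= MIN_STAKE_STEWT

assert can_subscribe(4_000, True)        # enough stake, KYC done
assert not can_subscribe(4_000, False)   # KYC missing
assert not can_subscribe(3_000, True)    # below the minimum
```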

During the initial phase of the SAFc VCC pilot, staking contributions will be open to any KYC-ed user, while the VCC computation service operation shall be limited to a vetted set of operators. At the discretion of the SAFc registry governance as a VCC client, VCC operation may be expanded to include a broader segment of the community as the pilot progresses. This phased approach allows the community to begin participating immediately through staking, while ensuring that node operations scale efficiently as Verified Compute expands.

VCC Service Slashing Mechanics

The VCC solution technical alpha launch applies an initial, conservative slashing configuration for Verified Compute Cloud operators, designed to enforce baseline performance and protocol compliance while allowing greater operational tolerance during this first production validation phase. The operational objective of this phase is to collect real-world performance data and validate end-to-end workflows under live conditions. Accordingly, slashing thresholds are intentionally set at higher tolerance levels than those expected under standard operating conditions. As an additional protection measure, any slashed funds are temporarily routed to a multisignature-controlled holding address for review and may be returned where slashing is determined to be unwarranted under the applicable VCC protocol.

The causes for slashing fall into two categories: operational penalties and performance penalties. Operational penalties are triggered by the outcome of a disputed or failed voting round (the cycle in which operators submit their verification results on-chain). They render orchestrated attacks and malicious behaviour economically unviable, while protecting the reliability of outcomes of applications leveraging Verified Compute Cloud. Operational penalties should occur only under extreme and rare circumstances, but must remain sufficiently strong to achieve those objectives. Performance penalties monitor each VCC operator’s individual performance in a voting round, penalising those that fall far below the agreed performance thresholds. These penalties ensure a minimum quality of service from each operator.
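
The two penalty categories can be sketched as follows. The penalty rates and the uptime threshold are invented for illustration; the article does not publish the actual slashing parameters, only the two-category structure.

```python
# Hypothetical sketch of the two slashing categories described above.
# Rates and thresholds are invented; only the structure follows the text.

def slash(stake: float, round_failed: bool, uptime: float,
          uptime_threshold: float = 0.8) -> float:
    """Return the amount slashed from an operator's stake."""
    penalty = 0.0
    if round_failed:
        # Operational penalty: rare, but strong enough to make
        # orchestrated attacks economically unviable.
        penalty += stake * 0.10
    if uptime < uptime_threshold:
        # Performance penalty: enforces a minimum quality of service.
        penalty += stake * 0.02
    return penalty
```

During the alpha launch, amounts computed this way would be routed to the multisignature-controlled holding address described above, pending review.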

Energy Web X On-Chain Service Scaling

The SAFc VCC Solution Alpha Launch brings together the core components of the 2025 Energy Web platform upgrade into a single operational workflow for the first time.

Verified Compute enables verification, automation and auditability for sustainable, mission-critical enterprise solutions.
Liquid Staking (stEWT) expands participation and unlocks new on-chain utility.
Energy Web Bridge provides multichain mobility and broader ecosystem reach.
The SAFc Pilot demonstrates these capabilities in a real, commercial application context and delivers immediate value to management and users of the SAFc Registry.

Together, these components form the foundation of a decentralised digital infrastructure designed for high-integrity climate and energy applications. They enable continuous validation, transparent audit trails and automated rule compliance, all of which are essential for markets such as renewable energy tracking, sustainable fuel certification, supply chain emissions accounting and grid operations. This launch marks the beginning of a new phase for Energy Web X, where staking, compute and real-world decarbonisation workflows operate together to deliver trust, automation and transparency at scale.

Alpha Launch: Liquid Staking and Verified Compute Cloud on Energy Web X was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.


FIDO Alliance

ID TECH: FIDO Alliance Tightens Authenticator Verification with Metadata Service Update

The FIDO Alliance has released a major update to its Metadata Service (MDS) that is intended to improve how relying parties vet passkey and FIDO authenticator devices, with a particular focus on compliance, security assurance, and user experience. In a news post announcing the changes, FIDO describes the new MDS v3.1 and v3.1.1 releases, along with a new Convenience Metadata Service, as a critical step in supporting the continued evolution of the FIDO ecosystem.


Hyperledger Foundation

Climate Action: Official Release of the Anthropogenic Impact Accounting Ontology Suite

Understanding the challenge

Across climate and sustainability programs, data about human impacts on the environment is fragmented and inconsistently named. Projects use different schemas and vocabularies; even when the underlying methods align, the data often does not. The result is friction for integrators and auditors and limited verification of evidence across platforms.


Oasis Open

OASIS Approves Two NIEMOpen Standards to Advance AI-Ready Data Interoperability

New Standards Strengthen Trusted Data Exchange Across Government and Enterprise Applications

Boston, MA – 9 December 2025 – OASIS Open, the international open source and standards consortium, announced the approval of two new OASIS Standards: NIEM Model Version 6.0 and NIEM Naming and Design Rules (NDR) Version 6.0. These standards represent a transformative evolution for NIEMOpen that, for over two decades, has enabled the effective and efficient sharing of critical data in the justice, public safety, emergency and disaster management, intelligence, and homeland security sectors. 

NIEM Model v6.0 introduces a modern, flexible architecture that positions the framework to serve the evolving needs of public and private organizations, featuring a format-agnostic framework supporting XML, JSON, and RDF. NIEM Naming and Design Rules (NDR) v6.0 provides the normative specifications for creating data models, namespaces, schemas, and messages that conform to the NIEM framework, defining enforceable rules for naming conventions, documentation, structural integrity, and conformance targets that enable seamless integration with diverse enterprise architectures and applications.

“These standards mark a pivotal moment in NIEMOpen’s evolution,” said Paul Wormeli, Chair of the NIEMOpen Project Governing Board (PGB). “While we celebrate the hundreds of conformant information exchanges already built on NIEMOpen, NIEM Model v6.0 and NIEM NDR v6.0 represent our commitment to expanding the framework’s reach across new domains and supporting trustworthy AI through standardized, interoperable data.”

NIEMOpen 6.0: Next-Generation Interoperability 

Through its revolutionary Common Model Format (CMF) approach, the releases debut powerful new tools including an enhanced API 2.0 and CMF Tool command-line utility, enabling developers to seamlessly translate between data formats. By integrating knowledge graph support, data is AI-ready while maintaining the semantic consistency and interoperability that have defined NIEM for more than 20 years.

A Legacy of Industry Impact and Collaboration

NIEMOpen, which became an OASIS Open Project in October 2022, originated as the National Information Exchange Model (NIEM) in the wake of the September 11, 2001 attacks to address the urgent need for improved information sharing between agencies. Formally launched in April 2005 by the CIOs of the U.S. Department of Homeland Security and Department of Justice, NIEM has evolved into a globally adopted standard used across all 50 U.S. states, numerous federal agencies, and organizations worldwide.

Premier Sponsors supporting NIEMOpen include the Joint Staff JS-J6 Command, Control, Communication, & Computers/Cyber; the US Department of Homeland Security Science and Technology; and the US FBI Criminal Justice Information Systems (CJIS) Division. Additional sponsors include All Purpose Networks, Georgia Tech Research Institute, IJIS Institute, Office of Data Governance and Analytics (ODGA) – Commonwealth of Virginia, Senzing, and the US National Association for Justice Information Systems (NAJIS).

Technical contributors, researchers, and organizations are welcome to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

Support for NIEMOpen 

FBI Criminal Justice Information Systems (CJIS) Division
“The FBI is dedicated to streamlining data exchange within the law enforcement and criminal justice systems. By utilizing the National Information Exchange Model (NIEM) and Information Exchange Package Documents (IEPDs), we adopt common vocabulary and standardized processes—reducing costs and development time. This approach promotes consistency across essential data exchanges, enabling us to share pertinent information swiftly and securely, which supports our efforts to crush violent crime and protect the homeland.”
– Timothy A. Ferguson, assistant director of the FBI’s Criminal Justice Information Services Division

Georgia Tech Research Institute
“Georgia Tech Research Institute has played a key role in NIEM since its inception over 20 years ago, providing support to its technical architecture and tooling and assisting data modelers and implementers from across all levels of government. We are developing new open source tools that work with NIEM v6.0 (and all previous versions) as well as online training. GTRI stands ready to help the community leverage v6.0 and advance NIEM’s evolution to meet the growing demands of information sharing.”
– John Wandelt, Georgia Tech Research Institute

Senzing
“Senzing is proud to be a sponsor of NIEMOpen and has worked with the ecosystem and its expanded support for JSON to provide world-class entity resolution for public sector projects.”
– Jeff Butcher, Chief Architect, Senzing

Additional Information
NIEMOpen GitHub Repository

About NIEMOpen

NIEMOpen is a community collaborative between federal, state, local, tribal, and territorial government agencies and the private sector. NIEMOpen includes the data model with over 20,000 harmonized data elements, the naming and design rules for extending the model to include new data elements, the methodology for creating information exchange specifications, the tools created to automate the process of specifying information exchanges, and the online training to use the tools in the data model for information sharing. All of these elements of the framework are available at no cost on the NIEMOpen website. NIEMOpen operates under OASIS Open, an international standards and open-source consortium. https://niemopen.org/

About OASIS Open

One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org

Media Inquiries
communications@oasis-open.org

The post OASIS Approves Two NIEMOpen Standards to Advance AI-Ready Data Interoperability appeared first on OASIS Open.

Monday, 08. December 2025

Project VRM

A MyTerms Summary

MyTerms will give strength to the Internet’s fabric of human connections, through agentic agreements between people and the organizations that serve them.

The Internet is peer-to-peer, by design. It supports agreements between equals, for the good of both. On that equality a massive amount of new and better dealings can be built, on stronger foundations of mutual agency and respect.

MyTerms are contracts, which are binding mutual agreements between parties. They replace consents, which are corporate protections to which individuals can only acquiesce. Consents give individuals no record of having agreed to anything and cannot be audited or enforced. They are also annoying for both individuals and companies, with massive amounts of operational and cognitive overhead. In most cases they also don’t obey the settings people make.

With MyTerms, individuals, operating as first parties, proffer a contract they choose from a limited list posted on a public website by a neutral nonprofit organization. The company, as the second party, can choose to agree to that contract or an alternate specified by the individual from the same list. Both sign the agreement electronically and keep matching records that can be audited later if need be. If the company declines to agree, the individual can keep a record of that choice, which they are free to share.
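
The exchange just described can be sketched as a tiny negotiation routine. This is a non-normative illustration: the roster names are invented, and the IEEE P7012 standard, not this code, defines the actual mechanics.

```python
# Non-normative sketch of the MyTerms exchange described above: the
# individual proffers a contract from a public roster; the company
# accepts, counters from the same roster, or declines; both sides keep
# matching records. Roster entries are hypothetical names.

ROSTER = {"service-only", "service-plus-disclosure"}  # hypothetical contracts

def negotiate(proffered, company_response=None):
    """Return the matching record both parties keep for later audit."""
    assert proffered in ROSTER, "individual must choose from the roster"
    if company_response is None:
        # Company declines; the individual keeps a record of that choice.
        outcome = ("declined", proffered)
    else:
        assert company_response in ROSTER, "counter must come from the roster"
        outcome = ("agreed", company_response)
    # Both parties hold identical, auditable records.
    return {"individual": outcome, "company": outcome}
```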

This process is described in a new standard from the IEEE called P7012, which is due for publication in January 2026. Its nickname is MyTerms, much as the nickname of IEEE 802.11 is Wi-Fi.

The most basic MyTerms agreement is for services only. This resets the marketplace to what we have in the natural world, where one can visit an establishment for the services it provides, in faith that one will not be tracked out of it for any reason, and data about oneself will not be sold or given to others. It also commits the individual to respect for the establishment and the services it provides.

With MyTerms, voluntary and genuine relationships can be built on a foundation of mutual respect and willingness to engage. Following a MyTerms agreement, individuals can selectively disclose information about themselves and their intentions, and additional services might be provided, in mutually agreeable and fruitful ways.

In this manner, companies can come to know individuals far better than has ever been possible through unwelcome surveillance and algorithmic guesswork and manipulation. Genuine relationships can also replace the coercive kind typified by “loyalty” programs meant constantly to manipulate customers. (Consider how marketers, without irony, speak of customers as “targets” to be “acquired,” shoved through a “funnel,” “controlled,” “managed,” and “locked in” as if they were slaves or cattle.)

The MyTerms standard also says that both sides will use machine agents to make agreements. These can be as simple as browser plug-ins on the individual side and server plug-ins on the corporate side. They can also be AI agents, which is why it is opportune for the standard to be published at a time when AI is still new and rapidly evolving for both companies and individuals.

For maximized agency on both sides, AI agents must be private instruments of full sovereignty, meaning they work privately and exclusively for each party. They cannot be instruments of surveillance or control by outside actors of any kind. Working exclusively will also maximize agency for both sides.

Civilization requires privacy. Simple as that. We worked out privacy in the natural world with technologies such as clothing and shelter, and well-understood ways to signal our intentions. The digital world, however, is still new, and not civilized. We lack the equivalents of clothing and shelter, and in their absence, surveillance has become the norm. So has the theater of consent, with its insincere and ineffective cookie notices.

The only way to obtain personal privacy and make good on the Internet’s original promises is with mutually beneficial agreements that begin with the simple privacy requirements we as individuals present to the corporations of the world. With MyTerms, we can start civilizing the worldwide public marketplace, making it a safe and productive environment for business, and everything else that depends on it.


Human Colossus Foundation

ArgonAuths x Human Colossus: Finalists at HackNation 2025 — Redefining Trust in the Digital Public Sphere

“Truth-on-the-Web”. Congratulations to the ArgonAuths-Human Colossus Foundation team, which finished in second place out of 300 projects and 1,500 participants!

We are thrilled to share that Team “Sigmion”, the joint forces of ArgonAuths and Human Colossus, has been selected as one of the three finalists in the Prawda w Sieci (“Truth on the Web”) challenge, held under the patronage of Centralny Ośrodek Informatyki (COI), the institution behind Poland’s digital-services and identity initiative mObywatel.

Source: https://www.bydgoszcz.pl/aktualnosci/tresc/maraton-programowania-za-nami/

What is HackNation

Prawda w Sieci (Truth on the Web) was one of the themes of HackNation — a new, nationwide hackathon ecosystem organized by the Ministry of Digital Affairs (Ministerstwo Cyfryzacji). This first edition gathered an extraordinary 1,500+ participants, who delivered over 430 projects across 16 thematic categories addressing real problems faced by public institutions.

HackNation is not a typical hackathon. It is the first implementation-driven innovation program for the Polish public sector, created to solve real administrative challenges with real code — and with a clear path to production. Finalist projects, such as ours, are not just prototypes: they are evaluated for practical deployment potential inside the institutions that sponsored specific challenge tracks.

Due to the overwhelming interest and the quality of outcomes, HackNation is already planned to return next year — a testament to how much untapped civic tech energy exists in Poland.

Our Unique Angle: Trust That Follows the Information

Most cybersecurity solutions verify locations — checking whether a website is legitimate, the URL is correct, and the certificate is valid. While necessary, this model collapses the moment information travels outside that domain.

We approached the problem differently:

We don’t bind trust to a website. We bind trust to the information itself.

Using cryptographically bound identifiers and verifiable payloads, our approach allows:

✔ Authenticating a government website
✔ Verifying any content that originates from it — even after it leaves the website
✔ Detecting manipulation in screenshots, copied texts, PDFs, or printed documents
✔ Extending trust to both digital and offline channels
✔ Integrating seamlessly with identity frameworks such as mObywatel and future eIDAS 2.0 wallets

This allows truth to become portable, not trapped inside a URL.
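
The idea of trust that travels with the content can be illustrated with a minimal sketch. A real deployment would use asymmetric digital signatures (e.g. Ed25519) with cryptographically bound identifiers, as the post describes; HMAC is used here only because it is in the Python standard library, so treat this as a stand-in for the concept, not the team's design.

```python
# Illustrative stand-in for content-bound trust: a verification tag is
# bound to the content itself, so an authentic copy can be checked even
# after it leaves the website. Real systems would use asymmetric
# signatures; HMAC is a stdlib-only substitute for this sketch.
import hashlib
import hmac

ISSUER_KEY = b"demo-key"  # hypothetical; a real issuer publishes a public key

def seal(content: bytes) -> str:
    """Issuer binds a tag to the content."""
    return hmac.new(ISSUER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Anyone holding the content and tag can check authenticity."""
    return hmac.compare_digest(seal(content), tag)

doc = b"Official announcement text"
tag = seal(doc)
assert verify(doc, tag)                    # authentic copy verifies anywhere
assert not verify(b"Tampered text", tag)   # manipulation is detected
```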

Why This Matters

We are entering an era where:

cloned sites are trivial to generate,

phishing outpaces awareness campaigns,

AI can fabricate screenshots indistinguishable from the original,

and public trust is routinely weaponized.

If trust depends only on where content is hosted, it becomes fragile.

Our vision introduces a new paradigm:

Trust becomes cryptographic, portable, and citizen-verifiable — anywhere, anytime.

This empowers Poland to set a global standard in digital public trust, extending the achievements of mObywatel beyond identity into the realm of verifiable truth.

What’s Next

Being among the top three of 300 projects in such a competitive field is an honor, but for us, it feels like the start of something bigger.

We cannot wait to continue our collaboration and bring this prototype toward production. With institutional backing, what began during HackNation can evolve into a national — and possibly European — trust layer for public information.

If successful, citizens will no longer need to ask:

"Can I trust this website?"

Instead, they will be able to ask:

"Can I verify that it comes from an authorized source?"

Stay tuned — the next chapter begins now.
Team Argonauths / Human Colossus

Friday, 05. December 2025

FIDO Alliance

CNET Japan: FIDO Alliance Launches New Initiative to Accelerate Passkey Adoption, Next Up: Digital Credentials

At a meeting held in Tokyo on December 5th, the FIDO Alliance explained the current status of the adoption of “Passkey Authentication (FIDO2)” and announced that as a new initiative, it aims to realize a secure and convenient digital wallet that stores digital credentials.

Passkey authentication is a system for accessing an account by verifying identity using a device, biometric information, PIN, etc. Since its introduction by NTT Docomo and PayPal in 2022, it has rapidly gained popularity as a secure method highly resistant to phishing attacks that target traditional authentication information such as IDs and passwords.

According to Executive Director and CEO Andrew Shikiar, the number of accounts using passkeys will reach over 3 billion by 2025, with approximately 15 billion potentially available accounts by 2024. Organizations such as the US National Institute of Standards and Technology (NIST) and the European Union Agency for Cybersecurity (ENISA) have included passkeys in their security policies, and the system is also being increasingly adopted by government agencies, online services, and the private sector, particularly in the financial sector.


FIDO Alliance Launches New Digital Credentials Initiative to Accelerate and Secure an Interoperable Digital Identity Ecosystem

New Digital Credentials Working Group to work with global FIDO Alliance members and industry partners to align digital identity ecosystem 

December 4, 2025 – The FIDO Alliance announced today the launch of a new digital credentials initiative, marking an expansion of its mission to accelerate the adoption of verifiable digital credentials and identity wallets. This initiative is poised to help the world simplify and secure online and in-person interactions by establishing a trusted, and interoperable identity wallet ecosystem.

Work on this new initiative will be carried out by the FIDO Alliance’s new Digital Credentials Working Group (DCWG). 

“FIDO Alliance united the industry to solve the password problem, and the world is now embracing the simplicity and security of passkeys – with billions of accounts now leveraging this sea change in user authentication. We’re now aiming to bring that same proven, collaborative model to the adjacent digital credentials landscape — working closely with partners including EMVCo, ISO, OpenID Foundation, and W3C to align a fragmented ecosystem,” said Andrew Shikiar, CEO of FIDO Alliance. “Together, we aim to deliver trusted, interoperable digital wallets that make everyday interactions simpler, more secure, and privacy-preserving for everyone.”

Digital credentials have the potential to offer enhanced ease, security, and privacy to everyday interactions and transactions. Governments around the world are helping lead the way in issuing digital identity credentials — including the European Digital Identity Wallet program that will see all 27 member states offer citizens digital identities by the end of 2026, and with 18 departments of motor vehicles in the United States having deployed standards-based mobile driver’s licenses to over 5 million American citizens.  

Widespread adoption has been hindered by ecosystem fragmentation, however, including a lack of global alignment and end-to-end certification. Building on its success with passkeys, the FIDO Alliance will address these challenges through its proven ability to unite stakeholders, develop specifications and certification programs, collaborate with other standards organizations, and implement global adoption initiatives. By applying these strategies to the digital credentials ecosystem, the FIDO Alliance aims to foster a future where digital credentials are as pervasive, trusted, and user-friendly as passkeys are today – helping secure the entire identity account lifecycle for consumers and businesses around the world. 

FIDO Alliance will focus on three foundational workstreams in partnership with ecosystem partners such as The OpenID Foundation, ISO, W3C, and EMVCo to unblock the digital credentials ecosystem: 

Wallet Certification: This program will establish certification criteria for digital wallets, ensuring they are secure, protect user privacy, and are interoperable with credential issuers and relying parties. This will provide crucial assurance that credentials are handled with proper security, privacy, and functionality.

Specification Development: FIDO will develop specifications to complement existing protocols and frameworks from industry partners such as OpenID Foundation, ISO and other standards organizations. For example, the Alliance will develop specifications for presenting credentials across devices by expanding the existing FIDO cross-device protocol. The Alliance also intends to define credential schemes (for example in payments and/or loyalty) as required to address new use cases as they emerge.

Usability and Relying Party (RP) Enablement: This workstream will accelerate adoption by providing the industry with necessary tools, branding, and best practice guidelines for successful implementation. Drawing from its experience with passkeys, the FIDO Alliance will ensure a seamless user experience, which is critical for new technology adoption.

Through these efforts, the Alliance aims to reduce friction for issuers and relying parties, increase user trust in data security and privacy, and create a vibrant, interoperable market for issuers, wallet providers, and identity services.

Work has already commenced, with initial deliverables planned for 2026.

Industry partner comments:

Loffie Jordaan, Business Solutions Architect at AAMVA and Convenor of ISO/IEC JTC1/SC17/WG10 said, “WG10’s work includes standards for digital credential exchange protocols. Wallets, being one side of a credential exchange, have to support these protocols. In addition to requiring support for these protocols, issuing authorities often have additional requirements on the wallets into which they provision, covering things like device security, holder privacy, and credential life cycle management. The FIDO work will allow issuing authorities to confirm if a wallet being presented for provisioning has been certified against a profile representing the issuing authority’s protocol and other requirements. In doing so, the FIDO work will be of significant value to issuing authorities.”

Gail Hodges, Executive Director of the OpenID Foundation said, “OpenID Foundation welcomes FIDO Alliance’s new initiative on digital credentials as an important step toward advancing a secure and interoperable identity ecosystem. Our organizations have a long history of close collaboration on standards that make authentication simpler and more resilient, and we see the same opportunity to align our efforts as the market rapidly moves toward verifiable credentials and identity wallets. We look forward to working with FIDO and the broader community to help ensure that digital credentials are built on open, privacy-preserving standards that scale globally.”

Seth Dobbs, President & CEO, the World Wide Web Consortium (W3C) said, “It will take the cooperation of many to address the challenges and opportunities of Digital Identities on the Web. The W3C Verifiable Credentials and Digital Credentials API specifications are designed to help ensure the privacy and security of web users. W3C is pleased to work with FIDO Alliance and others on the technical foundation for interoperable, secure, privacy-preserving digital credentials that work across different platforms and systems.”

Daniel Goldscheider, Executive Director of the OpenWallet Foundation said, “FIDO Alliance specifications are already foundational to the wallet landscape. We warmly welcome this expansion into digital credentials and wallet certification.”

Patrik Smets, EMVCo Executive Committee Chair, commented: “Through our Digital Identity and Payment Task Force, EMVCo is engaging with industry partners to advance agentic payments, authentication, verifiable digital credentials, passkeys for payment, and digital wallets. Earlier this year, we shared our existing digital payment credential schema activity with FIDO to align and gather feedback from its members. This level of ongoing collaboration is crucial to promoting global interoperability across the ecosystem in how we use identity in payments, and we are committed to working on payments use cases with all stakeholders as this progresses at pace.”

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.

Thursday, 04. December 2025

Digital ID for Canadians

From Trust to Growth: The Business Case for Digital Client Verification in Open Banking and Lending

Download the report here.

The Digital ID and Authentication Council of Canada (DIACC) convened an industry workshop in Montreal focused on exploring the business case for digital trust in open banking and lending. The session brought together stakeholders from government, financial services, technology providers, and legal sectors to examine how digital client identity verification (IDV) can drive measurable value while mitigating fraud and enabling growth.

Three Core Themes Explored

Participants explored three core themes:

quantifying fraud prevention and risk mitigation

converting trust into business growth

leveraging digital trust verification as a strategic competitive advantage

The discussions revealed strong consensus around treating digital trust and verification as critical infrastructure rather than compliance overhead, while highlighting the need for business-problem-solving standards and clearer metrics to demonstrate return on investment.

Key Outcomes

Key outcomes included recommendations to develop shared metrics for fraud prevention, prioritize frictionless user experiences, and position Canada’s regulatory framework as a global differentiator in digital trust ecosystems.

Recommendations on quantifying fraud prevention and risk mitigation, and converting trust into business growth

Develop Shared Industry Metrics: Create standardized measurements for fraud avoided and efficiency gained that can be adopted across sectors to enable meaningful benchmarking.

Lifecycle Cost Analysis: Conduct thorough build-versus-buy assessments that capture total lifecycle cost benefits, including both direct and indirect savings from fraud prevention.

Recommendations on leveraging digital trust

Prioritize Frictionless Experience: Treat user experience as a measurable growth driver with dedicated metrics and executive accountability.

Capture Drop-off Metrics: Implement comprehensive tracking of conversion drop-off points and onboarding speed to identify improvement opportunities.

Enable Cross-Sector Data Sharing: Encourage secure data sharing frameworks that expand market access to underserved populations, including underbanked individuals and newcomers to Canada.

Recommendations on verification as a strategic competitive advantage

Leverage Regulatory Credibility: Position Canada’s strong regulatory environment and institutional trust as a global differentiator in digital trust and verification markets.

Build Cross-Sector Alignment: Develop consensus around reusable, standards-based identity systems that work across industries and use cases.

Frame IDV as Revenue Enabler: Communicate digital trust and verification as a driver of new revenue streams, product innovation opportunities, and enhanced international competitiveness rather than simply a cost center.

Download the report here.


Hyperledger Foundation

Open source flywheel: Indicio turns leadership in the LF Decentralized Trust community into credibility, growth, and global reach

Read the full case study here.


Wednesday, 03. December 2025

FIDO Alliance

ID TECH: FIDO Alliance Brings Authenticate Conference to Asia-Pacific With Singapore Event Focused on Passkeys and Digital Credentials

The FIDO Alliance is expanding its flagship Authenticate conference series to the Asia-Pacific region with Authenticate APAC 2026, a two-day event in Singapore dedicated to phishing-resistant authentication and digital identity. […]

The FIDO Alliance is expanding its flagship Authenticate conference series to the Asia-Pacific region with Authenticate APAC 2026, a two-day event in Singapore dedicated to phishing-resistant authentication and digital identity. The conference will be held June 2 to 3, 2026 at the Grand Hyatt Singapore, followed by a FIDO Member Plenary from June 4 to 5 at the same venue.


Enhancing Compliance and User Experience with Major Updates to the FIDO Metadata Service

We’re excited to announce updates to the FIDO Metadata Service (MDS), which helps ensure organizations have the information necessary to successfully validate authenticators. As organizations deploy passkeys and FIDO authentication, […]

We’re excited to announce updates to the FIDO Metadata Service (MDS), which helps ensure organizations have the information necessary to successfully validate authenticators. As organizations deploy passkeys and FIDO authentication, it is critical to validate trusted, certified authenticators.

This is especially useful for deploying organizations in regulated industries and those handling sensitive data. These organizations can use MDS to verify that accepted authenticators meet certain criteria, such as FIDO L1, L2 and L3 certifications for compliance, as well as leverage security issue notifications to determine suitable responses.

To support the continued evolution of the FIDO ecosystem, we have released an update to the MDS that provides new tools for relying parties (RPs) to verify authenticator compliance, improve interoperability and life cycle management, while enhancing the user experience. This includes several substantial enhancements to the existing service:

Standardized Security Policy Enforcement: RPs can now ensure the correct level of FIPS compliance by verifying that authenticators meet their exact security criteria before granting access.

Streamlined Cross-Provider Integration: RPs can dynamically discover and retrieve detailed information about the passkey provider’s Credential Exchange (CX) definitions, streamlining the process of cross-provider communication and setup.

Authenticator Lifecycle Management: The addition of a new “retired” authenticator status value to accurately reflect MDS entries that are no longer actively supported or recommended for use. This status will help RPs maintain secure and up-to-date deployment strategies by clearly flagging deprecated metadata.

MDS Version Check: Cuts processing times by introducing localCopySerial, a new parameter that can be specified to only return metadata if a new version of the MDS BLOB is available.
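To illustrate how a relying party might apply these checks in practice, here is a minimal Python sketch. The field names loosely follow the FIDO metadata BLOB structure, but the helper functions, the status-report ordering, and the sample entries are illustrative assumptions, not part of any official FIDO SDK.

```python
# Hypothetical sketch of RP-side policy checks against MDS entries.
# Field names loosely mirror the FIDO metadata BLOB; the helpers and
# sample data are illustrative, not from any official FIDO library.

ACCEPTED_CERT_LEVELS = {"FIDO_CERTIFIED_L2", "FIDO_CERTIFIED_L3"}

REJECTED_STATUSES = {"RETIRED", "REVOKED", "USER_VERIFICATION_BYPASS",
                     "ATTESTATION_KEY_COMPROMISE"}

def latest_status(entry: dict) -> str:
    """Return the most recent status for an MDS entry.
    Assumes status reports are ordered oldest-first in this sketch."""
    reports = entry.get("statusReports", [])
    return reports[-1]["status"] if reports else "NOT_FIDO_CERTIFIED"

def is_acceptable(entry: dict) -> bool:
    """Accept only entries whose latest status passes our security policy:
    reject the new 'retired' status (and compromised/revoked states), and
    require an L2+ certification somewhere in the entry's history."""
    if latest_status(entry) in REJECTED_STATUSES:
        return False
    return any(r["status"] in ACCEPTED_CERT_LEVELS
               for r in entry.get("statusReports", []))

def needs_update(local_serial: int, server_serial: int) -> bool:
    """localCopySerial-style version check: only fetch the BLOB again
    when the server reports a newer serial than our cached copy."""
    return server_serial > local_serial

# Fabricated example entries:
ok_entry = {"statusReports": [{"status": "FIDO_CERTIFIED_L2"}]}
retired_entry = {"statusReports": [{"status": "FIDO_CERTIFIED_L1"},
                                   {"status": "RETIRED"}]}
```

In this sketch, an authenticator that was once certified but later marked “retired” is rejected, while the version-check helper avoids re-downloading an unchanged BLOB.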

In addition to these MDS updates, the FIDO Alliance also launched a new Convenience Metadata Service. This enables RPs to offer a consistent user experience so that end-users see the same presentation of their passkeys, no matter which service or platform they’re using, to simplify the process of selecting and managing their credentials. This includes standardized, user-friendly names for passkey providers, and high-quality logos for RPs to use in user interfaces and presentation layers.
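A deploying RP might consume such convenience metadata as a simple lookup table mapping provider identifiers to display names and logos. The sketch below is purely illustrative: the schema, field names, and provider identifier are assumptions, not the actual service format.

```python
# Hypothetical lookup against a Convenience Metadata-style dataset.
# The structure and values here are fabricated for illustration;
# consult the actual FIDO service documentation for the real schema.

CONVENIENCE_METADATA = {
    # keyed by a provider identifier (e.g., an AAGUID); values fabricated
    "example-provider-id": {
        "friendlyName": "Example Passkey Provider",
        "logoUrl": "https://example.com/logo.svg",
    },
}

def display_name(provider_id: str, fallback: str = "Passkey provider") -> str:
    """Return the standardized, user-friendly name for a passkey provider,
    falling back to a generic label when the provider is unknown."""
    entry = CONVENIENCE_METADATA.get(provider_id)
    return entry["friendlyName"] if entry else fallback
```

The point of the standardized names and logos is that every RP renders the same provider identically, so users recognize their passkey provider regardless of which site they are signing in to.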

The updated FIDO MDS and the new Convenience Metadata Service are now live. For more information, visit https://fidoalliance.org/metadata/. For technical questions, implementation guidance, or inquiries regarding the new MDS versions or the Convenience Metadata Service, please reach out to support@mymds.fidoalliance.org.


Oasis Open

OASIS Approves Open Document Format (ODF) V1.4 Standard, Marking 20 Years of Interoperable Document Innovation


IBM, Microsoft, and Industry Partners Launch ODF 1.4 with Enhanced Accessibility, Compatibility, and Security Across Platforms

BOSTON, MA, 3 December 2025 — Members of OASIS Open, the global open source and standards organization, have approved the Open Document Format (ODF) for Office Applications V1.4 as an OASIS Standard, the organization’s highest level of ratification. ODF V1.4 improves developer documentation, adds new features, and maintains full backward compatibility.

The release of ODF V1.4 coincides with the 20th anniversary of ODF as an OASIS Standard. Over two decades, ODF has served as a vendor-neutral, royalty-free format for office documents, ensuring that files remain readable, editable, and interoperable across platforms. Governments and international organizations, including NATO, the European Commission, and countries across multiple continents, have adopted ODF for document exchange.

“ODF V1.4 is the effort to evolve the ODF format to its newer challenges, adding relevant clarification and additions to the existing ODF V1.3,” said Patrick Durusau, OpenDocument TC co-chair. “We are pushing hard to meet expectations of the Office software industry.”

OpenDocument V1.4 contains enhancements in accessibility, professional document formatting, and advanced functionality across text documents, spreadsheets, and presentations. Improvements include better support for assistive technologies, enhanced visual design capabilities, and expanded features for data analysis and technical documentation. These updates strengthen OpenDocument’s role as a comprehensive solution for modern workplace productivity and inclusive document creation.

“ODF provides a vendor-neutral foundation for office productivity and collaboration worldwide. With V1.4, the standard continues to evolve, supporting cloud collaboration, richer multimedia, and standardized security,” said Svante Schubert, OpenDocument TC co-chair. “The format will remain reliable across platforms for years to come. Looking ahead, ODF is moving beyond document exchange toward standardized, semantic change-based collaboration — enabling precise, meaningful sharing of interoperable changes across platforms.”

The OpenDocument TC actively encourages global collaboration and input from stakeholders to support the standard’s ongoing evolution and adoption. To learn more about how to get involved, contact join@oasis-open.org.

Additional Information
OpenDocument GitHub
OpenDocument TC Homepage

About OASIS Open
One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement.
www.oasis-open.org

Media Inquiries:
communications@oasis-open.org

The post OASIS Approves Open Document Format (ODF) V1.4 Standard, Marking 20 Years of Interoperable Document Innovation appeared first on OASIS Open.


The Engine Room

Community call recap: psychosocial support & digital safety

Are you a digital security trainer, a helpline responder supporting spyware cases, or someone who accompanies communities through security challenges? If you missed our recent Community Call on Psychosocial Support & Digital Safety, we’ve got you covered. The post Community call recap: psychosocial support & digital safety appeared first on The Engine Room.



MyData

EU’s data policy overhaul: what’s missing, what’s next, and four big ideas that matter 

November 19 was a big day for EU data policy. It saw the release of the European Data Union Strategy (now reframed to pursue data for AI), the Digital Simplification […]

Next Level Supply Chain Podcast with GS1

Omnichannel or Omni-Chaos? How Bad Data Erodes Customer Trust


Shoppers lose trust the moment they see that your data is inconsistent. In this episode, Jon Gatrell of Loren Data Corp. joins Reid Jackson and Liz Sertl to discuss why product information must stay consistent across every channel. Jon explains how aligned data reduces friction between partners, supports accurate inventory management, and strengthens the experience for buyers and internal teams.

This conversation shows how accurate product data becomes the thread that keeps every part of the supply chain working in sync.

In this episode, you'll learn:

The operational value of keeping product data consistent across channels

The challenges that emerge when item information falls out of sync

How accurate data protects customer trust

Things to listen for:

(00:00) Introducing Next Level Supply Chain
(02:54) What customers expect from product data
(05:29) How inconsistent product information affects buying decisions
(07:37) The biggest data challenges supply chain teams face today
(12:55) What the industry is doing to improve data synchronization
(19:32) Data strategy shifts companies should prepare for next
(25:19) Jon's favorite tech

Connect with GS1 US:
Our website - www.gs1us.org
GS1 US on LinkedIn

This episode is brought to you by:

AccuGraphiX and LSPedia

If you're interested in becoming or working with a GS1 US solution partner, please connect with us on LinkedIn or on our website.

Connect with the guest:
Jon Gatrell on LinkedIn
Check out Loren Data

Tuesday, 02. December 2025

FIDO Alliance

Passkeys Week Webinar: Ask Us Anything!

As part of Passkeys Week, which took place on November 17 – 21, 2025, FIDO Alliance hosted a live, interactive Ask Us Anything (AMA) session designed for developers, product managers, […]

As part of Passkeys Week, which took place on November 17 – 21, 2025, FIDO Alliance hosted a live, interactive Ask Us Anything (AMA) session designed for developers, product managers, and anyone building—or buying—authentication products and services. Attendees were able to bring their questions about passkey implementation, UX, security, standards, and ecosystem adoption directly to the experts shaping the industry.


Biometric Update: Regulatory clarification sets stage for major FIDO biometrics uptake in South Korea

South Korea has eliminated a significant barrier to the usage of the FIDO protocol for passwordless authentication by confirming that it falls outside the scope of a requirement for user […]

South Korea has eliminated a significant barrier to the usage of the FIDO protocol for passwordless authentication by confirming that it falls outside the scope of a requirement for user consent to process biometrics.

Members of the FIDO Alliance Korea Working Group (FKWG) submitted an official inquiry to the Korea Personal Information Protection Commission (KPIPC), which has responded by stating that the consent rules do not apply to biometric processes performed entirely on user-controlled devices. Since biometric data is not collected, stored or processed by the organization requesting FIDO authentication, the process does not qualify as processing personal information under the Personal Information Protection Act.


Recap of the FIDO Alliance Korea Working Group Workshop

Strengthening Korea’s Passkey Ecosystem Through Technical Collaboration and Regulatory Clarity The FIDO Alliance Korea Working Group (FKWG) held its year-end workshop on November 14, 2025, at the Telecommunications Technology Association […]

Strengthening Korea’s Passkey Ecosystem Through Technical Collaboration and Regulatory Clarity

The FIDO Alliance Korea Working Group (FKWG) held its year-end workshop on November 14, 2025, at the Telecommunications Technology Association (TTA) office in Pangyo. Co-hosted by Samsung Electronics and TTA, the workshop brought together local FIDO members and invited guests to discuss the latest developments in passkey deployment, biometric authentication, and the accelerating momentum behind phishing-resistant authentication across the country.

With a half-day agenda featuring the Q4 member plenary, technical deep-dives, ecosystem updates, and a community networking session, the event highlighted the rapid expansion of Korea’s passkey landscape and the central role of the FKWG in driving adoption across industries.

One of the most important topics covered during the workshop was a newly clarified regulatory interpretation confirming that “FIDO authentication using on-device biometrics does not require separate user consent, since no biometric data leaves the device.”

This clarification removes a long-standing compliance concern for organizations and is expected to significantly accelerate enterprise adoption of FIDO-based biometrics across finance, telecom, commerce, and government services. The update has already drawn national and international attention, including coverage by Biometric Update, underscoring its significance to the broader authentication ecosystem.

Read the Coverage from Biometric Update

The technical presentations and updates from FIDO members provided insights into real-world deployments, new research, and ongoing product development:

Samsung SDS shared lessons learned from large enterprise-scale passkey rollouts at Samsung Group Companies and UX refinement.

LINE presented developer-focused guidance and demonstrated how they are using passkeys for end-to-end encryption (E2EE).

TTA shared perspectives on AI privacy challenges and mitigation strategies, along with associated regulatory considerations.

Korea Quantum Computing (KQC) discussed how they developed PQC-based FIDO security keys, offering a forward-looking view on post-quantum security.

These sessions demonstrated the depth of local technical expertise and the collaborative spirit that defines the FIDO Alliance Korea Working Group community.

The workshop concluded with a networking dinner, a quiz session, and a prize giveaway that added a fun and engaging community element to wrap up the day.

With clear regulatory support, growing cross-industry deployments, and an active technical ecosystem, the FIDO Alliance Korea Working Group is well positioned to accelerate the adoption of phishing-resistant authentication throughout 2026 and beyond.

The FIDO Alliance extends its appreciation to Samsung Electronics, TTA, all presenters, and all members and guests who contributed to this successful event.


DIF Blog

Welcoming Grace Rachmany as DIF’s New Executive Director

We’re excited to welcome Grace Rachmany as the new Executive Director of the Decentralized Identity Foundation! Grace joins DIF at a critical time. Digital identity is evolving in multiple directions at once—governments building national systems, enterprises balancing privacy with functionality, Web3 projects rethinking sovereignty. The result?

We’re excited to welcome Grace Rachmany as the new Executive Director of the Decentralized Identity Foundation!

Grace joins DIF at a critical time. Digital identity is evolving in multiple directions at once—governments building national systems, enterprises balancing privacy with functionality, Web3 projects rethinking sovereignty. The result? A fragmented ecosystem where credentials don't cross borders, privacy promises fall short, and the people who need accessible identity solutions most often can't reach them.

DIF has always focused on practical, interoperable building blocks that enable real systems to work in production, while prioritizing privacy and sovereignty. Grace brings both the technical depth and community perspective to help us expand that mission. Under her leadership, we'll continue doing what DIF does best—creating the foundational standards and infrastructure for decentralized identity—while bringing more voices into the conversation and strengthening collaboration across traditionally separate ecosystems.

Welcome from Kim Hamilton Duffy

It has been a profound honor to serve as DIF’s Executive Director. I’ve been part of this community since its earliest days – in working groups, on the Steering Committee, and ultimately in this role – and I will continue to enthusiastically support DIF. What has given me the most joy is how DIF consistently attracts and nurtures new participants who volunteer their time, contributing new energy and perspectives that continually strengthen DIF’s culture and momentum.

I’m beyond delighted to welcome Grace into this role. Grace has been deeply involved in building communities around decentralized technology that prioritize individual rights and human agency. She brings a rare combination of governance expertise, practical execution, and a deep understanding of how people collaborate in decentralized environments. I'm especially excited about her commitment to global outreach.

Recently, DIF has built remarkable momentum across AI agents, IoT, secure communication protocols, and emerging efforts like travel & hospitality and creator assertions. Grace is the right person to accelerate this work while we navigate key decisions about how DIF evolves as an organization and best supports its growing community. I look forward to supporting Grace's transition as she leads DIF into 2026 and beyond.

— Kim Hamilton Duffy

Meet Grace Rachmany

Grace is a leader in:

new economic models and tokenomics

blockchain governance and DAOs

digital democracy

distributed organizational leadership

She co-founded Sideways.Earth, served on the Supervisory Council of SingularityNET, and has worked with hundreds of decentralized organizations on practical governance, incentive design, and real-world coordination.
Her work centers on infrastructure for collaboration—how communities make decisions, how decentralized systems scale, and how identity frameworks respect personal autonomy. These themes closely reflect DIF’s core values of individual agency, interoperability, and innovation without gatekeepers.

A Message from Grace
“Digital Identity is a hot topic these days, and it’s also a hot mess…”

As I write this post, I reflect on the e-mail I received this morning from my national Digital ID provider here in Slovenia. I went to the post office yesterday to get a higher security rating so I can access my medical records. At the post office, I presented my government-issued residency card. The e-mail from this morning informs me that my physical presence accompanied by my national residency card is not adequate to prove I am human enough to make a doctor’s appointment.

What I need is a passport or national identification document. By the way, if I used a different bank, I could use a bank card, but my particular bank does not have the right security rating. Presumably, of course, my online banking is much easier than other people’s banking as a result of whatever way they do or do not apply digital certificates and 2FA in their system. In any case, off I go this morning to have another in-person experience with the main post office in the nearby city (not the local inferior branch, mind you).

“Complicated processes with heavy bureaucracies for essential services leave us feeling powerless.”

Digital Identity is a hot topic these days, and it’s also a hot mess, as this story demonstrates. Complicated processes with heavy bureaucracies for essential services leave us feeling powerless and at the mercy of large entities. On the opposite end, we feel creeped-out by seamless experiences such as departing from a London airport with absolutely no human looking at any document from the moment we enter the airport to the moment we board (when someone might potentially check our seat number).

I’m absolutely thrilled to step into the Executive Director Position at DIF at this critical moment for digital identity. I was first introduced to DIF Foundation as part of my expertise in DAOs in Web3. Startups like UPort and Sovrin were positioned to make SSI a part of the Web3 ecosystem. But here I am, 5 years later, scratching my head and wondering how we ended up with SBTs, POAPs and EAS. As a Web3 person who has done a deep dive into identity, reputation, and governance, I know the solutions exist, and yet, digital identity implementation is still squarely in the purview of governments and corporations.

“As a Web3 person who has done a deep dive into identity, reputation, and governance, I know the solutions exist.”

Even more disappointingly, while there is public discourse about digital identity, the public seems unaware of what their real choices are. The UK argument begins and ends with “just say no”. The Swiss referendum’s passing with such a thin margin puts tremendous stress on the government to get the implementation perfect. Both of these examples point to missed communication between institutions and citizens, and the UK example shows a gap in public education about how citizens can have more proactive influence in implementation and design of identity solutions that serve the public good.

As Executive Director, one of my objectives is to bring in a wider group of participants to the discussion. I hope to have more outreach to our partners in different areas of the globe, and will be spending the first quarter of 2026 located in Southeast Asia to get to know those of you in that area of the world. As a “crypto native” and governance expert, I’ll be inviting in more members from the Web3 and Network State communities, as we deepen our relationship with the Ethereum Foundation and others in the space.

Finally, I’d like to thank a few people who kidnapped me in Vienna on the first day of September in 2019 and coerced me to attend something called, unappealingly enough, RWOT. The kidnapping clan included long-time DIF board member Marcus Sabadello, Kaliya Identity Woman (who made it sound like I was just going to have a nice weekend in Vienna with cool people), Joe Andrieu (who pretended I had submitted a “paper” idea and gave me the RWOT discount rate), and Adrian Gropper (who spent a 4-hour train ride to Prague drawing little diagrams and explaining to me what DIDs and VCs were). Extra special thanks to Kaliya for sending me an email two months ago saying “you might want to apply for this job.”

Thanks to all of you for once more welcoming me into the identity fam. It’s going to be a fun ride!

— Grace Rachmany

Moving Forward Together

Grace’s priorities—expanding participation, inviting overlooked communities into the conversation, and strengthening ties across Web3 and open-source identity ecosystems—align with the direction many in our community have been moving toward. We share the belief that digital identity must be shaped not only by institutions, but by the people who will use it.

In the months ahead, we’ll continue evolving how DIF supports its contributors, including finding better ways to serve our community globally and ensuring our work remains interoperable, usable, and grounded in real-world needs.

Thank you for continuing to advance digital identity ecosystems that empower people and strengthen trust.

Monday, 01. December 2025

FIDO Alliance

FIDO Alliance Announces First Authenticate Conference for the Asia-Pacific Region

The industry’s premier event dedicated to digital identity and authentication expands globally with Authenticate APAC 2026 in Singapore SINGAPORE, 02 December – The FIDO Alliance today announced the expansion of […]

The industry’s premier event dedicated to digital identity and authentication expands globally with Authenticate APAC 2026 in Singapore

SINGAPORE, 02 December – The FIDO Alliance today announced the expansion of its flagship event series with the launch of Authenticate APAC 2026. This marks the first time the industry’s only conference dedicated to digital identity and phishing-resistant authentication will be held in the Asia-Pacific region. The inaugural event will take place on June 2 – 3, 2026, followed by a FIDO Member Plenary from June 4 – 5, 2026, at the Grand Hyatt in Singapore.

As organizations worldwide accelerate the shift from passwords to passkeys and begin to unlock the potential of verifiable digital credentials, Authenticate APAC will serve as a regional hub for education, collaboration, and innovation. The decision to bring Authenticate to the region builds on the success of the FIDO APAC Summit held over the last two years. It also reflects the region’s growing influence in the cybersecurity landscape, where recent momentum in government digital identity initiatives and widespread commercial passkey deployments are helping to drive the global standard for secure, user-friendly authentication.

“The FIDO Authenticate conference has become the defining event for the authentication community, and we are proud to extend this platform to the Asia-Pacific region,” said Andrew Shikiar, CEO of the FIDO Alliance. “There is tremendous innovation happening across APAC, and this event will provide a dedicated space for local and global leaders to collaborate and help build the future of a secure, user-friendly and interoperable internet.”

The Authenticate conference series delivers high-quality content with a highly engaged community of professionals committed to advancing passkeys, digital credentials and related technologies. It is designed to bring together CISOs, business leaders, product managers, security strategists, and identity architects to advance their knowledge of digital identity and shape the future of authentication. 

Call for Sponsors and Registration
The FIDO Alliance will offer a wide range of sponsorship opportunities designed to maximize brand exposure and reach target audiences. The 2026 Prospectus detailing sponsorship packages also launched today and is available here.

Whether you are new to FIDO, in the midst of deployment or somewhere in between, Authenticate conferences have the right content, and community, for you. Registration for attendees will open later this year.

To stay up to date on speakers, sponsorship opportunities, and registration details, please visit the Authenticate APAC 2026 website, follow @FIDOAlliance on X, and sign up for the newsletter.

About Authenticate
Authenticate is the premier conference dedicated to advancing digital identity and authentication, with an emphasis on phishing-resistant sign-ins using passkeys. Hosted by the FIDO Alliance, this event brings together CISOs, security strategists, product managers and identity architects to explore best practices, technical insights and real-world case studies in modern authentication.

Authenticate is hosted by the FIDO Alliance, the cross-industry consortium providing standards, certifications and market adoption programs to accelerate utilization of simpler, stronger authentication with innovations, like passkeys.

About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.

Contact
press@fidoalliance.org

Tuesday, 25. November 2025

We Are Open co-op

Solidarity for Freelancers in 2026

This post is an extended way of saying come to WAO's *Open* Xmas Party!

We Are Open Co-op (WAO) members want to collaborate. It’s why, almost a decade ago, we formed a cooperative – so that we could continue to work together.

When we started WAO, one of the things we talked a lot about was “solidarity for freelancers” because it's what we wanted most: a cooperative where we could find solidarity and support.

Lately, we've been reminded of this idea. Partly that's because this year has felt strenuous. We’ve felt like a lot of people have been struggling. We have struggled too – while trying to be there for other people as we collectively navigate the harsh realities of doing business in the mid 2020s.

We feel privileged and downright lucky that we are mostly OK. Wanting to go into 2026 both reinvigorated and in a spirit of solidarity, we want to make an offer to you: Let’s find ways to work together in the new year – we’re sure we can help each other.

We recently made a list of what WAO has to offer, which you can see on our wiki. The TL;DR, however, is...

1. Collaboration over Competition

For decades, we’ve contributed to open source projects, run open meet-ups, given “free” advice, met people and worked out loud. We know that the benefits of collaboration aren't just financial, but being able to pay the rent/mortgage kinda helps.

Nevertheless, we don’t have to compete with one another. We can collaborate. We can help you learn new skills around open working, and we're sure you've got stuff to teach us, too. In fact, we're always picking up vital lessons from people in our networks.

2. Networking over Not Working

Speaking of networks, we’re so pleased to have wide networks of really awesome people doing badass things. We want to be part of that even more than we already are, and to grow the network of people with whom we're connected.

Since the pandemic, there have been far fewer offline events, which means that online interactions are more important than ever. Let's find ways to get to know one another, discovering our interests, talents, and weird little quirks ;)

3. Solidarity over Solitude

Ultimately, this is all about building the world which we want to exist. That's only going to happen through solidarity and having each others' backs. It's easy to think that the world is against you when things aren't going well, but everyone has a talent and something to offer.

We'd like to find ways in which we can all help one another. That happens by showing up for one another rather than retreating into darkened rooms and doomscrolling on our mobile devices.

So if this post resonates with you and you feel like doing something about it, why not start with WAO's *Open* Xmas party? It's online at 14:00 GMT on Thursday 11th December and YOU are invited. Yes, you.

​Bring your cocoa, wear your hat, and come along for a festive chat! Join us for a laugh or two and we might just have some games for you... 🎄 🎁 🎅


Velocity Network

Get More out of Open Badges: How Velocity Network complements Open Badges to ensure trust, privacy and utility

The post Get More out of Open Badges: How Velocity Network complements Open Badges to ensure trust, privacy and utility appeared first on Velocity.

MyData

When Children Design AI: What We Learned by Actually Listening

What if children aren’t just AI users to be protected, but experts we should be learning from? At MyData 2025, the MyData4Children workshop turned the usual conversation on its head. […]

Friday, 21. November 2025

FIDO Alliance

Financial IT: HYPR and Yubico deepen partnership to secure and scale passkey deployment through automated identity verification

For years, HYPR and Yubico have stood shoulder to shoulder in the mission to eliminate passwords and improve identity security. Yubico’s early and sustained push for FIDO-certified hardware authenticators and HYPR’s leadership as part of the FIDO Alliance mission to reduce the world’s reliance on passwords have brought employees and customers alike into the era of modern authentication.


Biometric Update: Regulatory clarification sets stage for major FIDO biometrics uptake in South Korea

South Korea has eliminated a significant barrier to the usage of the FIDO protocol for passwordless authentication by confirming that it falls outside the scope of a requirement for user consent to process biometrics.

Members of the FIDO Alliance Korea Working Group (FKWG) submitted an official inquiry to the Korea Personal Information Protection Commission (KPIPC), which has responded by stating that the consent rules do not apply to biometric processes performed entirely on user-controlled devices. Since biometric data is not collected, stored or processed by the organization requesting FIDO authentication, the process does not qualify as processing personal information under the Personal Information Protection Act.


Cyber Insider: Bitwarden brings passkey login support to Chrome extension

Bitwarden has rolled out support for passwordless login via passkeys across its browser extensions and web vault, allowing users to authenticate without entering a username, password, or two-factor code.


WebProNews: Passkeys Rise as Black Friday’s Fraud Shield

As Black Friday 2025 approaches, passwords remain digital security’s weak link, exploited by AI-driven scams. Dashlane CEO John Bennett champions passkeys for frictionless, phishing-resistant authentication, with e-commerce leaders like Amazon leading adoption. Dashlane’s tools and deals bolster fraud protection for shoppers and businesses.


IDAC Podcast: The FIDO Alliance’s Next Frontier: Digital Credentials and Wallets

Live from Authenticate 2025, Jeff Steadman and Jim McDonald sit down with the Cal Ripken of IDAC, Andrew Shikiar, Executive Director and CEO of the FIDO Alliance. Andrew shares exciting updates on the incredible progress of Passkeys, revealing that over 3 billion are now in use securing accounts. We discuss the key themes of the conference, including the ongoing arms race with AI in security and the critical role of identity verification. Andrew also unveils the new Passkey Index, an initiative to provide industry benchmarks for deployment success. Looking ahead, the conversation shifts to the FIDO Alliance’s broadening focus on digital credentials and wallets, aiming to solve the usability and certification challenges that have held the space back. Finally, we hear about the global expansion of the Authenticate conference brand, with a new event launching in Singapore.

Listen to the podcast: https://www.identityatthecenter.com/listen/episode/29aaaa94/384-the-fido-alliances-next-frontier-digital-credentials-and-wallets


Velocity Network

Elements of a Community-Governed Trust Framework (and how Velocity Network delivers them all TODAY)

The post Elements of a Community-Governed Trust Framework (and how Velocity Network delivers them all TODAY) appeared first on Velocity.

Wednesday, 19. November 2025

Hyperledger Foundation

Boosting Besu Performance: 2025 Accomplishments and 2026 Roadmap

Looking back to the beginning of 2025, improving performance in Besu was one of our major goals for the year. In this blog post we want to share what specific improvements we’ve made towards that goal in 2025, as well as some of the directions we’ll be focusing on for the year to come. 


FIDO Alliance

Beyond the Protocol: The Human-Centered Shift Defining the Future of Workforce Security

By FIDO Alliance UX Working Group’s Enterprise Subgroup leaders Patryk Les, Yubico and Philip Corriveau, RSA

As we celebrate Passkeys Week 2025, the momentum around passwordless authentication is undeniable. Across industries, organizations are taking real steps toward a future where passwords – and the risks they bring – finally fade away.

Recent research from the FIDO Alliance and its members shows that over 85% of enterprises are implementing or evaluating passkeys. The question is no longer if your organization will deploy them – it’s how you’ll do it effectively.

And that’s where the next chapter begins. Because the hardest part of passwordless security isn’t the cryptography – it’s the culture.

People Are Not the Weakest Link – They’re the Strongest Asset

For years, cybersecurity has been framed as a struggle to “fix” users – those who forget passwords, fall for phishing, or sidestep controls. But people aren’t the problem. They’re responding to systems that often work against natural human behavior.

Passkeys flip that model. They align authentication with how people already act – using biometrics, devices, and gestures they trust. When security design works with human tendencies, compliance becomes intuitive and adoption accelerates.

This is more than a technical improvement. It’s a leadership opportunity.

Three Lessons from the Front Lines

The FIDO Enterprise UX Subgroup’s research with enterprise deployments uncovered one clear truth: the biggest challenges are human, not technical. Here’s what leading organizations are learning.

1. Enrollment Is the First Moment of Trust
The first time a user registers a passkey isn’t just a setup step – it’s their first interaction with your new security culture. Complex flows or unclear prompts can create frustration and mistrust before the rollout even begins.

Leaders who treat enrollment as change management – offering clarity, support, and communication – set the tone for success.

2. Users Need a Mental Model, Not a Cryptography Lesson
Practitioners told us: “Give me a one-sentence definition users actually understand.” That’s because awareness without understanding is ineffective. The best explanation we heard?

“A password is an easy-to-copy key you remember.
A passkey is a hard-to-copy key your device remembers.”

Simple, relatable language builds trust far better than technical jargon.

3. Consistency Builds Confidence
When authentication looks different across browsers and devices, it creates decision fatigue and confusion. This isn’t just a UX problem – it’s a behavioral one. Inconsistency erodes confidence; consistency builds it.

Forward-thinking leaders now recognize that usability isn’t a luxury – it’s a security control.

Redefining Success: From Compliance to Culture

Traditional cybersecurity programs measure success through compliance metrics: completed trainings, documented policies, audit readiness. But those measures miss what truly matters – behavioral outcomes.

Leading organizations are shifting to human metrics:

Adoption and retention rates
User satisfaction (CSAT)
Reduced authentication-related support tickets

One organization exemplified this shift during the passkey rollout: when satisfaction dipped below their 4.0 target, they paused to improve the experience before resuming rollout. That’s human-centered leadership – prioritizing outcomes that strengthen both trust and security.

Leadership in the Human Era of Security

When deployments struggle, it’s rarely due to user resistance – it’s because systems weren’t designed with human behavior in mind.
Leaders now have a clear mandate:

Simplify choices and reduce cognitive load
Segment workforce experiences (field staff ≠ office staff)
Establish feedback loops to learn and iterate

The most successful organizations treat passkey deployment as a cultural transformation, not a technical upgrade. They recognize that security performance is shaped by psychology, environment, and design – not just protocols.

The Path Forward: Share Your Voice

This Passkeys Week, we invite workforce leaders everywhere to help shape the next wave of adoption.

Your insights – what worked, what didn’t, and what surprised you – can help the entire community deploy smarter, faster, and more human-centered systems.

Share your experience and help shape the future of workforce authentication.

Your stories power our collective learning – and move the industry forward.

Closing Thought

The technology is ready. The future of workforce authentication now depends on how we lead.

When we design for human nature instead of against it, security becomes intuitive, sustainable, and strong. The workforce isn’t the weakest link – it’s our greatest asset.

Let’s make Passkeys Week 2025 the moment we prove it.


Pocket-lint: Windows 11 is about to work way better with passkeys

It’s no secret that Microsoft is on board with ushering in a fully passwordless computing future — specifically one that’s powered by a newfangled technology known as passkeys. Back in June, the tech giant confirmed its intention to bring so-called plugin passkey provider integration to Windows 11 in a future update, and, as of the recently-released November 2025 security update, the functionality is now live for a growing number of PC users running the latest version of the operating system.


Next Level Supply Chain Podcast with GS1

Hook, Line, and Data: How Beaver Street Fisheries Ensures Seafood Safety with Tech

Behind every safe meal is someone who got the data right.

Brandon Ballew, Senior Product Information Analyst at Beaver Street Fisheries, joins hosts Liz Sertl and Reid Jackson to discuss how the seafood industry is preparing for FSMA Rule 204, and how GS1 standards like GLNs, GTINs, and 2D barcodes are helping ensure food safety from port to plate.

Brandon shares how his team uses data to improve visibility, partner collaboration, and customer confidence, proving that accurate and standardized information benefits everyone in the supply chain.

In this episode, you'll learn:

How GS1 standards support FSMA 204 traceability

Why data quality impacts both compliance and customer trust

How Beaver Street Fisheries uses GLNs to connect digital and physical supply chains

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(03:51) Preparing for FSMA 204 and data-driven traceability

(06:08) Lessons from SIMP and early adoption of seafood standards

(09:04) Creative uses of GLNs across operations

(12:31) Data synchronization with GS1 Data Hub and 1WorldSync

(15:34) The cost of inaccurate data in online ordering

(19:00) Educating trading partners about FSMA compliance

(22:11) The benefit of 2D barcodes for consumers

(26:37) Brandon's favorite technologies

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn

This episode is brought to you by: Avery Dennison and Syndigo

If you're interested in becoming or working with a GS1 US solution partner, please connect with us on LinkedIn or on our website.

Connect with the guests: Brandon Ballew on LinkedIn

Check out Beaver Street Fisheries

Tuesday, 18. November 2025

FIDO Alliance

9TO5Mac: Apple @ Work Podcast: State of the union for passkeys

In this episode of Apple @ Work, Rew Islam from Dashlane joins the show to talk about the company’s new report: The 2025 Dashlane Passkey Power 20.

Listen to the podcast.


Oasis Open

Coalition for Secure AI Releases Two Actionable Frameworks for AI Model Signing and Incident Response

OASIS Open Project Delivers Practical Tools to Build Trust and Defend AI Systems at Scale

Boston, MA – 18 November 2025 – OASIS Open, the international open source and standards consortium, announced the release of two critical publications advancing AI security practices from the Coalition for Secure AI (CoSAI), an OASIS Open Project. These new resources provide practical frameworks to help organizations strengthen the security and trustworthiness of their AI systems. CoSAI’s Software Supply Chain Security for AI Systems Workstream released “Signing ML Artifacts: Building towards tamper-proof ML metadata records” and the Preparing Defenders for a Changing Cybersecurity Landscape Workstream published “AI Incident Response Framework V1.0.” Together, these frameworks address key aspects of the full lifecycle of AI assurance, from preventing tampering before deployment to responding effectively when systems are attacked.

Model Signing: Building Trust in AI Supply Chains

Workstream 1’s publication, “Signing ML Artifacts,” addresses one of the most pressing challenges in AI deployment: verifying the authenticity and integrity of AI models before integrating them into mission-critical systems. As AI becomes woven into critical business processes, the question is no longer whether to implement model signing, but how quickly organizations can move to adopt it. Workstream 1’s guidance offers both the technical depth and implementation roadmap needed to accelerate adoption while ensuring interoperability across the AI ecosystem and maintaining the security, trust, and compliance their businesses demand.

“Model signing delivers tangible business value: reduced security risk, streamlined compliance, and increased stakeholder trust. This framework gives enterprises the tools to confidently deploy AI while maintaining visibility and control over their most valuable ML assets throughout their entire lifecycle,” said Workstream 1 Leads Andre Elizondo of Wiz, Matt Maloney of Cohere, and Jay White of Microsoft.

The publication introduces a staged maturity model designed to help organizations adopt model signing effectively, beginning with establishing basic artifact integrity through digital signatures, ensuring that models can be verified against unauthorized changes. It then advances to incorporating signature chaining and lineage, which create clear provenance trails and enable traceability across the entire AI supply chain. Finally, it integrates structured attestations and policy controls to support comprehensive AI governance frameworks that align with organizational security and compliance requirements.
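The first stage of that maturity model, basic artifact integrity via signatures, can be sketched in a few lines. This is a minimal illustration, not the framework's specified implementation: it uses a symmetric HMAC as a stand-in for the asymmetric signing (e.g. Ed25519 via Sigstore) a production deployment would use, and all names and key values are hypothetical.

```python
import hashlib
import hmac

def artifact_digest(artifact: bytes) -> str:
    # Stage one: a cryptographic digest establishes basic artifact integrity.
    return hashlib.sha256(artifact).hexdigest()

def sign_digest(digest: str, key: bytes) -> str:
    # Stand-in signature using HMAC; real model signing would use an asymmetric
    # scheme so that verifiers hold no secret key.
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, signature: str, key: bytes) -> bool:
    # Verify the model has not been tampered with since it was signed.
    expected = sign_digest(artifact_digest(artifact), key)
    return hmac.compare_digest(expected, signature)
```

The later maturity stages (signature chaining, attestations) would layer provenance metadata on top of this basic integrity check.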

AI Incident Response: Preparing Defenders for Evolving Threats

AI systems face unique threats including data poisoning, model theft, prompt injection, and inference attacks that traditional incident response frameworks aren’t designed to handle. Workstream 2’s “AI Incident Response Framework V1.0” equips security practitioners with comprehensive, AI-specific guidance to detect, contain, and remediate these emerging threats.

“AI adoption is reshaping enterprise security, and operationalizing incident response with rapidly changing technology presents new challenges,” said Vinay Bansal of Cisco and Josiah Hagen of Trend Micro, CoSAI’s Workstream 2 Leads. “This framework presents incident examples over common AI use cases and provides playbooks specific to new risks in AI systems, helping organizations move from theory to practice.”

The framework complements existing guidance by addressing capabilities and gaps unique to AI. It helps defenders minimize the impact of AI exploitation while maintaining auditability, resiliency, and rapid recovery, even against sophisticated threats. The guide also tackles the complexities of agentic AI architectures, emphasizing forensic investigation and providing concrete steps to prioritize security investments, scale mitigation strategies, implement layered defenses, and navigate AI governance challenges.

Industry Collaboration and Impact

Together, these publications – developed from the collaborative efforts of CoSAI’s more than 40 industry partners, including Premier Sponsors EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Snyk, Trend Micro, and Zscaler – build on and reinforce CoSAI’s broader initiatives, including the recent Strategic Update, the donation of Google’s Secure AI Framework (SAIF), and the Principles for Secure-by-Design Agentic Systems.

Technical contributors, researchers, and organizations are welcome to participate in its open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.

Both frameworks are publicly available on the CoSAI GitHub pages: 

Signing ML Artifacts: Building towards tamper-proof ML metadata records
AI Incident Response Framework V1.0

About CoSAI

The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide. CoSAI operates under OASIS Open, an international standards and open-source consortium. www.coalitionforsecureai.org

Media Inquiries: communications@oasis-open.org

The post Coalition for Secure AI Releases Two Actionable Frameworks for AI Model Signing and Incident Response appeared first on OASIS Open.


The Engine Room

Psychosocial support and digital safety: A conversation with Fundación Acceso on spyware attacks and collective care

As part of our work within the spyware network, The Engine Room collaborated with Fundación Acceso to investigate the psychosocial impact of spyware attacks and develop resources to strengthen support and accompaniment.

The post Psychosocial support and digital safety: A conversation with Fundación Acceso on spyware attacks and collective care appeared first on The Engine Room.


MyData

MyData in Practice: Transforming Job Seeking By Aggregating #SkillsData

One of the most frequent questions we hear at MyData Global is: “Who’s actually doing MyData in practice?” It’s a fair question. While the principles of human-centric data control sound […]

Monday, 17. November 2025

DIF Blog

Authorising Autonomous Agents at Scale

Series: Building AI Trust at Scale
Part 4 · By DIF Ambassador Misha Deville
View all parts →

Think about how you share a Google Doc. You can add specific people to an access list or use “Anyone with the link”. The first gives some control but requires manual approvals. The second scales effortlessly but gives no control over who the link gets passed to. This captures two fundamental access models: identity-based, which grants access based on who you are, and capability-based, which grants access based on what you possess.
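The two models in the document-sharing analogy can be sketched directly. A toy illustration, not any particular product's implementation; the document IDs and user names are made up:

```python
import secrets

# Identity-based: an access list keyed by WHO you are.
acl = {"doc-123": {"alice@example.com", "bob@example.com"}}

def acl_allows(doc_id: str, user: str) -> bool:
    return user in acl.get(doc_id, set())

# Capability-based: possession of an unguessable link token grants access,
# regardless of who presents it ("Anyone with the link").
link_tokens = {"doc-123": secrets.token_urlsafe(16)}

def link_allows(doc_id: str, token: str) -> bool:
    return secrets.compare_digest(link_tokens.get(doc_id, ""), token)
```

The trade-off in the analogy shows up in the code: the ACL requires an entry per person (manual approvals), while the link token can be forwarded freely and the resource owner never learns to whom.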

OAuth Was Built for Humans

OAuth 2.0 lets you grant applications access to your resources without sharing passwords. When you authorise Slack to access your Google calendar, Slack receives an OAuth access token that lets it retrieve event data on your behalf. This works because: 1) a human validates the request, 2) permissions can be broad and long-lived, or narrowly scoped at issuance, and 3) one identity represents one user.


Abstract Protocol Flow, The OAuth 2.0 Authorization Framework[1]

In practice, OAuth primarily treats tokens as impersonation rather than delegated authority. ‘Sign in with Google’ grants full access within your application context. The credential represents you, not a scoped capability. This trade-off prioritises usability and session continuity over least-privilege delegation. For human sessions, implicit impersonation is acceptable. For autonomous agents, it can be catastrophic[2].

Agents need to act without constant human approval. They require fine-grained, time-limited permissions that can be safely delegated through chains of sub-agents. Multiple agents can represent one user or act for multiple users. But OAuth has no concept of delegation chains. As Microsoft’s Alex Simons writes, agents need “their own defined set of privileges - not just proxy a user’s rights”[3]. Actions must be traceable, distinguishing whether an agent is acting for a user, for itself, or through a chain of agents.

The fundamental limitation emerges across security boundaries. As Nicola Gallo, creator of ZTAuth* and co-chair of DIF’s Trusted AI Agents Working Group, explains, “Each web API is its own security boundary. When invoking a service that resides behind a separate security boundary, trust cannot rely on the token alone. Establishing trust requires a verifiable trust chain that validates both the token and the identity of the entity that forwarded the request.”

Most agent interactions today are not fully autonomous[4] and operate within limited scopes via protocols like MCP[5]. MCP builds on OAuth and standardises connections between AI models and other services (think USB ports for agents). This works quite well within single security boundaries, where MCP provides a common language between known agents and servers. But it doesn't address attenuated delegation, ephemeral agent lifecycles, or cross-boundary trust establishment.

“[MCP] is limited in its full scope towards authorized delegation, enabling only system communication and optionally access controls rather than broader authentication and identity management”[6].

Three Breaking Points
1/ The human-in-the-loop and ‘prompt fatigue’: 1Password’s Secure Agentic Autofill lets AI agents complete browser logins by injecting credentials without exposing them to the agent[7]. But it still requires human approval for each access. Frequent authorisation prompts can lead to “prompt fatigue”[8], where users mindlessly approve requests without scrutiny. This may work for one agent and a few resources. It breaks down completely with hundreds of autonomous agents crossing multiple security boundaries, delegating permissions to sub-agents unpredictably.

Andor Kesselman, co-founder of the Agentic Internet Workshop and co-chair of DIF’s Trusted AI Agents WG, puts it plainly: “Imagine sitting at your job, just clicking approve, approve, approve for every single OAuth request coming in from your agents. We would have created a completely dystopian world.” The paradox is the more effective your agents become, the more difficult the human-in-the-loop problem is to manage.

2/ Missing attribution: When multiple agents operate under the same account credentials, they become indistinguishable from each other both in real-time and retrospect. GitHub sees actions by “alice@company.com” but has no way to know which specific agent executed the task.

A typical organisation of 100 employees using Claude, ChatGPT, and Gemini to interact with external services might spawn an average of 10 agent instances per employee per tool each day; that’s 3,000 agent instances operating daily. When something goes wrong (e.g. private internal data written to a public GitHub repository) you might see that an MCP server acted on behalf of alice@company.com’s token, but you can’t identify which agent was compromised or investigate how. Revoking one token might break multiple agents. You either keep all agents connected or shut them all down.

Andor uses the analogy of a car key. If you give your key to someone, who then gives it to someone else, who gives it to another person, and someone crashes your car, you can’t prove who did it because every access to the car used your key. This also means you remain accountable. Without distinct agent identities, delegation chains aren’t traceable, auditable, or debuggable.

3/ Agent Lifecycles: Agentic systems frequently spawn short-lived agents for specific tasks. They might exist for 15 minutes, then terminate. But OAuth assumes authorisation relationships persist over time. For ephemeral agents, you either grant overly broad access using your credentials, or create individual authorisations that require human approval, leaving stale tokens that outlive the agent.
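The mismatch is easy to quantify with illustrative numbers. The 15-minute agent lifetime comes from the scenario above; the 8-hour token TTL is an assumed, typical value, not a figure from the article.

```python
# Illustrative lifecycle mismatch: the agent terminates after 15 minutes,
# while an assumed 8-hour OAuth access token persists long after it.
AGENT_LIFETIME_S = 15 * 60   # agent exists for 15 minutes, then terminates
TOKEN_TTL_S = 8 * 60 * 60    # assumed token lifetime: 8 hours

# The stale window: time during which a leaked token still authorises
# the dead agent's scope.
stale_window_s = TOKEN_TTL_S - AGENT_LIFETIME_S
stale_window_h = stale_window_s / 3600  # 7.75 hours of stale authority
```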

Even longer-lived agents break OAuth’s model. Agents coordinating distributed transactions across multiple identity providers (IdPs) must have tokens that persist so the system can maintain state and handle failover. But when an agent connects to multiple IdPs with different trust relationships, it creates what Nicola calls the ‘Internet of Shared Credentials’ paradox. It also makes dynamic recovery impossible because you can’t reauthorise all transactions in real time.

OAuth’s basic model of “trust established at issuance, valid until expiration” fails for both scenarios: agents that spawn dynamically and disappear, and agents that orchestrate workflows across multiple trust domains.

Why Object Capabilities Alone Aren’t Enough

If identity-based access models don’t work for autonomous agents, what about capability-based systems? Object capabilities grant access through unforgeable tokens scoped to specific actions. They enable delegation chains, provide automatic least privilege through attenuation, and can be time-bounded. As Alan Karp writes, “Without chaining, every private in the army is saying ‘Yes sir, Mr. President.’ Without attenuation, that private ends up with permission to launch nukes.”

Much like “Anyone with the link” for Google Docs, you set capabilities upfront (e.g. ‘view’, ‘comment’, or ‘edit’) and pass them along attenuated chains to then-unknown second and third parties.
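A minimal sketch of attenuation in the Google Docs spirit above; the `Capability` class is hypothetical, but it shows the core invariant: each delegation can only keep a subset of the parent's rights, never add to them.

```python
# Hypothetical capability sketch: delegation may only narrow rights.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    resource: str
    rights: frozenset  # e.g. {"view", "comment", "edit"}

    def attenuate(self, keep: set) -> "Capability":
        # Intersection guarantees a delegate never gains a right the
        # delegator didn't hold ("that private can't launch nukes").
        return Capability(self.resource, self.rights & frozenset(keep))

doc = Capability("doc:report", frozenset({"view", "comment", "edit"}))
reviewer = doc.attenuate({"view", "comment"})    # second party
commenter_bot = reviewer.attenuate({"comment"})  # then-unknown third party

# Attempted escalation silently yields nothing: reviewer never held "edit".
escalated = reviewer.attenuate({"edit"})
```

The intersection is the entire security argument: least privilege falls out automatically because no step in the chain can widen what it received.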

Alan’s access management use cases[9] form a key conceptual foundation for DIF’s Trusted AI Agents WG approach, and illustrate the power of capabilities through a backup scenario.

Alice tells her agent Bob: “Backup X to Y.” Bob, as a backup service provider, has broad permissions. Bob uses a copy service provided by Carol, and passes those permissions onward. But Alice changes her instruction: “Backup X to Z.” If Z is owned by Carol, she ends up overwriting her own resource. If Z belongs to someone else, Alice enables unauthorised updates to resources she doesn’t control.


‘Transitive Access Problem’, Use Cases for Access Management

This is the ‘confused deputy vulnerability’, where a privileged service can be tricked into misusing its authority. With capabilities, Alice designates resources by delegating specific tokens: query X, update Y. Bob executes using Alice’s capabilities, not his own broad permissions. Z is safe because Alice never granted that capability.

Capabilities solve critical access control problems. But for autonomous agents operating at scale, they’re not enough on their own, primarily because you can’t predict all capabilities upfront, and because agents need capabilities expressed in much narrower scopes than human users do. As Andor says, “you can have an agent specifically scoped to email, but humans encompass a much broader set of capabilities”.

Take Nicola’s travel example: An agent books a rental car to line up with a flight time, then the flight becomes unavailable. The agent needs to cancel that specific car booking, but you couldn’t have known which car would need cancellation when you granted capabilities. Granting “cancel any booking” upfront is too broad for an agent. But without knowing which specific agent made the booking, you can’t dynamically scope the cancellation capability to just that transaction.

“While capability-based models provide strong security guarantees and natural least privilege, they are not trivial to apply in dynamic or stateful distributed environments,” explains Nicola. “Recovery, rollback, and context rehydration require additional mechanisms beyond static capability assignment.” Even if you can apply fine-grained capabilities for security, excessive granularity becomes impractical. Too many capabilities require frequent user approvals or inter-agent exchanges. You have to balance security against operational practicality.

Dmitri Zagidulin, co-chair of DIF’s Trusted AI Agents Working Group, further explains the “patchwork problem”: without agent identifiers, capabilities from multiple sources can be composed in unintended ways. An agent might get read access from one source and write access from another, then combine them into access patterns you never explicitly authorised.

With autonomous agents, you need to prove who they are, who spawned them, who they’re acting on behalf of, and if they have the authorisation to carry out the task they’re requesting. All in a format that the requested service can verify without “phoning home” or checking with a central authority. To do this, you need both identifiers and capabilities working together across boundaries.

Attaching Identity to Capability
“What’s becoming clear to everybody that has anything to do with AI agents is that agents and any sort of software need their own identity,” explains Dmitri. “Not just for access control, but for quality assurance.” This is akin to how mobile apps have least-privilege guardrails on access to other apps on your device, and registered accountability to their creators.

“When you’re interfacing with an agent, you actually do care a lot about who built the agent, where they came from, how the agent got spun up, and where it’s running,” says Andor. But as Alan notes, what matters isn’t the arbitrary identifier, it’s that the identifier enables verification of the agent’s provenance and capabilities. For example, NANDA’s first project focuses on agent discovery[10]. The system allows agents to advertise their capabilities dynamically through cryptographically verified “AgentFacts”, which declare what the agent can do, under what conditions, and with what limitations. When an agent needs specialised functionality, it queries for agents offering those services, verifies their credentials, and grants them attenuated access strictly scoped to the sub-task. This enables composable authorisation, where agents can discover and delegate to each other on the fly, rather than via centralised orchestration.
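As a sketch only, an AgentFacts-style declaration and discovery step might look like the following. The field names, DIDs, and registry are invented for illustration and are not NANDA's actual schema or API.

```python
# Invented AgentFacts-style records; field names are NOT NANDA's real schema.
summarizer_facts = {
    "agent": "did:example:summarizer-7",
    "capabilities": ["summarize:text"],          # what it can do
    "conditions": {"max_input_chars": 100_000},  # under what conditions
    "limitations": ["no-network-egress"],        # with what limitations
    "proof": {"type": "Ed25519Signature2020"},   # verified cryptographically
}
translator_facts = {
    "agent": "did:example:translator-3",
    "capabilities": ["translate:text"],
    "conditions": {},
    "limitations": [],
    "proof": {"type": "Ed25519Signature2020"},
}
registry = [summarizer_facts, translator_facts]

def discover(registry, needed_capability):
    # A real implementation would verify each record's proof before
    # trusting it, then grant attenuated access scoped to the sub-task.
    return [f for f in registry if needed_capability in f["capabilities"]]

matches = discover(registry, "summarize:text")
```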

This represents a fundamental mental model shift in how we think about authorisation. Traditional identity systems operate on “authority by identity”. You prove who you are, and the system checks what you can access against a central registry. This worked for humans with relatively stable roles, particularly within organisations. Agents, by contrast, require “authority by possession”: they prove they hold a capability token that grants specific permissions. But critically, as Nicola frames it, “Trust doesn’t come with impersonation. Trust comes from knowing who is executing the action”. You need to know who to hold accountable if something goes wrong. To establish this accountability across organisational boundaries, you need distributed and dynamic identity systems rather than centralised, static ones.

The Case for DIDs and VCs
Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs) provide the technical foundation for agents to operate across trust boundaries.

DIDs offer cryptographically-anchored identity that’s portable across platforms and organisations. Unlike email addresses or OAuth client IDs that belong to specific providers, a DID is controlled by the entity it identifies. An agent can prove its identity cryptographically without depending on a centralised authority to vouch for it every time. This portability is critical when agents need to work across multiple services that don’t have federation agreements.

VCs provide a standardised format (W3C VC Data Model, or ISO/IEC 18013-5) for expressing capabilities and attestations (e.g. ZCAP-LD[11]). They’re cryptographically signed by issuers, making them tamper-evident. They can include delegation chains showing how authority flows from human to platform to agent to sub-agent. They support selective disclosure, whereby an agent can prove it has a capability without revealing more information than necessary. And crucially, they have lifecycles independent of identity, meaning capabilities can be issued and revoked separately from the agent’s core identity.

Together, they enable hierarchical identity with delegation chains, dynamic capabilities, cross-boundary trust, and complete audit trails.

Now when an agent books CAR-123, the rental service automatically issues a VC with a scoped capability: ‘“cancel”, booking ID “CAR-123”, “only by the agent that created this booking”, valid until pickup’. If the flight becomes unavailable, the agent has exactly the capability it needs without requiring human approval or having overly broad permissions.
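A sketch of that scoped credential and its verification check. The document shape below is illustrative, not a normative W3C VC profile, and all identifiers are invented.

```python
# Illustrative capability credential for the CAR-123 scenario; not a
# normative W3C VC profile, and all DIDs/IDs are invented.
import datetime

booking_capability = {
    "type": ["VerifiableCredential", "CapabilityCredential"],
    "issuer": "did:example:rental-service",
    "credentialSubject": {
        "id": "did:example:travel-agent-42",  # only the booking agent may invoke
        "action": "cancel",
        "target": "booking:CAR-123",          # scoped to this booking only
    },
    "expirationDate": "2026-03-01T09:00:00+00:00",  # valid until pickup
}

def may_cancel(cred, invoker_did, booking_id, now):
    subject = cred["credentialSubject"]
    expires = datetime.datetime.fromisoformat(cred["expirationDate"])
    return (
        subject["action"] == "cancel"
        and subject["id"] == invoker_did               # who is executing
        and subject["target"] == f"booking:{booking_id}"
        and now < expires                              # time-bounded
    )

now = datetime.datetime(2026, 2, 20, tzinfo=datetime.timezone.utc)
```

The check is local and self-contained: the verifying service needs only the signed credential, not a call back to a central authority.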

There are of course still practical challenges with this approach. DID registration has performance implications. Revocation checking impacts latency. Key management, storage, and rotation for ephemeral agents create new operational needs.

The alternative, however, of trying to scale OAuth-based systems to billions of autonomous agents crossing organisational boundaries, is significantly more problematic. The major payment networks, Google, and Microsoft aren’t building agent identity systems because they love new standards. They’re building them because they know the current models break at multi-agent scale.

The Path Forwards
The pragmatic path forwards, as Andor explains, is to start exposing external agents with decentralised identifiers. Organisations can maintain their existing OAuth infrastructure for internal systems while giving agents portable identities for cross-boundary interactions.

Solutions like SPIFFE[12] represent a middle path. Not fully decentralised, but enabling workload identities to establish trust across organisational boundaries without requiring universal federation. SPIFFE provides cryptographically verifiable identity for workloads, creating a foundation where agents can prove ‘who is executing’ a task. However, you still need to be able to verify delegation context and trust positions. Emerging frameworks, like ZTAuth*, WIMSE[13], and others, are developing complementary approaches that combine workload authentication with explicit delegation mechanisms.
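SPIFFE names workloads with URIs of the form spiffe://&lt;trust-domain&gt;/&lt;workload path&gt;. A minimal parsing sketch follows; in real deployments trust comes from verifying the workload's signed SVID (an X.509 certificate or JWT carrying this URI), not from string checks.

```python
# Minimal SPIFFE ID parse; real trust comes from verifying the workload's
# SVID, not from parsing the identifier string.
from urllib.parse import urlparse

def parse_spiffe_id(uri: str):
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri}")
    return parsed.netloc, parsed.path  # (trust domain, workload path)

# Hypothetical agent workload identifier:
domain, path = parse_spiffe_id("spiffe://example.org/agents/booking-agent")
```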

Marketplaces provide a good business case for this. “Marketplaces are great because you can rarely capture both sides of the market,” Andor says, “You’re normally on one side or the other, so that means you have to interact with each other, and you can’t always be in your federation.” Using DIDs and VCs, agents from different platforms can establish trust and verify delegation chains without requiring pre-existing reciprocity of identification systems.

Admittedly, both Andor and Dmitri think the catalyst for change will likely be a significant security incident attributable to inadequate agent identity management, or regulatory requirements for accountability that current systems can’t satisfy. It’s better to get ahead of both.

Get Involved
DIF’s Trusted AI Agents WG is actively defining an opinionated, interoperable stack to enable trustworthy, privacy-preserving, and secure AI agents. The first work item, Agentic Authority Use Cases, focuses on anchoring the work in real, human-led use cases, to help prioritise efforts and discover where things break down.

The vision of autonomous agents executing complex tasks across organisational boundaries has captured significant investment and attention, but as Andor puts it, “Identity is the first problem that needs to be solved in the agentic web to make it happen and for agents to scale”.

A huge thank you to Andor Kesselman, Dmitri Zagidulin, Nicola Gallo, and Alan Karp, for their time and insights in preparing this article.

To learn more or get involved with DIF’s Trusted AI Agents work, visit the Trusted AI Agents working group page.

Building AI Trust at Scale series, by DIF Ambassador Misha Deville. Previous in this series: Part 3 – Why your content needs an ingredient list.

References

1. Hardt, D. (Ed.) (2012). “The OAuth 2.0 Authorization Framework”. IETF.
2. Lab42AI (2025). “When OAuth Becomes a Weapon: AI Agents Authentication Crisis”. Hackernoon.
3. Simons, Alex (2025). “The Future of AI Agents - Why OAuth Must Evolve”. Microsoft Entra.
4. Feng, K. et al. (2025). “Levels of Autonomy for AI Agents (Working Paper)”. arXiv.
5. Anthropic (2024). “Introducing the Model Context Protocol”. Anthropic.
6. South, T. et al. (2025). “Authenticated Delegations and Authorized AI Agents”. arXiv.
7. Wang, Nancy (2025). “Closing the credential risk gap for AI agents using a browser”. 1Password.
8. South, T. et al. (2025). “Authenticated Delegations and Authorized AI Agents”. arXiv.
9. Karp, Alan (2025). “Use Cases for Access Management”. Alanhkarp.com.
10. NANDA: The Internet of AI Agents.
11. Lemmer-Webber, C. et al. (2025). “Authorization Capability for Linked Data v0.3”. W3C.
12. SPIFFE Overview.
13. WIMSE IETF Working Group. “Workload Identity in Multi-System Environments”. GitHub.

Friday, 14. November 2025

FIDO Alliance

Security Boulevard: HYPR and Yubico Deepen Partnership to Secure and Scale Passkey Deployment Through Automated Identity Verification

For years, HYPR and Yubico have stood shoulder to shoulder in the mission to eliminate passwords and improve identity security. Yubico’s early and sustained push for FIDO-certified hardware authenticators and HYPR’s leadership as part of the FIDO Alliance mission to reduce the world’s reliance on passwords have brought employees and customers alike into the era of modern authentication.

Thursday, 13. November 2025

FIDO Alliance

Digital Trends: Windows 11 finally lets you use Passkeys through your own password manager

Microsoft is making Windows 11 a lot friendlier to your favorite password manager. Windows 11 now supports third-party passkey managers, meaning you’re not locked into Microsoft Password Manager anymore.

Passkeys are part of the FIDO standard, a newer authentication method that replaces passwords with secure, device-bound cryptographic keys. Unlike passwords, passkeys can’t be phished, reused, or stolen from the cloud.


Kantara Initiative

Kantara Achieves Historic First: Accredited to Certify Against the UK DIATF

First Conformity Assessment Body (CAB) certifying against the UK Digital Identity and Attributes Trust Framework (DIATF) THAMES DITTON, England, 13 November 2025 — The Kantara Initiative (Kantara), the acknowledged expert […]


Wednesday, 12. November 2025

DIF Blog

DIF Newsletter #55

November 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Group Updates
4. User Group Updates
5. Announcements
6. Community Events
7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Steering Committee Election Results

We're pleased to announce the results of our Steering Committee elections, welcoming three new members who bring diverse expertise and perspectives to DIF's leadership:

JC Ebersbach (identinet)
Matt McKinney (ArcBlock, AIGNE)
Eric Scouten (Adobe)

We welcome re-elected DIF Steering Committee members, including:

Sam Curren (Indicio)
Rouven Heck (Fidenexum)
Markus Sabadello (DanubeTech)

These members will guide DIF's strategic direction, oversee working group activities, and strengthen connections across the growing decentralized identity ecosystem. Their combined experience in standards development, enterprise implementations, and emerging technology governance will be invaluable as DIF enters its next phase of growth.

🛠️ Working Group Updates

Browse our active working groups here

Trusted AI Agents Working Group

The Trusted AI Agents Working Group has made substantial progress on its inaugural Agentic Authority Use Cases work item by creating use case clusters and architectural component mappings. Use cases span enterprise workflows, travel booking, calendar management, and supply chain scenarios. The group established a governance process for evaluating and advancing use cases, with emphasis on identifying stakeholders willing to drive implementation. Key discussions have focused on agent discovery mechanisms, trust registries, and the balance between decentralized identity approaches and traditional federation models.

The working group is exploring how existing DID and verifiable credential standards can be adapted for AI agents while considering unique requirements like delegation chains, authorization boundaries, and human oversight mechanisms. With strong participation from across the DIF community, the group aims to deliver concrete specifications and reference implementations by early 2026.

Hospitality & Travel Working Group

The HAT Pro specification advanced with Neil Thomson's automated schema generation tools using PlantUML and JSON Schema. The team developed comprehensive data models for travel profiles including identity, preferences, accessibility requirements, and relationships. Key architectural decisions established distinct branches for parallel development. Marketing expanded with outreach to Oracle, Marriott, and Hospitality Solutions. The group is developing use cases for AI-driven travel agents that maintain traveler control over personal information.

👉 Learn more and get involved

Creator Assertions Working Group

Finalized an interim trust model valid through March 2027, leveraging existing S/MIME governance programs. Integrated with IPTC Verified Publishers lists and Mozilla root stores with email trust bits enabled. Developed best practices for media identifier standards using EIDR and DDEX. Beginning work on agentic identity and AI-generated content provenance.

👉 Learn more and get involved

Applied Crypto Working Group

BBS+ work item reviewed recent academic security analyses, confirming robustness for practical applications. Advanced integration with device-bound credentials combining BBS with ECDSA for hardware security modules. Evaluating approaches for pseudonym systems and blind signatures. Examining post-quantum combinations for "everlasting privacy" guarantees and coordinating with IETF standardization efforts.

👉 Learn more and get involved

DID Methods Working Group

Conducted deep dives into did:webs (DID:web with KERI) and MDIP (Multidimensional Identity Protocol). MDIP uses IPFS for content addressing, supports multiple blockchains including Bitcoin and Litecoin testnets, with ~18,000 active DIDs. Refined the DIF recommendation process with updates to coordinated release strategy and formal review requirements.

👉 Learn more and get involved

Identifiers and Discovery Working Group

Finalized DID Traits v1.1 updates including well-known resource specifications and IANA registration processes. Patrick St-Louis showcased comprehensive DID:webvh server implementation with Explorer UI, GraphQL APIs, and BC digital trust integration. Addressed DID URL path resolution and service endpoint handling with proposed cascading fallback algorithm.

👉 Learn more and get involved

DIDComm Working Group

Explored post-quantum encryption approaches for DIDComm V3, focusing on key encapsulation mechanisms (FIPS 203) and hybrid encryption schemes. Reviewed new protocol proposals for workflow management, payment processing, vault coordination, and multi-signature documents. Evaluating CBOR encoding implementation and session management for ephemeral key exchange.

👉 Learn more and get involved

Claims & Credentials Working Group

Following the Credential Schema Specification 1.0 release, expanded work on business employment credentials and verified person credentials. Explored assurance levels for identity verification strength across different use cases. Began developing DIF membership credentials to demonstrate practical schema applications.

👉 Learn more and get involved

🌎 DIF Special Interest Group Updates

Browse our special interest groups here

DIF Hospitality & Travel SIG

The SIG hosted two key presentations this month. Google Wallet's team demonstrated their comprehensive digital identity vision for travel, including TSA integration, mobile driver's licenses, and privacy-forward data sharing through zero-knowledge proofs. The presentation drew over 100 participants and sparked discussions on wallet interoperability and EIDAS compliance.

Passive Bolt showcased frictionless hotel check-in using NFC technology, addressing the industry's low mobile key adoption rates. Their solution works with both existing infrastructure and future state-issued digital IDs, with deployment costs as low as $2,000 annually for a 200-room property.

👉 Learn more and get involved

APAC/ASEAN SIG

Terminal 3's Gary Liu presented on decentralized identity and privacy-enhancing technologies for AI agents, addressing trust and authorization challenges in agentic AI systems. The group explored Terminal 3's integration with Hedera blockchain and Open Campus ID, discussing regulatory compliance across APAC jurisdictions and data sovereignty requirements.

👉 Learn more and get involved

DIF Africa SIG

Focused on mobile-first credential management and offline verification capabilities for limited connectivity environments. Participants shared insights on regulatory frameworks across African nations and mechanisms for establishing trust without consistent internet access.

📢 Announcements

Ethereum Foundation PSE Grants Available for did:ethr Development

The Ethereum Foundation's Privacy and Scaling Explorations team has announced grant opportunities for advancing the did:ethr method specification. This funding opportunity aims to support development and standardization efforts for Ethereum-based DIDs, strengthening the intersection of blockchain technology and decentralized identity. Interested parties can learn more and apply at https://esp.ethereum.foundation/applicants/rfp/did_ethr_method_spec.

DIF Collaborates with Decentralization Research Center on Treasury Response

The Decentralized Identity Foundation collaborated with the Decentralization Research Center in responding to the US Treasury's Request for Comments, emphasizing the critical importance of self-sovereignty and fit-for-purpose design in systems handling people's identity data. The response highlighted how decentralized identity architectures can address regulatory concerns while maintaining user privacy and control. Read more about the collaborative response here.

🎉 Community Events

DevConnect and ZKID Day Coming Soon

Mark your calendars for DevConnect, featuring the dedicated ZKID Day focused on zero-knowledge identity solutions and privacy-preserving technologies. This event, starting November 17th, brings together developers, researchers, and implementers working at the intersection of zero-knowledge proofs and decentralized identity. Learn more about DevConnect at https://devconnect.org/ and register for ZKID Day at https://devconnect.org/calendar?event=zkid-day.

TRUSTECH 2025

DIF is partnering with TRUSTECH for their upcoming international event dedicated to innovative payment and identification solutions. TRUSTECH has provided the following information about their December 2-4 event at the Paris Porte de Versailles Exhibition Centre:

TRUSTECH offers a complete programme featuring conferences, keynotes, and pitch sessions, allowing participants to discover major innovations in key sectors such as: Biometrics, Innovative Payments, Cryptocurrencies, Smart Cards or Digital Identity.

The event welcomes upstream technology providers in the payment sector offering solutions like smartcard manufacturing, secured frameworks and transaction processing infrastructures, as well as identification solutions providers for both private and public sectors, offering Civil ID documents, authentication systems and Identity Access Management solutions.

TRUSTECH promises three intense days of networking to exchange ideas and insights, discuss trends, discover the latest innovations and solutions, while connecting with an international audience from Europe, Africa, North and South America, and Asia.

👉 Learn more about the event program
👉 Register using DIF's partner code

Browse the DIF Calendar:

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

Read the DIF blog

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Find more information on DIF’s slack or contact us at community@identity.foundation if you need more information.


Digital ID for Canadians

Statement on Bill C-8: Strengthening Cybersecurity While Preserving Digital Trust

November 12, 2025

Bill C-8 establishes the Critical Cyber Systems Protection Act (CCSPA) and enhances federal oversight of telecommunications to protect Canada’s critical infrastructure from cyber threats. DIACC recognizes the urgent need to protect vital systems underpinning our digital economy while maintaining the trust foundations essential to Canada’s prosperity.

Key Provisions

Bill C-8 establishes mandatory cybersecurity obligations for designated operators in telecommunications, banking, energy, transportation, and nuclear sectors:

Cybersecurity programs are required within 90 days, with annual reviews
Incident reporting to the Communications Security Establishment within 72 hours
Supply chain risk management and record-keeping in Canada
The federal government may issue confidential, binding directions without prior consultation
Penalties up to $15 million per violation for organizations

Critical Considerations

Encryption and Privacy Protections
Provisions grant broad powers to direct telecommunications providers “to do anything or refrain from doing anything.” The Privacy Commissioner noted Bill C-8 could result in the collection of subscriber information, communication data, metadata, and location data. The Intelligence Commissioner questioned whether warrantless seizure of information can be constitutionally justified.

Technical experts warn these powers could require weakening encryption standards. Encryption is foundational infrastructure for digital trust, protecting financial transactions, healthcare communications, and secure authentication systems that enable digital identity solutions.

Transparency and Accountability
The bill authorizes confidential directions without requiring consultation with affected operators or notification to the privacy oversight body. The Privacy Commissioner recommended that government institutions notify his Office of cybersecurity incidents involving material privacy breaches. The absence of privacy impact assessment requirements represents a significant safeguard gap.

Interoperability and Standards
Cybersecurity measures should align with frameworks, including DIACC’s Pan-Canadian Trust Framework (PCTF), which provides consensus-based protocols for digital identity and authentication. Consistency between federal cybersecurity requirements and provincial privacy regimes is essential for seamless digital services and interprovincial trade.

Economic Impact
Limited implementation detail exists, with specifics deferred to future regulations. The absence of exemptions for organizations with mature cybersecurity protocols and the lack of financial incentives for proactive investments may disproportionately impact small and medium enterprises. Requirements diverging from international standards could affect Canada’s competitiveness as a trusted destination for digital business.

DIACC’s Recommendations

DIACC encourages policy frameworks that:

Strengthen security without compromising privacy: Preserve encryption standards and privacy-enhancing technologies, enabling trusted digital interactions
Promote transparency and accountability: Implement privacy impact assessments and meaningful consultation with oversight bodies
Ensure interoperability: Align federal requirements with provincial frameworks and international standards
Balance security with civil liberties: Maintain robust Charter rights protections while securing critical infrastructure
Foster innovation: Enable Canadian organizations to compete globally while maintaining high security standards

Canada can establish cybersecurity governance that protects critical infrastructure while preserving trust, privacy, and innovation. DIACC encourages ongoing consultation to ensure Bill C-8 achieves security objectives while maintaining digital trust foundations essential to Canada’s economic prosperity and democratic values.

Joni Brennan
President, DIACC


Statement on Bill C-4: Balancing Economic Relief with Privacy Considerations

November 12, 2025

Bill C-4 introduces essential economic relief measures for Canadians, including tax reduction, housing incentives, and cost-of-living support during challenging times. These provisions respond to real pressures facing Canadian households and businesses, and represent meaningful efforts to provide fiscal relief when it is needed most.

However, Part 4 of the bill warrants careful consideration by policymakers and stakeholders across Canada’s digital trust ecosystem. This section amends the Canada Elections Act regarding how federal political parties handle personal information. According to the bill’s summary, Part 4 “amends the Canada Elections Act to make changes to the requirements relating to political parties’ policies for the protection of personal information.”

Key Provisions

The amendments would require parties’ privacy policies to be available in both official languages and written in plain language, stating “the types of personal information in relation to which the party carries out its activities” and explaining “using illustrative examples, how the party carries out its activities in relation to personal information.” These transparency requirements represent positive steps toward helping Canadians understand how their data is used in the political process.

However, the bill also includes a provision stating that “a registered party … cannot be required to comply with an Act of a province or territory that regulates activities in relation to personal information … unless the party’s policy … provides otherwise.” This clause raises questions about the interoperability of federal and provincial privacy frameworks, particularly as provinces continue to strengthen their own privacy legislation.

Considerations for the Digital Trust Economy

Privacy protection is a cornerstone of digital trust and civic confidence in democratic institutions. As Canadians increasingly engage with political processes through digital channels, the handling of personal information by political parties becomes more consequential. The data collected, ranging from contact information to political preferences and engagement patterns, requires robust safeguards that align with contemporary privacy standards.

Some stakeholders have raised questions about how these amendments align with evolving privacy standards across jurisdictions and sectors. One analysis suggests the changes could create “a regime where parties are held to standards far below those governing businesses, governments, and national security agencies.” While political parties operate in a unique context with constitutional dimensions around freedom of expression and association, the question of appropriate oversight mechanisms merits thoughtful examination.

Provincial privacy commissioners and data protection authorities have developed significant expertise in overseeing privacy practices across various sectors. The relationship between federal electoral processes and provincial privacy frameworks presents both jurisdictional complexities and opportunities for collaborative governance approaches.

DIACC Recommendations

As a multi-stakeholder organization focused on digital identity and trust, DIACC offers the following recommendations to strengthen Bill C-4 while maintaining its economic relief objectives:

Establish Independent Oversight: Consider establishing an oversight role for the Office of the Privacy Commissioner of Canada regarding federal political parties’ handling of personal information, with appropriate investigative and enforcement mechanisms that respect the unique context of democratic processes.
Maintain Baseline Provincial Standards: Amend the provision to ensure federal political parties remain subject to applicable provincial privacy laws as a baseline, while allowing parties to adopt higher standards voluntarily. This would maintain consistency with the principle of cooperative federalism and avoid creating a privacy protection gap.
Align with Modern Privacy Principles: Ensure party privacy policies align with the core principles of PIPEDA and contemporary provincial privacy legislation, including consent, purpose limitation, data minimization, accuracy, and accountability.
Implement Transparency and Reporting: Require federal political parties to publish annual transparency reports detailing the types and volumes of personal information collected, purposes of use, data retention periods, and any third-party sharing arrangements.
Enable Technical Interoperability: Encourage alignment with recognized privacy frameworks such as the Pan-Canadian Trust Framework (PCTF) to facilitate consistent privacy practices across federal and provincial jurisdictions and sectors.
Conduct Privacy Impact Assessments: Require political parties to conduct and publish privacy impact assessments when implementing new data collection technologies or significantly changing data handling practices.
Establish a Review Mechanism: Include a mandatory parliamentary review provision within three years to assess the effectiveness of these amendments and their alignment with evolving privacy standards and technologies.
Enhance Public Education: Support Elections Canada in developing public education resources to help Canadians understand their privacy rights in the political context and how to exercise control over their personal information.

DIACC encourages ongoing consultation between federal and provincial authorities, privacy commissioners, political parties, and civil society stakeholders throughout the implementation of these amendments. Strong privacy safeguards and economic relief need not be mutually exclusive; both are essential to building a resilient digital economy and maintaining trust in Canadian institutions.

By strengthening the privacy provisions in Part 4 while maintaining the essential economic relief measures in other parts of the bill, Parliament can demonstrate that protecting Canadians’ personal information and supporting their economic well-being are complementary priorities.

Joni Brennan
President, DIACC


Blockchain Commons

Announcing the 10-Year SSI Revision Project

In 2016, I published The Path to Self-Sovereign Identity (https://www.lifewithalacrity.com/article/the-path-to-self-soverereign-identity/) and with it proposed ten foundational principles for digital identity systems. These principles were centered on human dignity, agency, and consent and quickly became a touchstone in the emerging world of Self-Sovereign Identity. At the time, I asked for support

In 2016, I published The Path to Self-Sovereign Identity (https://www.lifewithalacrity.com/article/the-path-to-self-soverereign-identity/) and with it proposed ten foundational principles for digital identity systems. These principles were centered on human dignity, agency, and consent and quickly became a touchstone in the emerging world of Self-Sovereign Identity.

At the time, I asked for support in refining those principles. And, we tried: at ID2020, Rebooting the Web of Trust, IIW, and elsewhere. But for nearly a decade, the original principles have remained largely unchanged, inspirational yet sometimes misunderstood.

Now, in 2025, as the Self-Sovereign Identity movement turns ten next year, I’m renewing that invitation. And I’m asking for your participation.

📌 The 10-Year Revision Project

The 10-year SSI Revision Project is an effort to revisit and refine the original SSI principles, not as a rigid standard, but as a living framework for people designing, governing, and deploying identity infrastructure that respects and protects the people it serves.

SSI is no longer a theory. Its infrastructure has been adopted by governments, companies, communities, and protocols. As that adoption accelerates, the foundational values must evolve to meet today’s ethical, legal, and technical challenges, including coercion, use of biometrics, AI agency, exclusion by design, and gamified behavioral manipulation.

The goal is ultimately to revisit old principles and to propose new ones while making a renewed call to protect the dignity of all identity holders.

📅 Join the Collaboration

To support this project, I’ll be hosting a series of open online calls over the next year to co-develop and discuss revised principles, new proposals, and system guidance. These sessions will welcome technologists, designers, researchers, regulators, and community stewards from across the SSI and identity ecosystem.

Kickoff Meeting 1 — EU/US time compromise

Tuesday, December 2nd, 2025 10:00am PT / 7:00pm CET

Kickoff Meeting 2 — EU/Tokyo time compromise

Tuesday, December 9th, 2025 (Tokyo local) 3:00pm Tokyo / 7:00am CET / (10:00pm PT Monday for the host)

The goal of these first two meetings is to discuss opportunities for different topics, with the aim of writing up some initial rough “concept papers” (à la RWOT’s “topic papers”) to scope some of the ideas that people have about what needs to be done.

Hopefully you’ll join us for these initial calls. No long-term commitment is required, but the hope is that you’ll consider participating in writing some papers on this topic by May! However, the most important thing is ultimately that your ideas and your feedback help to shape the continued development of Self-Sovereign Identity.

Please let me know that you’re interested in joining us!

🧭 Team Topics

If we get enough participation, I expect we may split up into teams, to cover some of the various topics that bear discussion as we rethink SSI.

Some early broad topics that we are considering currently are:

Beyond Property: Principal Authority and the Legal Foundation of SSI. Agency law, principal authority, and revamping or expanding the SSI principles based on them.
Anti-Coercive Design and Cognitive Liberty. Avoiding coercive design, which will likely include more academic discussions of philosophy and may reveal new principles.
From Principles to Properties: Operationalizing SSI. A deep dive into the CSSPS 42-property framework published in IEEE Access, looking for objective design principles that might contribute to SSI. And forward from that.
More Than a Digital Shadow: Rewriting Principle 1 – Existence. Reclaiming the original intent of the first principle, that every person has an identity that precedes any digital system, drawing on generative identity, Ubuntu philosophy, feminist sovereignty, decolonial theory, legal personhood guarantees, and real-world harms.

The intent is to write articles on each of these topics, and hopefully some more, by May 2026, in time for the anniversary and to use those articles to revise, revamp, and expand the original principles. But to get there from here, we need to start coordinating now!

🌱 Why This Matters

SSI has always been more than a technical spec. It’s a movement for restoring dignity, agency, and trust in a digital world that too often erodes all three. As its adoption spreads, we must ensure that the principles at its foundation still serve the people they were meant to protect.

Let’s not let another ten years pass before we act.

📙 Requested Reading

I’ve written a number of articles about SSI over the years. I think three of them are particularly important to these discussions, and I suggest that people read them as part of this process:

The Path to Self-Sovereign Identity (2016). My original article, which lays out the pre-history of SSI and the initial 10 principles.
Origins of Self-Sovereign Identity (2021). A look at the philosophical and political roots of SSI, including its lineage in civil liberties, cryptographic activism, and human rights frameworks.
Principal Authority: A New Perspective on Self-Sovereign Identity (2021). How identity should not be framed as property, but as a domain of agency governed by fiduciary duty and inalienable rights.

I am also working on an annotated syllabus of some of the most important papers and articles published over the last nine years. I’ll share my initial pass before our first meeting in December.

For more info on everything, see the Revisiting SSI website.

🤝🏼 Join Us

Some of the people who have expressed interest in joining us for this effort are Kim Hamilton Duffy (DIF), Rodolfo Costa (University of Coimbra), Georgy Ishmaev (Inria), Vinay Vasanji (EF), Ian Grigg, and Philip Sheldrake. This includes a mixture of both critics and supporters of SSI, coming from a variety of backgrounds, from academia to technology. Where we are still weak is in participation from law and regulation.

You can email me directly and let me know you’d like to be involved, or sign up for an announcements-only #RevisitingSSI email list or alternatively join our Signal group, which we’ll be using to coordinate our initial calls.

If you or your organization wishes to demonstrate its support for the goals of Self-Sovereign Identity, I am seeking financial sponsors for this project. Contact me about different sponsorship opportunities, or you can directly support my work on these kinds of efforts via GitHub (via a one-time donation or ongoing monthly patronage) at https://github.com/sponsors/ChristopherA.

I look forward to collaborating with you!

Monday, 10. November 2025

FIDO Alliance

WinBuzzer: Microsoft Edge Now Syncs Passkeys Across Windows Devices, Bolstering Passwordless Push

Microsoft is rolling out a significant update to its Edge browser that allows users to save and sync passkeys across their Windows devices. Announced on November 3, the new feature integrates passkeys directly […]

Microsoft is rolling out a significant update to its Edge browser that allows users to save and sync passkeys across their Windows devices. Announced on November 3, the new feature integrates passkeys directly into the Microsoft Password Manager, starting with Edge version 142.

Addressing a key weakness in the company’s passwordless strategy, this move untethers passkeys from a single machine. By enabling cloud synchronization for these phishing-resistant credentials, Microsoft aims to make secure, password-free logins more practical for everyday use.

For now, the feature is limited to Windows desktops, with support for other platforms planned for the future.


Biometric Update: iProov certified for biometric deepfake protection with Ingenium IAD test

iProov’s biometric injection attack detection technology has passed an evaluation by Ingenium Biometrics to the Level 2 (High) standard set out in Europe’s CEN TS 18099. Ingenium carried out independent testing of […]

iProov’s biometric injection attack detection technology has passed an evaluation by Ingenium Biometrics to the Level 2 (High) standard set out in Europe’s CEN TS 18099.

Ingenium carried out independent testing of iProov’s Dynamic Liveness technology, which uses patented Flashmark signals to confirm a user’s real-time presence. The European standard is the only one established for defending against deepfakes and synthetic media, and will be used as the starter document for a global ISO/IEC standard.


WebProNews: WhatsApp Rolls Out Biometric Passkeys for Encrypted Chat Backups

WhatsApp has introduced passkey-encrypted chat backups using biometric authentication like Touch ID or Face ID, simplifying end-to-end encryption and replacing cumbersome 64-digit keys. This enhances security for cloud-stored messages amid […]

WhatsApp has introduced passkey-encrypted chat backups using biometric authentication like Touch ID or Face ID, simplifying end-to-end encryption and replacing cumbersome 64-digit keys. This enhances security for cloud-stored messages amid rising cyber threats, potentially setting a new standard for messaging apps and promoting broader privacy adoption.

Sunday, 09. November 2025

Digital Identity NZ

Kiwi Access Card goes digital in new Hospitality New Zealand and NEC partnership

The iconic Kiwi Access Card (formerly 18+ Card) is going digital. Hospitality New Zealand has partnered with NEC New Zealand, a leader in biometrics and digital identity, to deliver a secure new way for people to prove who they are – straight from their smartphone. The post Kiwi Access Card goes digital in new Hospitality New Zealand and NEC partnership appeared first on Digital Identity New Zea

WELLINGTON, New Zealand, 4 November 2025 – The iconic Kiwi Access Card (formerly 18+ Card) is going digital. Hospitality New Zealand has partnered with NEC New Zealand, a leader in biometrics and digital identity, to deliver a secure new way for people to prove who they are – straight from their smartphone.

This collaboration marks a major step in advancing New Zealand’s digital identity ecosystem and supports the Government’s goal of a trusted, privacy-preserving digital identity future. It comes as Minister for Digitising Government Judith Collins urges the public and private sectors to “move from discussion to delivery”.

For more than two decades, the Kiwi Access Card has been one of New Zealand’s most recognised forms of identification, helping hundreds of thousands of people prove their age and identity. The new digital Kiwi Access Credential will build on that legacy, offering a secure, convenient way for people to verify who they are directly from their smartphone while maintaining control of their personal information.

“Hospitality New Zealand has shown real leadership in modernising one of the country’s most trusted credentials,” said Steven Graham, Head of Identity Cloud Establishment ANZ at NEC New Zealand.

“This partnership reflects NEC’s global commitment to building secure and interoperable digital identity platforms, with our Identity Cloud capability being established across Australia and New Zealand. We’re proud to help shape the future of identity in New Zealand as a transformation partner alongside Hospitality NZ.”

Built on NEC’s Identity Cloud Platform, the new credential uses global verifiable credential standards and NEC’s expertise in digital identity and biometrics to deliver benefits, convenience and trust for individuals and businesses:

Reduces identity theft and fraud through cryptographically secure, tamper-resistant credentials.
Limits data sharing, ensuring only essential information is exchanged with consent.
Simplifies compliance with the Sale and Supply of Alcohol Act and other verification requirements.
Improves trust and efficiency across everyday age and identity checks.
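The “limits data sharing” benefit can be illustrated with a toy selective-disclosure scheme built on salted hash commitments, conceptually similar to mechanisms used in verifiable-credential formats such as SD-JWT. The Python sketch below is purely illustrative and is not NEC’s implementation: the attribute names and values are invented, and a real credential would also carry an issuer signature over the digests.

```python
import hashlib
import os

def commit(name: str, value: str, salt: bytes) -> str:
    """Salted hash commitment to a single credential attribute."""
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

# Issuer: commit to each attribute; the signed credential would carry
# only these digests (issuer signature omitted in this sketch).
attrs = {"name": "Alex", "date_of_birth": "2001-05-04", "over_18": "true"}
salts = {k: os.urandom(16) for k in attrs}
digests = {k: commit(k, v, salts[k]) for k, v in attrs.items()}

# Holder: disclose only the over_18 attribute (value plus its salt).
disclosed = ("over_18", attrs["over_18"], salts["over_18"])

# Verifier: recompute the digest; name and birth date stay hidden,
# because their salted digests reveal nothing without the salts.
name, value, salt = disclosed
print(commit(name, value, salt) == digests[name])  # True
```

The salt is what prevents a verifier from guessing low-entropy values (like “true”/“false”) by brute force against the undisclosed digests.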

Will Kim, representing Hospitality New Zealand, said: “The Kiwi Access Card has always been about trust and accessibility. Working with NEC allows us to carry that trust into the digital era, giving customers more privacy, helping businesses meet their obligations, and making verification faster and safer.”

This partnership reflects growing momentum behind New Zealand’s digital identity programme and the Government’s drive for trusted, privacy-preserving services.

NEC New Zealand is preparing to support Hospitality NZ through accreditation under the Digital Identity Services Trust Framework, ensuring the Digital Kiwi Access Credential meets the national standards for trusted and secure digital identity.

The post Kiwi Access Card goes digital in new Hospitality New Zealand and NEC partnership appeared first on Digital Identity New Zealand.

Saturday, 08. November 2025

Human Colossus Foundation

Switzerland’s E-Challenges — And What the World Can Learn

As a global leader in direct democracy, Switzerland faces a unique test: how to scale secure, private, and verifiable E-Collecting, E-Voting, national E-ID, patient health records across 26 cantons — without eroding public trust and losing digital sovereignty.

Geneva, November 8th 2025 — As a global leader in direct democracy, Switzerland faces a unique test: how to scale secure, private, and verifiable E-Collecting, E-Voting, a national E-ID, and patient health records across 26 cantons — without eroding public trust and losing digital sovereignty.

Democracies worldwide grapple with the same tensions between security, privacy, and trust in digital public services. Switzerland — as a neutral, innovative, business-friendly, small alpine nation — is uniquely positioned to experiment, refine, and export globally viable solutions.

On anonymity, linkability, and the need for continuous governance

“Voting Without Tracing: A Holistic Look at Privacy in Digital Democracy” by Michal Pietrus

This article, written by a core contributor to the Human Colossus Foundation’s open protocols, the Decentralised Key Management System (DKMS) and Overlays Capture Architecture (OCA), offers a solid blueprint:
✅ Cryptographic tools (Merkle trees, zero-knowledge proofs, end-to-end verifiability) can preserve ballot secrecy and enable audit trails — critical for Swiss federalism
✅ Governance must match technology: transparency, cantonal autonomy, and public oversight are non-negotiable. Digital governance must therefore become continuous, not fragmented into isolated digital-signature events.
✅ Switzerland’s experience is not an exception — it’s a prototype for democracies worldwide grappling with digital transformation
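One of these building blocks can be sketched concretely: a Merkle tree lets an auditor confirm that a specific ballot is included under a published root hash without exposing any other ballot. The Python sketch below is purely illustrative and is not drawn from the article or from the DKMS/OCA protocols; the ballot values and tree layout are invented for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaf hashes first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1               # the paired node at this level
        proof.append((level[sibling], index % 2 == 0))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

ballots = [b"ballot-1", b"ballot-2", b"ballot-3", b"ballot-4"]
levels = build_tree(ballots)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)        # prove inclusion of the third ballot
print(verify(b"ballot-3", proof, root))   # True: included, others stay hidden
```

Only the root need be published; each voter can check their own ballot’s inclusion with a proof of logarithmic size, which is what makes audit trails compatible with ballot secrecy.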

🌍 The global takeaway: In an era of rising digital authoritarianism and voter distrust, secure, privacy-preserving e-voting isn’t optional — it’s a democratic necessity. Switzerland has the chance to leverage its historical direct democracy tradition to lead the way in the digital era. The world should be watching.
🔗 Read the full analysis

🔗 Read HCF news post with more details on Human Colossus Foundation contribution to the E-Collecting program

Donate today to the Human Colossus Foundation
Contribute to the Dynamic Data Economy
Overlays Capture Architecture
Decentralised Key Management System

Permalink

Friday, 07. November 2025

FIDO Alliance

Biometric Update: New benchmarking tool shows passkeys boost conversion success by 30%

FIDO Alliance and Liminal collaborate on utilization snapshot The FIDO Alliance, in collaboration with digital identity consultancy Liminal, has unveiled the Passkey Index — a new benchmarking tool that tracks the adoption, […]

FIDO Alliance and Liminal collaborate on utilization snapshot

The FIDO Alliance, in collaboration with digital identity consultancy Liminal, has unveiled the Passkey Index — a new benchmarking tool that tracks the adoption, performance and impact of passkey authentication across leading online services.

Launched alongside Liminal’s Passkey Adoption Study 2025, the Index offers the most comprehensive view to date of how passkeys are reshaping digital authentication. “The data in the Passkey Index marks the first time we have been able to measure the actual utilization and performance of passkeys,” says Andrew Shikiar, CEO of the FIDO Alliance.

“The FIDO Alliance intends to grow this program over time as a benefit to service providers within our membership, a guideline for newer implementers and an industry benchmark to track ongoing growth of passkey utilization over time.”


Forbes: Cybersecurity Is A Digital Identity Problem And We Must Deal With It

Digital Identity Means Security One particular leaf of that nettle is authentication, and here I think we Brits can have some optimism. NCSC is working with the government and the […]
Digital Identity Means Security

One particular leaf of that nettle is authentication, and here I think we Brits can have some optimism. NCSC is working with the government and the FIDO Alliance on improving the adoption of “passkeys” across the public and private sectors. If you are not familiar with passkeys (which are already widely used), imagine you want to sign in to your Google Account on a new device. Instead of entering a password, a passkey allows you to log in to your account with a device you’ve already verified (e.g., your phone). You don’t need to remember a password and no-one else can log in as you because they don’t have your phone.

Thursday, 06. November 2025

FIDO Alliance

Biometric Update: Passkeys mature to occupy critical role in authentication for digital ID systems

The passkey tipping point may be fast approaching. As the anointed successor to passwords, passkeys are seeing increased support from huge global companies, improved data analysis and better resources. And, significantly from […]

The passkey tipping point may be fast approaching. As the anointed successor to passwords, passkeys are seeing increased support from huge global companies, improved data analysis and better resources. And, significantly from an industry standpoint, the FIDO Alliance appears to be on the verge of reorienting its priorities to encompass more work on account recovery and digital credentials – a sure sign that, even if passkeys do not deliver the fatal blow to passwords many have predicted, they are established enough for their primary defender to declare a kind of victory in its primary mission.


Mastercard Cybersecurity Blog: Reimagining online authentication to outfox AI-powered cyber scammers

The Mastercard Newsroom recently sat down with Andrew Shikiar, FIDO’s Executive Director and CEO, to learn how the FIDO’s Payments Working Group is helping bolster protection in a rapidly changing […]

The Mastercard Newsroom recently sat down with Andrew Shikiar, FIDO’s Executive Director and CEO, to learn how the FIDO’s Payments Working Group is helping bolster protection in a rapidly changing digital world.


Digital Identity NZ

Mahi Tahi: It takes a village to build an open, safe digital future

As we approach the business end of the year, the hard mahi is ramping up more than anything. The strength of the fabric of our DINZ community can be seen right across the ecosystem from the Reference Architecture working groups expertly facilitated by Christopher Goh, to the heavy lifting being undertaken by so many government and industry practitioners. The post Mahi Tahi: It takes a village to

Kia ora,

As we approach the business end of the year, the hard mahi is ramping up more than anything. The strength of the fabric of our DINZ community can be seen right across the ecosystem, from the Reference Architecture working groups expertly facilitated by Christopher Goh to the heavy lifting being undertaken by so many government and industry practitioners.

Reference Architecture Working Group progress

It is wonderful to see the Policy & Technical Reference Architecture Working Groups progressing so well:

Priority reference use cases voted on and confirmed
Mapped mDoc standard to the NZ regulatory framework
Sovereign namespace* proposals under consideration – paving the way for the benefits of RealMe without a centrally issued ID card or super credential
Ecosystem wide open for industry players and consortiums to emerge.

*A namespace is a unique digital “domain” that distinguishes identifiers and credentials within a system – similar to how an internet domain name (example.nz) separates one website from another. In self-sovereign identity (SSI), namespaces ensure that identifiers, schemas, and credentials are globally unique, resolvable, and trustworthy.
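As a rough illustration of how a namespace scopes identifiers, the snippet below splits a decentralised identifier (DID) into its method and namespace segments, per the W3C DID syntax of colon-separated parts. The DID shown is hypothetical, and real ecosystems define their own namespace rules.

```python
# Parse a DID to show how its method and namespace segments keep
# identifiers globally unique. The example DID is invented for illustration.
did = "did:web:example.nz:agency:licence-issuer"

scheme, method, *segments = did.split(":")
assert scheme == "did"                 # every DID starts with the "did" scheme
namespace = segments[0]                # root "domain" that scopes the identifier
path = segments[1:]                    # issuer-specific sub-identifiers

print(method)      # web
print(namespace)   # example.nz
print(path)        # ['agency', 'licence-issuer']
```

Because the namespace segment is resolvable (here, like a domain name), two issuers can use the same local names without their credentials ever colliding.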

Trusted Credentials Adoption (TCA) Group

The recently incarnated Trusted Credentials Adoption (TCA) Group foundations have now been established by the founding team:

Shared commitment to collaboration under principles of trust, transparency, and equity
Te Tiriti-based approach ensuring iwi involvement and Māori representation from the outset
Focus on priority sectors: payments fraud reduction, health, and education
Coordinated strategy for consistent market messaging and shared standards.

DINZ Strategy 2026: The Year of Digital Identity

It is estimated that more than 4 million credentials will be issued to New Zealanders in the next 12–24 months.

In 2026 DINZ will deliver a coordinated, sector-wide program built on structured engagement, clear communications, and measurable outcomes. 

The DINZ Executive Council has identified the following areas of work and will be prioritising based on available resources and member guidance and engagement.

Structured Industry Engagement: Sector-specific forums (finance, health, education, construction, travel, Māori data, public sector) driving co-design and standards alignment. Each will be framed as mini-ecosystem pilots with measurable adoption targets in line with interoperability standards (e.g., ISO 18013-5 mDL, W3C VC).

Capability Building: Customised training equips practitioners with the knowledge and skills necessary to navigate and leverage technologies such as decentralised identity, digital wallets, and verifiable credentials – DINZ Academy modules accredited under the DISTF capability framework supported by best practice modules.

Workshops and Seminars: Interactive sessions that delve into the latest trends, tools, and technologies in digital identity. Comprehensive guides, briefings, case studies, and reference materials to support ongoing learning.

Marketing & Adoption: Launch nationwide storytelling initiative showcasing digital-trust champions, iwi partnerships, and citizen success stories.

Communications & Reach: National campaign, accessible resources, and regional roadshows to grow public awareness and trust.

Expertise & Experience: Thought leadership in identity policy, trust architecture and cross-sector facilitation ensures efficient delivery.

Innovation Enablement: Integrated Ecosystem Innovation Hub with industry funded grants and pilot funding for emerging TrustTech solutions. DINZ will host an Ecosystem Innovation Hub to co-fund proofs of concept with DIA and industry. Each pilot will demonstrate measurable public-value outcomes (privacy protection, fraud reduction, access equity).

Inclusion & Access: Partnerships with Māori tech networks, SMEs, and training providers to extend digital trust capability nationwide.

DINZ Council election — voting opens Monday

Nominations for the new DINZ Council are now closed. Online voting will open next Monday, 10 November (please note only member primary contacts are eligible to vote). I could not be more grateful or excited about the quality of those nominated and I wish all well for what is shaping up to be a closely contested election. 

View the nominees here.

Key dates and upcoming events

10 November: List of nominees issued to Digital Identity voting members and electronic voting commences
13 November: Any proposed notices, motions, or remits to be advised to Digital Identity NZ
13 November: Trusted Credentials Adoption (TCA) Working Group Session 2, in person at Wynyard Quarter, Auckland
17 November: NZ vendor VC Solutions Showcase at NZTA – part of ISO’s international interoperability test event
25 November: Half-day Identity Workshop focused on open banking and financial services hosted by Xero
2 December: The last informal DINZ Coffee Chat with Andy Higgs for the year
4 December: DINZ Annual Meeting & Council election results
4 December: TCA Working Group Session 3, in person at Wynyard Quarter, Auckland
11 December: Save the date for our end-of-year celebration – those that do the mahi should get the treats after all!


Check your inbox for invites or visit the events page on the DINZ website.

Recent highlights in digital public infrastructure

There have been several highlights in the past month across the identity specific digital public infrastructure landscape:

Excellent whitepaper published by Payments NZ – Digital Identity in the Digital Economy
NEC NZ’s exciting announcement re the iconic Kiwi Access Card (formerly 18+ Card)
The OPC’s Biometric Processing Privacy Code creating new rules for biometric processing comes into effect this month (applies to any new collection and use of biometric information from 3 November 2025)
Reference Architecture Policy and Technical Working Group on track to deliver by Christmas
Trusted Credentials Adoption (TCA) Group established.

More big announcements are imminent, making for an action-packed end to the year.

The AWS NZ Region is now open. Build, scale and innovate locally.

The AWS Asia Pacific (New Zealand) Region is here, bringing game-changing opportunities to our shores. Three Availability Zones, single-digit millisecond latency, renewable energy and your data stays in Aotearoa. NEXTGEN, a trusted AWS distributor, together with AWS have hand-picked expert AWS partners to help with everything from AI tools and communications to security, cloud migration, contact centres and apps. Meet the partners, read success stories and book a free cloud assessment. 
Visit the hub → 

This is a sponsored member promotion for AWS/NEXTGEN. 

A warm welcome to all our new and returning members. Mahi tahi, when we work as one, the load is lighter and the journey richer. It’s up to each of us to keep the fabric of our community strong, connected, and vibrant.

Tihei mauri ora!

Andy Higgs
Executive Director
Digital Identity NZ

Read the full news here: Mahi Tahi: It takes a village to build an open, safe digital future

SUBSCRIBE FOR MORE

The post Mahi Tahi: It takes a village to build an open, safe digital future appeared first on Digital Identity New Zealand.

Tuesday, 04. November 2025

FIDO Alliance

WUSA: Why you should consider passkeys instead of passwords for online safety


Andrew Shikiar, Executive Director and CEO of the FIDO Alliance tells us why passkeys are superior to passwords for online safety. The simple step to protect yourself online is to upgrade to passkeys.

Monday, 03. November 2025

Digital ID for Canadians

DIACC AI Consultation Submission to the Federal Government


October 31, 2025 – Canada has the opportunity not only to develop world-class AI capabilities, but also to build an ecosystem where AI innovation and responsible deployment are enabled by a strong foundation of digital trust, identity, authentication, and interoperability. DIACC’s mission is to accelerate the adoption of digital trust by enabling privacy-respecting, secure, interoperable digital trust and identity verification services through the DIACC Pan-Canadian Trust Framework (PCTF).

In this submission, we outline how investments in trust infrastructure, standards and verification can help deliver four key outcomes: scale Canadian AI champions, attract investment, support adoption and foster responsible, efficient deployment of AI systems.

About DIACC

The Digital ID and Authentication Council of Canada (DIACC) is a non-profit public–private coalition created following the federal Task Force for the Payments System Review. DIACC’s mission is to accelerate the adoption of digital trust by enabling privacy-respecting, secure, and interoperable identity systems.

DIACC is the steward of the Pan-Canadian Trust Framework (PCTF)™ — a public and private sector, industry-developed, standards-based, technology-neutral framework designed to enable scalable, certifiable digital trust infrastructure that meets the needs of governments, businesses, and individuals.

The DIACC PCTF has been developed in collaboration with experts from federal, provincial, and territorial governments as well as industry and civil society. It supports verifiable credentials, authentication services, fraud prevention, and information integrity across the Canadian digital economy.

Scaling Canadian AI champions and attracting investment

A major barrier for Canadian AI firms is not solely algorithmic innovation, but the ability to build scalable, trusted solutions that can be easily integrated with government and industry systems — particularly in regulated sectors. To scale, Canadian AI companies must demonstrate trustworthiness, security, privacy compliance, identity/credential verification, and interoperability — all of which raise costs and complexity when the underlying infrastructure is fragmented or weak.

Further, investors increasingly look for ventures that not only have technical sophistication but also strong risk management, data provenance, identity assurance and governance frameworks;   Canada can differentiate itself by emphasizing trusted AI ecosystems.

Recommendations:

Recognize identity, authentication, verification and trust-framework services (e.g., the DIACC PCTF) as critical infrastructure to underpin secure and trustworthy AI ecosystem scaling — and include funding streams, procurement support and regulatory recognition accordingly.

Introduce targeted incentives (grants/tax credits) for Canadian AI firms that embed standards-based verifiable credentials, identity proofing and interoperability from day one — thereby lowering investor risk and improving export readiness.

Foster public-private collaborations where government platforms adopt standards-based digital credentials (for authentication, identity verification, data-sharing) and invite Canadian AI firms to build on those platforms — this creates domestic anchor opportunities and global reference cases.

Promote and fund initiatives that allow Canadian AI firms to export trust by aligning Canada’s trust-framework credentials with international equivalents (e.g. UK identity frameworks) so that Canadian-built AI solutions come with built-in identity/credential assurance for global markets.

Enabling adoption of AI across industry and government

Adoption by industry and government is facilitated when the infrastructure for authenticating, verifying identity, sharing data, and managing credentials is streamlined and standards-based. AI solutions deployed in real-world workflows often hinge on knowing who is interacting, what credentials they hold, which data sources are valid — not just the AI model itself.

Fragmentation in identity verification, digital credentials and interoperability across jurisdictions (federal/provincial/territorial) also increases friction, slows procurement and reduces the number of “ready” integration points for AI vendors.

Recommendations:

Deploy a reusable digital credential/single sign-on system for government services (federal, provincial, municipal) modelled on widely used private-sector login tools. This makes it easier for government agencies and vendors (including Canadian AI firms) to plug in.

Encourage government procurement frameworks to demand standards-based trust services (identity proofing, verifiable credentials) as part of AI solutions — thereby embedding adoption readiness from the procurement side.

Provide and consume standardized capability services offered by the public and private sectors (identity/credential verification, verifiable data sources, API hubs) that AI firms can access, respecting privacy and leveraging a consent-based framework, rather than each reinventing them — reducing cost and time-to-market.

Support industry-government collaborations in regulated sectors (e.g. health and finance) where trust and identity verification matter first — by creating pilot environments that leverage trustworthy identity and credentials as the foundation for AI deployment.

Building safe, reliable and trustworthy AI systems, and strengthening public trust

Public trust in AI is undermined when the authenticity of interactions, data and verified identities cannot be reliably determined — for example, synthetic identities, manipulated documents, fraud-enabled onboarding, and unverified credentials all impact trust and impede safe AI deployment.

Identity assurance, verifiable credentials and trustworthy provenance of data and interactions are vital to enable AI in environments where safety, ethics, regulation, and accountability matter (e.g. financial decisions, cross-border labour credentials).

A standards-based trust framework such as DIACC’s PCTF can support traceability, transparency and audit capability in AI workflows, making systems safer, more explainable, and more investable.

Recommendations:

Fund the adoption and certification of privacy-respecting, standards-based identity, verification and credential-issuance systems (e.g. the DIACC PCTF) across sectors that will use AI.

Recognize identity verification, credentialing and data provenance as core components of AI governance frameworks (not just “nice to have” add-ons), and include them in AI risk-assessment, certification and procurement guidance.

Invest in research and development of identity and credentialing tools that are specifically tailored for AI use-cases (e.g. verifying data source authenticity).

Building enabling infrastructure, including data, connectivity and skills

While data and connectivity are widely recognized as AI-enablers, equally critical is the infrastructure of trust, including identity frameworks, verifiable credentials, authentication services, and certification of trust services — without which data sharing, inter-jurisdictional collaboration, and large-scale deployment face bottlenecks.

Digital sovereignty is also critical. Canada must ensure that infrastructure (cloud, data centres, identity/trust services) aligns with domestic values, jurisdictional control and regulatory frameworks in order to attract both domestic and foreign investment that values provenance and security.

Recommendations:

Invest in Canadian-based trust infrastructure, including domestic cloud and data centres, specifically for identity/credential/trust-services, to support AI readiness, digital sovereignty and economic resilience (as previously recommended by DIACC).

Ensure that interoperability standards for identity, credentials and trust-services are integrated into AI infrastructure planning — enabling cross-sector and cross-jurisdiction data flows, credentials reuse, and reduced duplication of onboarding/verification.

Support development of shared digital identity and credential hubs, which can serve as infrastructure building blocks for AI-enabled systems, enabling smaller firms or remote/Indigenous communities to access AI infrastructure.

Link infrastructure investment to skills and operational readiness, and include training programs for identity/trust-service management, credential issuance and verification, and interoperable system design, ensuring the human infrastructure is aligned with the technical.

Conclusion

Scaling Canada’s AI champions, attracting investment, accelerating adoption, and building safe and trusted AI systems all rest on a foundation of digital trust, verifiable identity, credentialing and interoperability. By recognizing and investing in trust infrastructure as a core enabler alongside data and connectivity, Canada can create a differentiated and competitive AI ecosystem.

DIACC welcomes further collaboration with federal partners and key stakeholders to implement standards-based trust frameworks, support interoperable credentialing and enable Canada’s AI ecosystem to flourish on the global stage.

Thank you once again for the opportunity to provide this input.

Joni Brennan
President, DIACC


Human Colossus Foundation

Think Globally, Act Locally: HCF at the Intersection of Global Digital Governance and Local Implementation


Geneva, November 3rd 2025 — The participation of the Human Colossus Foundation in both international and Swiss events underlines the necessity of new globally accessible protocols for digital authenticity and integrity.

1. Digital Transformation: Global Forces, Local Realities

Digital transformation is not a policy choice — it is a technological tide reshaping governance, democracy, and civic participation worldwide. Its impact is simultaneously global in scale (AI, data flows, platform power) and local in consequence (voter access, municipal infrastructure, cultural trust).

Last week, the Human Colossus Foundation’s participation in both the UNDP Global Conference on “New Ways of Governing” (Track 1: Governing Data & AI) and the Swiss “E-Collecting Hackathon” reflects a strategic positioning: to bridge the global discourse on digital sovereignty with the grounded, technical work of implementing democratic innovation at the national and municipal level.

2. Global Engagement: Shaping the Framework

On Oct. 28/29, the Foundation attended the UNDP conference in Oslo, a forum organised by the UNDP Global Policy Center for Governance, where it engaged with global thought leaders, from Indigenous data sovereignty advocates to EU regulators and Global South innovators, to help define what democratic digital governance looks like in a multipolar world. Key themes aligned with HCF’s mission:

New Ways of Governing: Track 1 : Governing Data & AI

Sovereignty as plural: Not state vs. tech, but hybrid models combining community control, technical design, and regulatory oversight.

Innovation at scale: Identifying pathways to move from projects to systemic change — a core challenge for HCF’s global network.

Governance through an ecosystem architecture: How data flows across independent sovereign partners and interface design embed democratic values, or undermine them.

This global engagement ensures HCF contributes to — and is informed by — the emerging norms, frameworks, and political economies shaping digital governance worldwide.

3. Local Implementation: Trust Through Verifiability — “Trust But Verify” in Practice

Swiss SRF news - Saturday November 1st

On Oct. 31st and Nov. 1st in Bern, the Foundation brought its distinctive approach, coined Dynamic Data Economy (DDE), to the Swiss E-Collecting Hackathon: “Trust But Verify” — a design philosophy that ensures every participant can verify what concerns them, without needing to trust any single technological intermediary.

This is not just a slogan — it is an architectural principle applied consistently across scales:

🔹 Globally — In ecosystem design, DDE ensures that data flows, authentication, and consent mechanisms are transparent and auditable. No black boxes. No hidden intermediaries. Everyone — citizen, regulator, platform — can verify the authenticity and integrity of data flowing through the ecosystem.

HCF contributed a team dedicated to developing verification as the vector of trust.

🔹 Locally — In the Swiss e-collecting prototype, HCF advocated for paper/digital hybrid solutions where:

- Communes/cantons can integrate software tools into their systems to help validate eligibility and produce encrypted, authentic attestations without storing additional sensitive data long term.

- Committees, relying on the integrity of the communes’ encrypted attestations, can compile initiatives or referendums for the Federal Chancellery without accessing voter identity. A real-time dashboard enables committees to track progress on verified signatures.

- The Federal Chancellery can perform a second-level control through digital proofs, receipts, and open APIs to secured registries. The final decision on the acceptance of an initiative or referendum would be backed by an auditable process respecting the federal structure of Swiss democratic institutions. One problem we did not address during the hackathon is the essential ceremony of the initiative committee depositing the boxes of signature forms in Bern: transferring a set of cryptographic keys is less visual than a pile of boxes.
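
The hybrid flow above can be sketched in miniature. The following toy Python is purely illustrative and is not the hackathon prototype: the HMAC-based attestation, the shared commune key, and all names are assumptions made for brevity (a real design would use asymmetric signatures and privacy-preserving tokens).

```python
# Toy sketch of the commune -> committee -> Chancellery flow.
# Hypothetical names and keys; an HMAC stands in for real signatures.
import hashlib
import hmac

COMMUNE_KEY = b"commune-secret-key"  # assumption: one shared symmetric key

def issue_attestation(voter_id: str, initiative: str):
    """Commune step: after checking eligibility, emit (pseudonym, tag).
    The voter's identity never leaves the commune in clear form."""
    pseudonym = hashlib.sha256(f"{voter_id}|{initiative}".encode()).hexdigest()
    tag = hmac.new(COMMUNE_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return pseudonym, tag

def committee_count(attestations):
    """Committee step: tally unique pseudonyms without seeing identities."""
    return len({p for p, _ in attestations})

def chancellery_verify(attestations):
    """Second-level control: recheck every tag against the commune key."""
    return all(
        hmac.compare_digest(
            t, hmac.new(COMMUNE_KEY, p.encode(), hashlib.sha256).hexdigest()
        )
        for p, t in attestations
    )

sigs = [issue_attestation(v, "initiative-42") for v in ("alice", "bob", "alice")]
assert committee_count(sigs) == 2  # duplicate signatures collapse to one pseudonym
assert chancellery_verify(sigs)
```

In a real deployment the Chancellery would verify public-key signatures rather than share the commune’s secret, but the shape of the flow, validate locally, tally pseudonymously, audit centrally, is the same.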

We also dealt with accessibility together with representatives of the Incluthon Initiative, which makes digital products available to people with disabilities.

This is verifiable democracy: not just secure, not just private — but transparently accountable to all stakeholders. Trust must be earned, not assumed. The same underlying digital protocols that enable global data ecosystems also enable local voting systems.

Once the signature process is securely digitalised and harmonised with legacy systems, an enhanced form of participative democracy can emerge, with new possibilities unavailable in a solely paper-based solution. During the hackathon we discussed, for example, the possibilities that:

Citizens can verify their signature was counted.

Swiss citizens abroad can participate as if they were in Switzerland, with no time lag.

Vision-impaired persons are offered specific digital support with the same level of security, without adding complexity to the communes, initiative committee, or Chancellery processes.

The tradition of collecting signatures in the street is kept, and dedicated secure apps for collecting agencies are developed.

The system can be extended to cantonal petitions without creating a separate infrastructure.

Here is the “solution stack” we proposed to address the ten topics of the E-Collecting process.

For more information, connect to the E-Collecting website of the Swiss Chancellery for our report.

4. Synergy: From Global Discourse to Local Impact

The combination of these two events enables the Human Colossus Foundation to:

✅ Contribute to global vision — shaping how digital sovereignty, AI ethics, and participatory governance are defined — always through the lens of verifiability.

✅ Develop models locally — validating principles against real-world constraints in Switzerland’s federal, multilingual, and highly regulated context — with “Trust But Verify” as the anchor.

✅ Build bridges — between global policy, local governance, and technical implementation — creating feedback loops that make both levels stronger, more resilient, and more trustworthy.

This dual presence is not coincidental — it is strategic. Digital transformation cannot be governed from above or below alone. It requires actors who can operate across scales — and the Human Colossus Foundation is positioning itself as one of them, with a clear, consistent, and verifiable approach to building trust in digital systems.

Follow us through our different channels for more information.

Donate today to the Human Colossus Foundation
Contribute to the Dynamic Data Economy
Overlays Capture Architecture
Decentralised Key Management System

Permalink

Friday, 31. October 2025

FIDO Alliance

PC Mag: Passkey Adoption Sees Striking Progress, With One Obvious Leader


Things really have improved, according to a new Dashlane study, and yet we’re sure that many of the sites you use all the time have yet to get the memo about passkeys.

Dashlane’s latest report about passkeys doesn’t offer fresh insights about why adoption of this account-security upgrade remains so uneven, but it does draw out two selfish reasons for sites to deploy it: either they’re afraid of a sign-in snafu costing them a single sale, or they fear a compromised login will cost them a customer’s money and then all of that person’s future business. In fewer words: greed clarifies.

Read the Article


Member Report: The 2025 Dashlane Passkey Power 20

Why passkeys: The password problem persists, but the solution is accelerating

Despite years of warnings from security experts, passwords remain the Achilles’ heel of digital security. According to Dashlane data, the average person now manages 301 passwords across their personal and work accounts. Yet credential abuse remains the most common initial attack vector. For CISOs and IT leaders, the problem is clear: The authentication method designed to protect users has become their greatest liability.

“The FIDO Alliance’s own data shows that passkeys significantly reduce sign-in time and have a much higher success rate compared to other authentication methods, meaning customers are able to get to the checkout cart more easily.”

Andrew Shikiar, CEO and Executive Director, FIDO Alliance

Read the Report

Thursday, 30. October 2025

Oasis Open

OASIS Launches Initiative to Standardize Exposure Management Practices in Cybersecurity


GuidePoint Security, IBM, Tenable, and Industry Partners Collaborate to Establish Framework for Preventing, Assessing, and Resolving Technology Exposures

BOSTON, MA, 30 October 2025 — As cybersecurity organizations face increasingly complex technology footprints and evolving cyber threats, a unified approach to exposure management has never been more critical. OASIS Open, the global open source and standards organization, is launching the Open Exposure Management Framework (OEMF) Technical Committee (TC) to create a community-driven, standards-based framework to prevent, assess, and resolve exposures in organizational technology.

“Having focused on find-and-fix security for the last decade, I understand the importance of having specific guidance on managing technology exposure,” said Chris Peltz, GuidePoint Security and OEMF TC Convener. “I’m excited to be part of this group of stellar professionals building the Open Exposure Management Framework, which will deliver guidance on best practices and enable organizations to finally begin preventing exposure at scale.”

The OEMF TC will develop a comprehensive exposure management lifecycle and capability requirements that integrate with existing cybersecurity frameworks such as NIST, CIS, and Gartner. Its deliverables will include vendor-agnostic best practices, a maturity assessment model, and tactical implementation guidance to help organizations maximize their security investments.

The TC’s work will also address data inconsistencies across disparate exposure sources and bridge secure design practices with operational security activities. By establishing a functional lifecycle, mapping capability requirements to recognized frameworks, and defining an industry-accepted maturity scale, the framework will equip organizations with the tools to prevent, assess, and resolve technology exposures. These resources will be particularly valuable for larger enterprises, public entities, and organizations that design their own infrastructure and applications.

The OEMF TC welcomes contributions from cybersecurity professionals, security vendors, enterprise practitioners, and anyone committed to advancing exposure management practices. The first meeting is Friday, 31 October 2025. To learn more about how to get involved in this collaborative effort, contact join@oasis-open.org.

Support for the OEMF Technical Committee

GuidePoint Security
“GuidePoint Security is proud to contribute to the development of the Open Exposure Management Framework, helping define what effective Exposure Management looks like across the industry. This collaboration marks a key milestone in uniting the cybersecurity community around a common approach to reducing exposure and commitment to staying ahead of evolving threats.”
-Chris Peltz, Director, Strategy and Solutions Architecture at GuidePoint Security

Tenable
“Exposure management is a transformational mindset shift and strategic approach to how organizations measure and reduce cyber risk. Instead of reacting, exposure management enables organizations to get ahead of attackers by resolving issues before they can be exploited. This is why it’s so important that Tenable collaborates with cybersecurity luminaries to build an exposure management framework that empowers organizations to successfully implement exposure management practices and focus on what matters most.” 
-Eric Doerr, Chief Product Officer, Tenable

Additional Information
OEMF Project Charter
OEMF TC Homepage

Media Inquiries:
OASIS Open: communications@oasis-open.org

The post OASIS Launches Initiative to Standardize Exposure Management Practices in Cybersecurity appeared first on OASIS Open.

Wednesday, 29. October 2025

The Engine Room

Mapping responses to TFIPV across the Majority World


In July 2025 with support from Numun Fund, The Engine Room conducted a brief research study to better understand the actors, needs and key trends in responding to technology-facilitated gender based violence (TFGBV) and intimate partner violence (IPV).

The post Mapping responses to TFIPV across the Majority World appeared first on The Engine Room.


Next Level Supply Chain Podcast with GS1

Small Farms, Big Impact: Transforming Food Safety in Schools


A single contamination can have serious consequences for vulnerable populations, such as students. Traceability is essential to ensure food safety from the farm to the school cafeteria.

In this episode, Jim White, President and Co-Founder of ENSESO4Food, and Candice Bevis, Farm Operations Manager at Spartanburg County School District 6, explain how digital traceability simplifies FSMA 204 compliance and strengthens confidence in the food supply chain.

They discuss how affordable technology and GS1 standards help small farms operate with the same precision as large suppliers, connecting farms, processors, and cafeterias for a safer food system.

In this episode, you'll learn:

How to ensure accountability in food sourcing and delivery

Why simplicity and affordability matter for technology adoption

Ways schools are using visibility to improve food safety

Things to listen for:
(00:00) Introducing Next Level Supply Chain
(04:22) How the Trakkey partnership began
(07:59) Connecting farms, processors, and schools
(12:33) How digital tools simplify compliance
(18:03) Teaching students where their food comes from
(20:56) Making food safety simpler for farmers
(23:31) Reducing waste and improving efficiency
(26:32) Jim and Candice's favorite tech

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn

This episode is brought to you by: AccuGraphix

If you're interested in becoming or working with a GS1 US solution partner, please connect with us on LinkedIn or on our website.

Connect with the guests: Jim White on LinkedIn | Check out ENSESO4Food and Spartanburg County School District Six

Tuesday, 28. October 2025

Blockchain Commons

Musings of a Trust Architect: The Exodus Protocol


ABSTRACT: Digital infrastructure is built on sand due to its control by centralized entities, most of which are focused on profit over service. We need Exodus Protocol services that build infrastructure without centralization, ensuring its continuation into the far future. Bitcoin offers our prime example to date. Five design patterns suggest how to create similar services for coordination, collaboration, and identity.

It’s 2025. Digital infrastructure has become the heart of not just our economy, but our culture.

But it can be taken from us in a heartbeat.

Those of us in the decentralized space have warned against this future for a long time, but it first hit me in a truly visceral way a decade ago when I was teaching technology leadership at Bainbridge Graduate Institute. I supported my students with a powerful coordination system that tied together del.icio.us bookmarks, Google Reader, and Google Docs. My students could discover information through RSS feeds, collaboratively bookmark and annotate it, and then work with their peers using that shared data. It was an effective tool for both learning and cooperative action that was soon adopted by the whole school.

But then Yahoo sold del.icio.us and Google shut down Reader. Without warning, without a chance to migrate, our learning community’s entire digital infrastructure collapsed. A generation of learners was quietly deplatformed from the tools that had empowered them to think, share, synthesize and learn together.

By now, everyone probably has a story of digital infrastructural loss. How they lost their Google+ circles. How their internet radio turned off forever one day. How their digitally stored MP3s disappeared into the ether. It’s a pattern that’s encouraged by the perverse incentives of capitalism. A useful service becomes essential infrastructure. Companies move in to collect rent on the technology. Then, they exert their power by reducing features and increasing distractions. Eventually, it becomes unprofitable and they kill it. (Cory Doctorow calls this pattern enshittification.)

Which leads to the question that haunts me: how can we create digital infrastructure that can’t be taken from us?

Enter the Exodus

To resolve this problem, we need what I call Exodus Protocols. These are systems that free us from the control of external sources (like Google or Yahoo! or Sony) by creating infrastructure that doesn’t require infrastructure.

What in the world do I mean by that?

There’s actually a well-known Exodus Protocol: Bitcoin. It provides financial infrastructure, allowing users to transfer value among themselves, but it does so without enshrining permanent infrastructure or empowering centralized authorities.

Miners can come and go. Full nodes can exist as services, but you can also spin them up locally. Some type of network is important for miners to collect transactions and form them into blocks, but the average user doesn’t need that: they can create their own transactions air-gapped and transfer them using QR codes. It’s generally hard to censor Bitcoin, and it would be unthinkable for the entirety of it to disappear in any short amount of time.

Bitcoin demonstrated something profound: that fundamental capabilities can exist as mathematical rights rather than centralized privileges. When your ability to transact depends on a bank’s approval, it’s not a right, it’s permission. Bitcoin restored transaction as a right through autonomous infrastructure. That’s an Exodus Protocol.

Unfortunately, Bitcoin only creates an Exodus Protocol for a small (but important) corner of what we do on the internet: value transfer. It does have some cousins, such as IPFS for data storage, but there aren’t great Exodus Protocols for the vast majority of what we do within the digital sphere. We need more Exodus Protocols, to free us from dependency on centralized services, so that our carefully constructed infrastructures don’t suddenly disappear, as happened for my students at BGI. We need to empower coordination (whether it be for my students or a board of directors), collaboration (at a forum or on a shared artists’ whiteboard), and identity (to correct the missteps made by the self-sovereign identity movement).

Five Patterns for Creating Autonomous Infrastructure

An Exodus Protocol is only successful if it’s designed to actually empower through autonomous service. We don’t want to just create a new digital prison. To design for success requires five architectural principles that help to create the architecture of autonomy itself.

An Exodus Protocol must …

1. Operate Without External Dependencies

If something requires permission to operate, it’s not autonomous. If it stops working when a company fails or a government objects, it’s infrastructure built on sand.

We instead need truly independent architecture. This can be accomplished either through objects that are self-contained, with everything needed for operation either within the object or derivable through math (e.g., autonomous cryptographic objects such as Gordian Clubs); or through distributed operations without centralization, where any operator can be replaced.

₿ — Bitcoin’s approach: Bitcoin took the distributed approach, with validation, verification, and recording of value transfers done by hundreds of thousands of replaceable independent nodes. There’s no central server or authority.

2. Encode Rules in Mathematics, Not Policy

Policy means that someone else decides how a service works. They can make arbitrary or biased decisions. They can succumb to coercion, and they can censor.

In contrast, math doesn’t discriminate, doesn’t take sides, and doesn’t change its mind under pressure. Replacing policy rules with mathematical rules means relying on cryptographic constructs such as private keys and signatures. They make verification deterministic: the same inputs always produce the same outputs.

₿ — Bitcoin’s approach: With Bitcoin, the control of value is ultimately determined by who holds a private key. Sophisticated systems such as threshold signatures and secret sharding offer more nuanced control.
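The “secret sharding” mentioned above is typically built on threshold secret sharing. Here is a minimal sketch of Shamir’s scheme in Python, purely illustrative: the field size, function names, and share format are choices made for this example, not anything Bitcoin itself specifies.

```python
import secrets

# Toy Shamir secret sharing over a prime field. Illustrative only:
# real deployments use audited libraries and authenticated shares.

PRIME = 2**127 - 1  # Mersenne prime, large enough for a 16-byte secret


def _eval_poly(coeffs, x):
    """Evaluate a polynomial (coeffs[0] is the secret) at x mod PRIME."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc


def split(secret, threshold, num_shares):
    """Split an integer secret into num_shares points; any `threshold`
    of them suffice to reconstruct it, while fewer reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, num_shares + 1)]


def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```

Because any 3 of 5 shareholders can act together without the other two, control is expressed by math alone, with no administrator deciding who may participate.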

3. Make Constraints Load-Bearing

Constraints make systems less flexible. But that’s not necessarily a bad thing. We just need to ensure that those constraints serve dual purposes by also supporting a system’s autonomy.

We must be aware of what constraints mean: how they’re helpful and how they’re harmful. Then we must design constraints that create the autonomous infrastructure that we want. As an example: if a credential can’t expire, then it works forever. Similarly, if it can’t be revoked, then it offers perfect access to past content.

₿ — Bitcoin’s approach: Bitcoin offers a number of constraints that strengthen its autonomy largely by building upon mathematics rather than arbitrary policy. Transactions can’t be reversed, which means: 🟥 that you can’t walk back a mistake; but also that 🟢 your funds can’t be seized by fiat. Rule changes require consensus, which means: 🟥 that important updates can require months or years of coordination; but also that 🟢 your funds can’t be threatened by an arbitrary increase of supply.

4. Preserve Exit Through Portability

Centralized systems lock you in, which is the opposite of sovereignty. True autonomy requires not just the ability to leave, but the ability to take everything with you. Without the ability to walk away, consent collapses into coercion.

Escaping lock-in requires interoperability and open standards. Data and credentials must be portable across implementations without proprietary formats that trap users.

₿ — Bitcoin’s approach: Bitcoin keys work in any wallet. Standards for the use of seeds to generate HD keys and the use of wallet descriptors further this interoperability.
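As a concrete instance of that interoperability, BIP-32 specifies how every conforming wallet derives the same master key from the same seed: HMAC-SHA512 keyed with the string "Bitcoin seed". A minimal sketch of that first step (child-key derivation and BIP-39 mnemonic handling are omitted for brevity):

```python
import hmac
import hashlib


def master_key_from_seed(seed: bytes):
    """Derive a BIP-32 master private key and chain code from a seed.

    This standardized step is what lets any conforming wallet
    regenerate the same key tree from the same seed -- the basis of
    the portability described above.
    """
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    master_private_key, chain_code = digest[:32], digest[32:]
    return master_private_key, chain_code
```

Two different wallet implementations fed the same seed will compute identical keys, so a user can walk away from one wallet vendor and restore everything in another.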

5. Work Offline and Across Time

Infrastructure that requires connectivity can be denied connectivity. Infrastructure that requires specific platforms can be denied those platforms.

True autonomy works with whatever channels remain available when coercion attempts to deny others. It requires asynchronous operations, creating services that work during outages and across decades regardless of infrastructural changes.

₿ — Bitcoin’s approach: Bitcoin transactions can be signed offline and broadcast later. The protocol doesn’t care about internet connectivity for core operations.
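The sign-offline, broadcast-later flow can be sketched as a store-and-forward queue. This is illustrative only: the HMAC stands in for Bitcoin’s actual ECDSA/Schnorr signatures, and the transaction format and queue are hypothetical constructions for this example.

```python
import hmac
import hashlib
import json


def sign_offline(private_key: bytes, tx: dict) -> dict:
    """Attach a signature computed entirely offline (no network needed).

    The HMAC here is only a stand-in for a real digital signature.
    """
    payload = json.dumps(tx, sort_keys=True).encode()
    signed = dict(tx)
    signed["signature"] = hmac.new(private_key, payload, hashlib.sha256).hexdigest()
    return signed


class BroadcastQueue:
    """Holds signed transactions until a network channel is available."""

    def __init__(self):
        self.pending = []

    def enqueue(self, signed_tx: dict):
        self.pending.append(signed_tx)

    def flush(self, send):
        """Broadcast everything via the provided `send` callable,
        e.g. once connectivity returns."""
        while self.pending:
            send(self.pending.pop(0))
```

The signing device never needs connectivity; only the final broadcast step does, and it can happen hours or years later, over whatever channel remains available.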

Foundation, Not Fiat

Not every digital service needs to be an Exodus Protocol. In fact, there are definitely services where you want centralization. You want more vulnerable people to be able to recover their funds in case of fraud. You want parents to be able to act as guardians for their children.

But there are services that are irreplaceable and so would benefit from Exodus. These are digital services that store data, create identity, and manage assets that would be difficult to replace. And there are times when Exodus Protocols become more important than ever: when we’re under threat, under siege, or just struggling to survive.

We still want to offer the ease of access and usability of centralized services in those situations where they’re appropriate, but we want to build those centralized services upon a strong, unwavering foundation of Exodus. If the centralized services fail, there must still be foundations that cannot fall.

Exodus Technology

Early in the month, I introduced Gordian Clubs. They’re another example of the Exodus Protocols that I discuss in this article.

Here’s how Gordian Clubs, which are Autonomous Cryptographic Objects (ACOs) that can be used to coordinate (by sharing data) and collaborate (by updating data), fulfill the five patterns.

(This is a repeat of a list from the Gordian Clubs article.)

Gordian Clubs …

Operate Without External Dependencies. Everything you need is within the Gordian Club: data and permissions truly operate autonomously.
Encode Rules in Mathematics, Not Policy. Permissions are accessed through mathematical (cryptographic) constructs such as private keys or secret shares.
Make Constraints Load-Bearing. Members can’t be removed from a Gordian Club Edition, but that also means permissions can’t be unilaterally revoked. Gordian Clubs don’t have live interactivity, but that means they can’t be censored by a network.
Preserve Exit Through Portability. An ACO that can be freely passed around without network infrastructure is the definition of portability.
Work Offline and Across Time. Gordian Clubs are meant to be used offline; archival is a major use case, allowing access across a large span of time.

There are numerous other technologies that can enable Exodus Protocols. Many of them are puzzle pieces that could be adopted into larger scale solutions. These include:

QR Codes. Data that can be printed or displayed on air-gapped devices and that can be automatically read into other systems.
Bluetooth. Another method for transmitting data when networks are down.
Threshold Signatures. A method of coordination (signing) that typically does not require live interactivity.
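As a sketch of how such channel-agnostic transfer can work, a payload can be split into self-describing frames, each carrying its index, the total count, and a checksum, so a receiver can reassemble and verify entirely offline. The frame format here is hypothetical, not an actual QR or UR encoding.

```python
import hashlib

CHUNK_SIZE = 128  # bytes per frame; real QR capacity varies by version


def make_chunks(data: bytes):
    """Split data into numbered, checksummed frames for air-gapped transfer."""
    total = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    digest = hashlib.sha256(data).hexdigest()[:8]  # short whole-payload checksum
    return [
        f"{i + 1}/{total}/{digest}/{data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE].hex()}"
        for i in range(total)
    ]


def reassemble(chunks):
    """Reorder received frames, rebuild the payload, and verify the checksum."""
    parsed = []
    for chunk in chunks:
        index, _total, digest, body = chunk.split("/")
        parsed.append((int(index), digest, bytes.fromhex(body)))
    parsed.sort()  # frames may be scanned or received in any order
    data = b"".join(body for _, _, body in parsed)
    if hashlib.sha256(data).hexdigest()[:8] != parsed[0][1]:
        raise ValueError("checksum mismatch: transfer incomplete or corrupted")
    return data
```

Nothing in this flow touches a network: the frames could travel as printed QR codes, Bluetooth packets, or files on removable media.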

Though we haven’t previously used the term, Blockchain Commons technologies are often built as Exodus Protocols:

Animated QRs. Animation extends QRs to allow the automated transmission and reading of larger quantities of data.
Gordian Envelope. Envelopes are built to allow selective disclosure of information without a network. They’re the foundation of Gordian Clubs.
XID. Also built atop Gordian Envelope, XIDs (eXtensible IDentifiers) are decentralized identifiers that are truly autonomous, avoiding the failures of the SSI ecosystem.

From Five Patterns to Six Inversions

The threat of our digital infrastructure being taken away from us is part of a larger issue that I call “The Six Inversions.” It’s the systematic transformation of rights into revokable privileges in the digital world.

In the physical world, we have property rights that assure us of access to infrastructure, but in the digital world, centralized entities can take away that infrastructure at any time for any reason. We have no rights to property, to justice, to transparency, or to exit. Our contractual power is neutered, and our identity is sold. As a result, digital infrastructure is unstable, which is why we need to create infrastructure without infrastructure, in the form of Exodus Protocols.

I’ll write more about the Six Inversions in the future, but for the moment I wanted to point it out as an underlying philosophy for why digital infrastructure is unreliable and must be reimagined.

It’s because we don’t have the rights that we expect from the physical world.

Conclusion

As I said, Exodus Protocols aren’t for everyone or everything, but there are situations and services where they’re critical.

When we’ve identified these cases, we can then deploy Exodus Protocol patterns: operating without dependencies, encoding rules in mathematics, making constraints load-bearing, preserving exit through portability, and working offline across time. This creates a blueprint for infrastructure that holds when everything else fails.

What is your critical infrastructure? What have you spent years building that would hurt to lose? What infrastructure’s loss would damage your ability to identify yourself, to communicate, to cooperate, or to collaborate? I’d love to hear what they are and to work with you to design the next generation of autonomous infrastructure.

Bitcoin is just the beginning.

Thursday, 23. October 2025

DIF Blog

Steering Committee Election 2025: Candidate Statements


Quick links: Sam Curren (Indicio, US) Jan-Christoph Ebersbach (Identinet, Germany) Rouven Heck (Independent, US) Matthew McKinney (ArcBlock, US) Doug Rice (Hospitality Technology Network, US) Markus Sabadello (DanubeTech, Austria) Eric Scouten (Adobe, US)

Sam Curren (Indicio, US)
Interest and qualifications

Sam Curren has been involved with the DIF since the organization's very origins. With relevant work both in and out of the digital identity space, Sam has been involved in a number of efforts, including DIDComm and its transition from the HL Aries WG into the DIF and subsequent v2 release of the spec. Sam has an MS in Computer Science.

Answers to Questions

What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

Choosing an org path and main goals (either within or without LF) is a main focus. Additional challenges involve balancing promotion of newer technologies while making strong statements about technology and its effects. This requires the balance of a mostly-unopinionated org open to development of new technologies and approaches, while making strong statements and issuing strong guidance on particular issues. This has been a past benefit, but is difficult to navigate. 

If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

I believe we as an org will need to choose what items will flourish under a fully-LF model, and focus on those items. We should seek to become the arm of the LF (or perhaps LFDT) focused on specific topics well aligned with our larger org(s). This will ease the pain of not being able to support a full ED for DIF the way we have in the past.

If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

Our new freedoms will require willing donors, and our advocacy will need to align with our best sources of income. Staying close to our sponsoring orgs will need to become a high priority. This move can allow us to make very strong statements, but will also need to be carefully navigated to maintain our ability to conduct pre-standards-body efforts.

(back to quick links)

Jan-Christoph Ebersbach (Identinet, Germany)
1. What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

I believe DIF's biggest challenge over the next four quarters centers on accelerating real-world adoption of Decentralized Identity. While digital identity has emerged as a game changer—particularly with governments around the world recognizing its transformative potential—and decentralized approaches offer significant advantages over traditional digital identity schemes, we face a critical gap between our technical ideas and practical implementations. Real-world adoption is the cornerstone of success for both DIF and its members, and bridging this gap must be our primary focus.

DIF currently supports adoption through three key channels: developing specifications, providing forums for members to exchange ideas and collaborate through working groups and labs, and maintaining an active presence in conversations with relevant third-party entities. I believe the organization should continue to excel in all three areas. Specifically, I would advocate for greater standardization of our specifications accompanied by reference implementations that make them truly usable for practitioners. By delivering high quality tools alongside our technical standards, we can remove friction from the adoption process and demonstrate the practical value of decentralized identity solutions to organizations considering implementation. This also entails archiving unmaintained and incomplete works and ensuring proper guidance for contributions to strengthen our public image and attract new members.

(back to quick links)

Rouven Heck (Independent)
Interest and qualifications

My background is in computer science and banking. I spent eight years at ConsenSys, where I founded uPort and later served as the identity lead. I have been a DIF board member since its inception (as one of its founders) and served as executive director for several years.
Currently, I work as an independent advisor and board member for a KYC identity-related project while actively conducting research and experiments in AI.
1b) Statement: 
As an independent advisor, I currently play a less active role in the identity ecosystem. My main contribution to the Steering Committee and future Executive Director will be my extensive background in DIF and its collaboration with the Linux Foundation and other organizations.

Questions: 
What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

DIF holds immense potential, but with the industry evolving and key players shifting their focus or exiting, it’s crucial for DIF to solidify its role within the broader ecosystem to maintain its influence. Identity remains highly relevant across many facets of the digital world—empowering individuals, combating fake content, and more—especially as the web becomes increasingly agent-driven.
To achieve this, it’s essential to pinpoint the industry's major pain points where decentralized identity concepts and technologies can offer superior solutions. Additionally, we must identify partners and members who share this mission, foster cross-industry collaboration, and position DIF as the central hub for these efforts.

If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

The key question is: how will the non-commercial ideals and mission of the DIF endure within a market-driven organization? Large corporations heavily influence the Linux Foundation, and their interests often diverge from the DIF's mission. Maintaining a strong and independent voice will be the greatest challenge, especially if management prioritizes the demands of their largest financial contributors. Ideally, the DIF can sustain its independent funding and secure donations or contributions from non-profit organizations that align with its vision. 

If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

In addition to membership fees, seek independent funding from mission-aligned non-profits. As an independent organization, DIF gains greater flexibility; however, its governance and leadership must ensure that DIF continues to operate effectively and strategically within the industry, as it may risk losing some of the credibility it gained from its association with the Linux Foundation.

(back to quick links)

Matthew McKinney (ArcBlock, US)


1. What is DIF’s biggest challenge in the next 4 quarters, and how can the organization best help its members rise to that challenge?

Our biggest challenge is converting industry awareness into adoption. While our specs are strong, I believe that potential adopters still face too much friction, and the business value isn't always clear.

To solve this, I will drive a single, shared plan with the Steering Committee and working group chairs focused on three things:

Treat our standards like products. For each priority spec, we will ship an "adoption kit" containing a live sandbox, developer libraries, and clear tutorials. Our goal is to make it possible for a developer to issue their first credential in a day and complete a verifiable action within a week.
Listen to our members and act. We'll run at least quarterly surveys and monthly office hours to identify the top blockers to adoption. We will then reserve roadmap space to fix those issues and publicly report on our progress.
Prove the business value. We will amplify member success stories through case studies that focus on quantifiable ROI: time saved, fraud reduced, and compliance simplified. We will adopt vibe marketing playbooks to ensure our stories are seen in the right locations at the right times. Demonstrating ROI in this investment is core to this.

2. If DIF were to integrate more closely with the Linux Foundation, what should DIF focus on?

Our focus should be to leverage the LF's scale while protecting our speed and flexibility. In collaboration with the Steering Committee and LF counterparts, we would:

Expand our go-to-market reach. We would plug into LF’s marketing, events, and developer relations programs. This would put our members and their solutions in front of a global audience of enterprise buyers and developers, dramatically increasing top-of-funnel awareness. I would also engage our members to identify other members who can help facilitate these activities.
Integrate with the enterprise stack. We would partner with other major LF projects in security, cloud, and AI. The goal is to position DIF as the default, built-in identity layer for C-suite priorities like Zero Trust architecture and supply-chain integrity.
Deliver clear business solutions. We would co-publish reference architectures and live demos that map our technology directly to the challenges faced by CISOs and CTOs, making it easier for them to adopt our work.

Success wouldn't be measured by page views, but by enterprise trials, certified implementations, and real-world deployments.

3. If DIF operated as a fully independent organization, what should be its focus for the next 4 quarters?

As an independent organization, we would need to be relentlessly member-centric and commercially sustainable. This requires a clear, disciplined plan co-owned by the Steering Committee and working group leads.

Focus exclusively on member value. We would use surveys and direct feedback to validate the top 2-3 "jobs-to-be-done" for our members and dedicate our resources to solving them. We would build only what our members need to succeed in production. More value will drive more "skin in the game" participation and membership.
Let our products drive our growth. Our friction-free adoption kits would become our primary marketing tool. We would supplement this with published ROI stories and calculators that members can take directly to their budget owners.
Create sustainable revenue streams. We would introduce new value-add programs that also ensure our long-term health, such as paid conformance and certification, a public directory for verified implementers and auditors, and premium support tiers.

We must be lean, transparent, and completely aligned with delivering measurable outcomes for the organizations that fund our mission.

(back to quick links)

Doug Rice (Hospitality Technology Network, US)
Interest and qualifications

I have been an active participant and supporter of decentralized identity and DIF since the formation of the Hospitality and Travel SIG in 2020 or 2021. For more than two years I have led twice-weekly meetings of the self-attested hospitality travel profile effort, which evolved in spring 2025 to become the Hospitality & Travel Working Group. I was the initial chair of that group and now serve as one of the two co-chairs.

I spent most of my career in senior roles in the hospitality and travel industry. Identity is a critical issue for hospitality and travel and will become more so in the AI era, and I have been a vocal supporter of DIF’s role in addressing the issue. In addition, I bring significant nonprofit management experience, having founded, bootstrapped, and for 13 years led (initially as ED, then as CEO) a highly successful trade association in the hospitality tech space (4500 members globally, $2.5 million budget, 10 staff when I retired in 2015). In that role I got to know most of the senior executives in the industry around the world (hotel and tech vendor), and still have a strong network that I can and do tap into to publicize DIF’s efforts.

I currently sit on several boards and advisory boards for vendors within the hotel tech industry, many of which can benefit (at least in the longer term) from DIF’s and other SSI efforts. I have written extensively on a wide range of topics, with a style designed to explain technical solutions to a business or semi-technical audience, and have spoken hundreds of times at industry events on a wide variety of topics (including self-sovereign identity).

While I have technical pedigrees from the distant past and still understand most relevant technical concepts, I have been a business executive for the past 30 years and will not pretend to have current technical skills. But having spent my professional career at the intersection of tech-speak and business-speak, I am an exceptionally competent translator between the two languages, and have had continued success in explaining complex technical concepts to business leaders so they can evaluate them thoughtfully and meaningfully.

Specific questions

1. What do you think DIF's biggest challenges are in the next 4 quarters, and how can the organization best help its members rise to that challenge?

I don’t have exposure to everything DIF does so I’m sure my list is not complete or properly prioritized, but based on what I have seen, I would say:

Finding more (and more effective) ways to communicate the value proposition of our work, and the business opportunities it creates, in ways that are clearer to nontechnical or semitechnical business audiences.
Finding ways to increase engagement, membership revenue, and other sources of revenue.
Effectively addressing the perpetual open-source standards issue of convincing users to pay for something that they can get for free.
Confirming or evolving the current legal and organizational structure to ensure alignment with longer-term objectives. This may require ensuring that the objectives themselves align appropriately with the needs of members; even if they were fully aligned in the past, this needs to be continually evaluated.

2. If DIF were to integrate more closely to the rest of the Linux Foundation and assume a more traditional LF membership and staffing structure, what should DIF focus on?

I can’t comment on this directly as I have had very limited exposure to the LF membership and staffing structure. In concept, I believe that nonprofit organizations need to carefully consider each of their operating functions and how best to achieve it, consistent with their mission. If key functions (marketing/communications, human resources, operations, legal, finance/accounting, membership, etc.) can be done more effectively in an umbrella operation, they should be. If on the other hand nuances of the organization’s objectives, membership, community, culture, industry structure, technology needs, or other issues mean that some functions are better run independently, then that choice should prevail.

There is no single answer; when I ran a nonprofit we were always evaluating the options even though we started as, and remained, independent throughout my tenure as CEO. But that was based on our particular mission, industry structure, and level of maturity. Each organization is different and changes over time.

3. If DIF left the Linux Foundation and bootstrapped as a completely independent organization, what should be its focus for the next 4 quarters, to complement new freedoms and an altered sustainability model?

The first focus has to be on the sustainability of the financial model: initial funding (or sources thereof) and ongoing operations. Nothing succeeds if you run out of money. Having bootstrapped an organization from $0 revenue and grown it to $2.5 million, I understand this challenge intimately.

The secondary focus needs to be to define DIF’s position within the world of standards organizations. What are the sectors where DIF can win, where should we partner with others that are better positioned, what should we abandon? With limited volunteer resources, it’s critical to ensure that they are directed toward outcomes with the highest probability of success and adoption – and commercial success for contributors.

Within this, the ongoing relationship with LF matters. Linux and W3C are considered the gold standard for open-source ecosystems, and the migration of DIF efforts into W3C standards can amplify the visibility, credibility, and adoption of our efforts. There may be reasons for a divorce, but it needs to be amicable if DIF wants to continue to leverage LF’s credibility to spur adoption of its work.

(back to quick links)

Markus Sabadello (DanubeTech, Austria)

I think the biggest challenge for the future has to do with the question that I usually also asked during the recent series of interviews with Executive Director candidates. This question is how DIF can find the right balance between maintaining and growing its membership, and keeping its traditional open, un-opinionated, grassroots culture. The first objective may require more direct answers and more concrete technological choices to match the real-life needs of governments and corporations, while the second objective may sometimes stand in the way of that. This balance could of course be changed at some point, if we as DIF decide this together. In other words, there shouldn't be any dogmatic, religious rules about DIF's orientation. DIF members are very diverse when it comes to their motivations for participating, and this needs to be taken into account. In terms of technical work, since everybody is talking about AI now, we should clearly make sure that we are a well-known actor when it comes to decentralized identity topics within the AI field. But I think we already have strong members and contributors in this field.

If DIF were to integrate more closely with the rest of the Linux Foundation, I think that while there will be changes in operational, financial, etc. aspects, DIF wouldn't necessarily have to change much content-wise. It might be smart however for DIF to align some of its Working Groups and Work Items more closely with other LF projects such as ToIP, Open Wallet Foundation, to reduce overlap and market confusion. We could potentially propose to move some Work Items from other LF projects into DIF, if we feel they would fit better (e.g. DID method specifications).

If DIF became bootstrapped as a completely independent organization, we would initially be busy for a while with adjusting, migrating, etc. the various infrastructure and processes. But I think such a step would also bring a certain new "freshness" to DIF, raise curiosity, and maybe new attention from actors and communities that we didn't previously expect. Content-wise, I feel like it would give us even more freedom, and a more independent perception. We should then focus on a really good website, cleaned up repositories, documentation, videos, etc., in order to make it easy for newcomers to feel comfortable.

(back to quick links)

Eric Scouten (Adobe, US)
Interest and qualifications

I'm excited to stand for election for DIF's Steering Committee. Many of you know me through my role as Identity Standards Architect at Adobe. In this role, I help bridge the gap between content creators and their audiences. It is far too easy in this era to tell false stories about who created content and to use misinformation and disinformation to confuse and mislead.

The work of the content provenance ecosystem aims to provide more transparency about who is creating and distributing digital media, and to make it easier for authentically created content to stand apart from content that intends to mislead. A key part of this ecosystem is the use of individual and organizational identity credentials – tools that allow creators, publishers, and distributors to prove who they are and to establish trustworthy connections with their audiences. By combining provenance metadata with verifiable identity, we can build a stronger foundation for trust across the digital media landscape.

I’ve been honored to help lead this effort as co-chair of the Creator Assertions Working Group (CAWG) and to contribute to real-world adoption by implementing CAWG standards in open-source work sponsored by Adobe.

Earlier this year, CAWG became part of DIF, and the collaboration has already benefitted greatly from the expertise and feedback of the broader DIF membership. This partnership has strengthened CAWG’s work and, I believe, represents the kind of cross-community engagement that makes DIF so valuable.

I am eager to support DIF and to help ensure that DIF continues to be a vibrant and trusted home for innovation in decentralized identity and related ecosystems.

Response to questions

I haven't yet formed a position on whether DIF should align more deeply with the Linux Foundation, go it alone, or maintain status quo. I could make arguments for any of these paths and of course there are many factors that will go into making a well-considered choice. Top of mind for me is thinking through financial viability for DIF and its members – in other words, how do we encourage enough members to sign up for paid memberships to pay our staff and our bills, traded against the need to make participation appealing and feasible for a wide variety of companies, non-profits, and individual members.

(back to quick links)


FIDO Alliance

FIDO Webinar: Designing Passkeys for Everyone: Making Strong Authentication Simple at Scale

Attendees joined this webcast to hear members of the FIDO Alliance’s UX Working Group explore the critical UX considerations in designing and deploying passkeys at scale, from initial user onboarding to seamless cross-device synchronization. 

Speakers from Google, Microsoft and HID discussed how to address the challenges of simplifying complex security concepts for everyday users, and gain valuable insights into the future of authentication. 

Speakers shared insights about the key UX decisions, user research findings, and design strategies that are shaping the adoption of passkeys, and how the FIDO Alliance is working to make online security both powerful and effortless.

The Design Guidelines for Passkey Creation and Sign-ins are available at https://www.passkeycentral.org/design-guidelines/

Speakers included:

James Hwang, Microsoft
Mitchell Galavan, Google
Adrian Castillo, HID

Wednesday, 22. October 2025

EdgeSecure

EdgeCon Autumn 2025

AI in Action: Real-World Applications and Outcomes of the New Higher Education Paradigm

On October 9, 2025, EdgeCon Autumn, hosted in partnership with Rider University, brought together higher education technology leaders and professionals from across the region for a day dedicated to accelerating institutional modernization. From cybersecurity and cloud strategy to campus networks and AI-driven student support, the event offered deep dives into the most pressing challenges and opportunities facing colleges and universities today. Through an engaging keynote panel and a full slate of breakout sessions, attendees explored emerging technologies, exchanged actionable insights, and built meaningful connections with peers and industry-leading vendors committed to driving transformation across the higher ed landscape.

Responsible Innovation in the Age of AI

As artificial intelligence and emerging technologies rapidly transform our world, innovation must evolve beyond efficiency and novelty to reflect deeper human, ethical, and environmental priorities. Among the morning’s breakout sessions was Designing for the Whole: A Multidimensional Framework for Responsible Innovation in the Age of AI presented by Michael Edmondson, Associate Provost, New Jersey Institute of Technology (NJIT). He introduced a multidimensional framework for responsible innovation organized around four core domains: Performance & Design, Creative & Cognitive Dimensions, Human-Centered Values, and Ethical & Governance Principles. Each dimension includes four key attributes—from Functionality and Originality to Empathy and Integrity—that collectively offer a holistic model for evaluating and guiding innovation in the AI era.

Protecting your Data and Reducing Institutional Risk

The limitations of legacy on-premise ERP systems are increasingly evident as cybersecurity threats grow and data regulations evolve. In Protecting your Data and Reducing Institutional Risk: SaaS ERP vs. On-Premise System, Stephanie Druckenmiller, Executive Director, Enterprise Technologies, Northampton Community College, and Bryan McGowan, Workday Principal Enterprise Architect, Workday, explored how shifting to a modern SaaS ERP like Workday can strengthen data protection, reduce institutional risk, and ensure long-term compliance. Druckenmiller and McGowan compared SaaS and on-premise systems in terms of governance, cybersecurity, and regulatory alignment, to show how a unified cloud platform enables real-time visibility, audit readiness, and consistent policy enforcement.

The session also highlighted how AI and automation built into SaaS ERPs proactively detect and mitigate risks—capabilities often lacking in older systems. Attendees learned how Workday supports institutional resilience through faster recovery, ongoing updates, and simplified compliance with emerging legal standards, and gained practical strategies for protecting data and managing risk in the years ahead.

“Very informative. The vendors were all great”

– Cherri Green
Procurement Coordinator
Princeton Theological Seminary

Using AI to Improve Data Accessibility

Bharathwaj Vijayakumar, Assistant Vice President, and Samyukta Alapati, Associate Director from Rowan University’s Office of Institutional Research and Analytics, shared insights into one of their key initiatives: an AI-powered tool designed to provide faculty, staff, and administrators with faster, easier access to real-time institutional data, with no coding or complex reporting required. Rowan’s users can ask questions like, “How many students are enrolled in a specific program this term?” or “Which majors are growing the fastest?” and receive immediate, accurate answers. The goal is to eliminate technical barriers and put actionable data directly into the hands of those who need it for advising, planning, and decision-making.

While the tool runs on Python, ThoughtSpot, and web technologies behind the scenes, the user experience is designed with simplicity and usability in mind. During their session, attendees experienced live demonstrations and left with practical strategies for improving data accessibility and increasing operational efficiency within their own institutions.

Evolving Toward an AI-Enabled Data Ecosystem

For institutions aiming to keep pace with the demands of digital transformation, modernizing fragmented data systems is a critical first step. In From Data Chaos to Clarity: Evolving Toward an AI-Enabled Data Ecosystem, Randy Vollen, Director of Data & Business Intelligence, Miami University, and Jon Fairchild, Director, Cloud & Infrastructure, CBTS, shared insights from a recent data modernization initiative focused on building a cloud-first infrastructure, creating scalable reporting environments, and preparing for AI-driven use cases.

The presentation discussed the non-exclusive implementation approach that used commercially available platforms to support data integration across enterprise systems, including HR and financial systems. This strategy led to improved internal data coordination, more consistent access to analytics, and a solid foundation for the responsible adoption of AI and automation technologies. Their experience offered attendees a clear blueprint for driving data transformation across complex institutional landscapes and lessons learned from integrating enterprise platforms to streamline analytics.

"Appreciate these opportunities to gather and share knowledge.”

– Jeff Berliner
Chief Information Officer
Institute for Advanced Study

Cybersecurity Maturity Model Certification Framework

Bobby Rogers, Jr., Virtual Chief Information Security Officer, Edge, shared a practical, leader-focused overview of the Cybersecurity Maturity Model Certification (CMMC) framework, explaining why it matters even beyond Department of Defense-funded projects, and what higher education leaders need to do to prepare. Featuring real-world case studies, this presentation highlighted the actual risks of non-compliance and the chance to take the lead with Edge’s scalable cybersecurity solutions.

Attendees reviewed real-world examples of costly non-compliance, gained clarity on the requirements of CMMC 2.0 and its alignment with frameworks like GLBA and NIST 800-171, and explored how Edge supports institutions in navigating challenges unique to higher ed environments. The session concluded with an actionable roadmap to help campuses assess their current posture and begin preparing for future compliance requirements.

Designing Digital Learning Environments that Are Accessible, Equitable, and Sustainable

In response to the federal mandate that all public institutions comply with revised Title II of the Americans with Disabilities Act by April 2026, The College of New Jersey (TCNJ) has launched a coordinated initiative to improve the accessibility of digital course materials and online environments. TCNJ’s Judi Cook, Executive Director, Center for Excellence in Teaching and Learning; Ellen Farr, Director of Online Learning; and Mel Katz, Accommodations Support Specialist for Curriculum and Assessment, led the breakout session Beyond Compliance: Designing Digital Learning Environments that Are Accessible, Equitable, and Sustainable.

Rather than approaching compliance as a legal checkbox, TCNJ has framed the work to fundamentally improve student and faculty experiences through inclusive design, transparency, and collaboration. This presentation shared a case study in progress, tracing their institutional journey from grassroots collaboration and capacity-building to structured, strategic initiatives. The session also highlighted sustainable strategies for advancing accessibility and faculty development through systemic change and the importance of approaching accessibility not as a project with an endpoint, but as a continual part of the digital transformation of teaching and learning.

A Proactive Approach to Student Success

An expert panel from the College of Health Care Professions led by David Bent, Vice President, Digital Services, Online; Joshua Mouton, CHCP BI/Developer; and moderator, Ross Marino, Account Executive, Proactive AI Agent Specialist, NiCE, shared how the organization drove conversions and improved student outcomes with Proactive AI Agent. Attendees gained an inside scoop on their approach, including details of their initial build, guardrails, and how they’re continuously improving journeys with data-driven enhancements. They also highlighted how they used this innovative technology to not only create excellent student experiences but find opportunities for synergy within their organization. 

Empowering Decision-Making and Driving Efficiency with Tableau Online

In the dynamic environment of higher education, data-driven decision-making is not a luxury; it's a necessity. Data in Action: Empowering Decision-Making and Driving Efficiency with Tableau Online led by Community College of Philadelphia’s Moe Rahman, AVP/CIO, and Laura Temple, Associate Director, explored how their community college leveraged Tableau to transform raw institutional data into interactive, insightful dashboards across key business areas including enrollment management, finance, student services, and academic affairs. By centralizing data visualization and analysis, they were able to empower stakeholders with real-time insights that drive efficiency, support strategic planning, and uncover opportunities for process improvement.

"Erin, Adam, and the entire team were outstanding. Great sessions too.”

– Ilya Yakovlev
Chief Information Officer
York College of PA

Protecting Privacy in the Age of AI Infused Pedagogy

With the increasing adoption of AI in educational environments, there are critical privacy and security considerations that arise. Teresa Keeler, Project Manager, NJIT, led the session, Protecting Privacy in the Age of AI Infused Pedagogy, and explored various ways AI is being utilized in education, from personalized learning platforms and intelligent tutoring systems to automated assessment tools, content generation, and administrative analytics. In higher education, this extends to research support, student success prediction, and advanced pedagogical tools.

Key concerns for many organizations include data storage, access protocols, the risk of de-anonymization, and the need to align with relevant data privacy regulations. Keeler discussed this “data dilemma” and the types of sensitive student and interaction data collected by AI tools. She also delved into the cybersecurity threats posed by AI, such as data breaches and sophisticated phishing attacks, and the challenge of AI-generated misinformation and its impact on academic integrity. Attendees learned about a proactive, multi-step approach for responsible AI integration, including developing clear institutional policies, conducting vendor vetting and providing comprehensive training for faculty and staff.

Modernizing Cybersecurity in Higher Ed

Modernizing Cybersecurity in Higher Ed: How Stevens IT Transformed User Risk Management explored how Stevens Institute of Technology overhauled its cybersecurity training by replacing outdated, static modules with a real-time, adaptive approach to user risk. Jeremy Livingston, CISO at Stevens, and David DellaPelle, CEO of Dune Security, discussed the implementation of Dune Security’s User Adaptive Risk Management platform, which enabled role-based testing and tailored training for faculty, staff, and students in response to increasingly personalized social engineering threats.

The session detailed how Stevens eliminated generic compliance training in under a month, introduced individual and departmental risk scoring, and integrated the platform with Workday and Okta to monitor user behavior and access. Attendees walked away with a blueprint for shifting from traditional awareness programs to action-oriented strategies, illustrating how educational institutions can build scalable, human-centered cybersecurity defenses.

"It was a nice event to attend and great to see some faces I hadn't seen in a while.”

– Ron Loneker Jr.
Director, IT Special Projects
Saint Elizabeth University

Real-World Applications and Outcomes of the New Higher Education Paradigm

EdgeCon’s keynote panel, AI in Action: Real-World Applications and Outcomes of the New Higher Education Paradigm, explored how artificial intelligence is actively transforming higher education, from teaching and research to campus services and operations. Featuring senior campus leaders Jeffrey Rubin, Senior Vice President for Digital Transformation and Chief Digital Officer, Syracuse University, and Devendra Mehta, Digital Strategy and Data Analytics Officer, Fairleigh Dickinson University, the session showcased real-world case studies and data-driven strategies that demonstrate AI’s measurable impact across institutions.

The panelists shared practical insights on implementing AI at scale, highlighting lessons in policy development, digital strategy, and return on investment. Attendees gained actionable guidance on navigating the evolving AI landscape in academia, with a focus on delivering sustainable, high-impact solutions in today’s digital-first education environment.

Rethinking AI Readiness, Risk, and Responsibility

Higher education faces a pressing question: Are we truly ready to harness AI effectively, responsibly, and sustainably? In this breakout session, Nandini Janardhan, Programmer Analyst/Applications Manager, Fairleigh Dickinson University, and Sahana Varadaraju, Senior Application Developer, Rowan University, challenged institutions to go beyond AI awareness and critically assess their true readiness for responsible and sustainable adoption. They guided attendees through a comprehensive AI readiness framework covering technical infrastructure, institutional culture, and governance practices.

Participants learned to identify key barriers, ranging from financial constraints to ethical concerns, and evaluate sustainability through the lenses of equity, environmental impact, and algorithmic fairness. The session emphasized that effective AI implementation in higher education requires more than technology; it demands strategic alignment, thoughtful governance, and tailored solutions. Attendees left equipped with practical tools, including a self-assessment checklist and roadmap template, to begin or refine their institution’s AI journey.

"The breakouts were absolutely spectacular”

– Keri Salyards
Instructional Technologist
Mount Aloysius College

Turning Process, Architecture, and Data into Institutional Advantage

Strategic Foundations for AI: Turning Process, Architecture, and Data into Institutional Advantage debunked the myth that AI can be seamlessly integrated into higher education without foundational preparation. Instead, presenters emphasized that sustainable AI success starts with process clarity and disciplined system design. By mapping institutional operations across the student lifecycle and aligning enterprise architecture with mission, colleges and universities can create the strategic groundwork needed for AI to drive real impact.

Attendees learned about the importance of leadership in demanding alignment before adoption, treating data as a strategic asset through governance of the "Five Vs,” and preparing for real-time decision-making via HTAP platforms. Without these foundations, AI is a distraction; with them, AI becomes a catalyst for competitiveness, innovation, and student success.

Faculty-Informed Strategies to Improve Online Course Development

Based on qualitative research with faculty who collaborated on online course design, From Research to Practice: Five Faculty-Informed Strategies to Improve Online Course Development outlined five research-backed, actionable strategies to improve online teaching effectiveness and reduce faculty resistance. MaryKay McGuire, Ed.D., Learning Experience Designer, Siena College, and Danielle Cox, M.Ed., shared strategies to integrate adult learning theory into instructional design without overwhelming faculty, recommendations for improving collaboration between course designers and instructors, and ideas for scaling faculty support as AI and automation reshape online teaching. This session bridged the gap between institutional priorities and lived faculty experience, offering a strategic and sustainable model for instructional improvement.

Solving the AI Faculty Development Puzzle

In Putting the Pieces Together: Solving the AI Faculty Development Puzzle, presenters Carly Hart, Director, Instructional Design & Technology, and Naomi Marmorstein, Associate Provost for Faculty Affairs, from Rutgers University-Camden explored the institution’s challenges and successes when implementing year-long, campus-wide AI faculty development programming. They shared how they navigated a wide spectrum of faculty attitudes, from enthusiastic early adopters to those who view generative AI as a fundamental threat to academic integrity. Their experience underscored that one-size-fits-all approaches fall short; instead, effective faculty development must address diverse pedagogical needs, discipline-specific concerns, and deeper philosophical questions around authorship, creativity, and knowledge creation in the age of AI.

Demystifying AI Adoption in Higher Education

For institutions looking to move beyond AI buzzwords and into real-world impact, the collaborative session, AI Unlocked: Resources, Policy, and Faculty Training, aimed to demystify AI adoption in higher education from three critical vantage points. Dr. Forough Ghahramani, Associate Vice President for Research, Innovation, and Sponsored Programs, Edge, kicked things off with an insider’s tour of the National AI Research Resource (NAIRR) Pilot, an invaluable toolkit now available to educators and researchers nationwide. John Schiess, Technical Director, Office of Information Technology (OIT), Brookdale Community College, then explored institutional AI policy and regulation and shared actionable strategies for crafting guidelines that support innovation while managing risk.

Rounding out the session, Michael Qaissaunee, Professor and Co-Chair, Engineering and Technology, Brookdale Community College, revealed lessons learned from piloting faculty training programs designed to boost AI literacy and spark creative teaching applications. Attendees gained practical insights and walked away with curated instructional materials and resources to jumpstart their own AI journeys.

Learning Experience Design and Design Thinking Together

Brian Gall, Director, Learning Experience Design, Villanova University, examined the strategic expansion and integration of specialized design roles within Villanova University's Office of Online Programs in the session An Emerging Trend: Learning Experience Design and Design Thinking Together. Drawing from their organization’s structure, the session explored the distinct yet complementary roles of each design team member: Learning Experience Designers who focus on holistic student journey mapping and engagement strategies; Multimedia Experience Designers who create immersive, interactive content that enhances cognitive load and retention; and Instructional Designers who ensure the learning management system and learning technologies work together to achieve the goals of the faculty member.

Recognizing that not all institutions have the resources for extensive staffing, the session concluded with role hybridization models, technology solutions that amplify individual capacity, and practical strategies for implementing similar frameworks with smaller teams. Attendees also gained concrete tools for assessing their own organizational needs, building compelling cases for design team expansion, and implementing design thinking approaches regardless of team size.

"As always a great conference and networking event! Fantastic job done by the entire Edge Team! Thank You!”

– Ron Spaide
Chief Information Officer
Bergen Community College

Modernizing Virtual Desktop Delivery

Choosing the right virtual desktop solution is a critical yet complex decision for any institution. In this session, Chris Treib, Vice President of Information Technology, Geneva College, shared an insightful look into the college’s journey transitioning from VMware to Apporto. He shared Geneva College’s experience evaluating different virtual desktop approaches, the specific challenges they faced, and the factors that ultimately influenced their decision to explore alternatives.

The session offered valuable, real-world takeaways for IT leaders exploring or undergoing similar transitions. Attendees gained practical lessons on managing migration, evaluating platforms, and understanding the trade-offs involved. With a focus on outcomes and institutional fit, this session equipped decision-makers with the knowledge and confidence to assess their own virtual desktop strategies more effectively.

Path to GLBA Compliance

In the spring of 2024, Saint Elizabeth University was required to implement Gramm-Leach-Bliley Act (GLBA) compliance requirements under the FTC Safeguards Rule. During A Small University's Path to GLBA Compliance, Ron Loneker, Jr., Director, IT Special Projects at Saint Elizabeth University, presented how the university responded to the requests of their auditors and how it was cleared by them and the Federal Student Aid Office. Following the presentation and Q&A, the session opened into an engaging discussion where other institutions shared their own challenges and experiences in working toward GLBA compliance.

Artificial Intelligence in Higher Education

As AI rapidly reshapes the academic landscape, it offers both transformative potential and pressing challenges. Artificial Intelligence in Higher Education: Threat or Opportunity? explored the four “evils” of artificial intelligence in higher education: The Hero, The Career Terminator, The Academic Cheat, and The Intel Spiller. Through real-world examples, the presentation examined how AI is impacting teaching roles, academic integrity, and data privacy, while also highlighting opportunities to enhance learning and streamline operations. This session also equipped attendees with practical strategies for ethical adoption, increased transparency, and meaningful collaboration, empowering institutions to leverage AI as a force for good rather than a disruptive threat.

Thank you Exhibitor Sponsors

The post EdgeCon Autumn 2025 appeared first on Edge, the Nation's Nonprofit Technology Consortium.


Blockchain Commons

2025 Q3 Blockchain Commons Report

It has been both an innovative and busy quarter at Blockchain Commons. Here’s some of the main things that we worked on this summer:

Join the Club: Introducing the Gordian Club; The Club Made Reality; Next Step: Hubert
New Dev Pages: Provenance Marks; Gordian Clubs; Hubert Again!; Other Recent Work
Presentations: Swiss e-ID; TABConf 7
FROST Updates: FROST & Bitcoin; FROST Demos; FROST Verify
ZeWIF Updates: Error Handling; Back to the Stack
Work in Progress: The Architecture of Autonomy; Learning FROST from the Command Line; XID Tutorials

Join the Club!

We’ve been introducing a lot of new ideas in the last year, and our newest is the Gordian Club, an autonomous cryptographic object (ACO)—though it’s based on ideas that go back decades!

Introducing the Gordian Club. So what’s a Gordian Club? The abbreviation ACO says it all: a Gordian Club allows for the autonomous and cryptographically protected distribution of information that can be updated over time. You don’t have to depend on a network, and you can’t be censored as a result! A Gordian Club provides resilience when systems are under stress and can support use cases such as disaster relief, protecting data over extended periods of time, and keeping data private in the face of corporate or government espionage. Read our introductory Musings on the topic for a lot more.

The Club Made Reality. The Gordian Club isn’t just a theory! We’ve released a Rust library and a CLI app so that anyone can test out the functionality and see its potential. Our October Gordian Meeting also included a full demo and presentation.

Next Step: Hubert. If you don’t depend on networks, how do you exchange Gordian Clubs? The beauty is that you can do so in any way you see fit. You certainly can transmit over the network, with a service like Signal offering a secure, private way to do so. But you could also mail a thumb drive or even print a QR code in the newspaper. We suspect the best methods will be the most automated, so we’re designing a “dead-drop” server that you can use to exchange Clubs. We call it Hubert after the “berts” of information that were exchanged in Project Xanadu (which was the inspiration for our own work on Gordian Clubs). Hubert is one of several works that we have in process as we page over to the new quarter, so more on that at year end!

New Dev Pages

The release of a number of new innovative technologies has resulted in the addition of new pages for developers. These are the places to go for the overview of our newest work and the links to all the details.

Provenance Marks. Provenance Marks provide a cryptographically-secured system for establishing and verifying the authenticity of works. Not only can you see that something was authentically signed, but you can also trace changes through a chain of authenticated provenances. We introduced provenance marks last quarter at a Gordian Developer Meeting, but we’ve now got a provenance mark developer page that has a lot more of the details.
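To make the chaining idea concrete, here is a minimal hash-chain sketch. This is purely illustrative and is not the Blockchain Commons provenance mark format (see their developer page for the real design, which is richer than a bare hash chain): each mark commits to the previous mark, so any tampering anywhere in the history breaks verification.

```python
# Illustrative only: a toy hash-chain "provenance mark". Each mark
# commits to the hash of the previous mark, so changing any earlier
# entry invalidates everything after it. NOT the actual Blockchain
# Commons provenance mark format.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first mark

def _digest(payload: str, prev: str) -> str:
    data = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def make_mark(payload: str, prev_hash: str) -> dict:
    return {"payload": payload, "prev": prev_hash,
            "hash": _digest(payload, prev_hash)}

def verify_chain(marks: list[dict]) -> bool:
    prev = GENESIS
    for m in marks:
        # Each mark must point at its predecessor and hash correctly.
        if m["prev"] != prev or m["hash"] != _digest(m["payload"], m["prev"]):
            return False
        prev = m["hash"]
    return True

chain = [make_mark("v1 of the work", GENESIS)]
chain.append(make_mark("v2 of the work", chain[-1]["hash"]))
print(verify_chain(chain))           # True
chain[0]["payload"] = "forged v1"
print(verify_chain(chain))           # False
```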

Gordian Clubs. Our newest work, on Gordian Clubs, has a whole hierarchy of developer pages, including pages on why we think autonomy is important, how Gordian Clubs use our stack, how ocaps support delegation, and the history of the Clubs idea.

Hubert Again! We’ve also created a single page for Hubert, our info dead-drop hub. Though one of its earliest use cases is to distribute Gordian Clubs, it might also be used in other Multi-party Computation (MPC) scenarios such as FROST.

Other Recent Work. Our developer pages on cliques and XIDs are slightly older, but if you want to review the new ideas we’ve been presenting in the last year, those are the other two pages to look at! We also just expanded our Envelope pages with a look at permits, a fundamental feature of Gordian Envelope that has never had its own page before.

Presentations

We were thrilled that Christopher Allen was asked to present to two different groups, just as the quarter turned.

Swiss e-ID. Christopher has been talking with officials regarding the recently approved Swiss e-ID digital identity for a few months. On October 2, following the referendum’s approval by the Swiss people, he was invited to give a presentation on “Five Anchors to Preserve Digital Autonomy & Democratic Sovereignty”. We’ve also published an article that summarizes the main points.


TABConf 7. Christopher was also invited to help kick off a digital-identity track at TABConf, a technical Bitcoin conference. He made two presentations there, on “Beyond Bitcoin: Engineering Exodus Protocols for Coordination & Identity” and on “The Sad State of Decentralized Identity (And What to Do About it)”. (No videos of these ones, sadly!)

FROST Updates

We’ve been delighted to continue our work with FROST this year thanks to a grant from HRF. We made some big progress in Q3.

FROST & Bitcoin. ZF FROST is perhaps the best FROST library out there (thanks to its completeness, usability, and security review). As the name indicates, it was created with Zcash in mind. We thought that one of the big things we could do with our HRF grant was to bring that capability to Bitcoin. To support that, we issued a PR for ZF FROST to support secp256k1, then created our own branch to support the tweak needed to send Bitcoins with Taproot. Together, these two updates provide everything you need to sign Bitcoin transactions with a FROST group.

FROST Demos. Putting together the puzzle pieces for FROST signing can still be a little complex, so we’ve created some demos for how it works. The demo for Trusted Dealer was held at our August Gordian Developers meeting (also see our step-by-step walkthrough). We then produced an additional video for signing after Distributed Key Generation (again, also see the step-by-step walkthrough).
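For intuition on what the Trusted Dealer demo is doing under the hood: the dealer splits a signing key into shares so that any threshold t of n participants can cooperate. The toy sketch below shows only the underlying Shamir share-and-recover arithmetic over a prime field; FROST itself never reconstructs the secret (shares are used to co-sign), and it works in an elliptic-curve group rather than this toy field.

```python
# Toy Shamir secret sharing over a prime field: the arithmetic that a
# FROST "trusted dealer" setup builds on. Illustrative only; real FROST
# never recombines the secret and uses a secure elliptic-curve group.
import random

P = 2**127 - 1  # a Mersenne prime; fine for a toy field

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Random degree-(t-1) polynomial with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(secret=424242, t=2, n=3)
print(recover(shares[:2]))  # 424242 -- any 2 of the 3 shares suffice
```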


FROST Verify. It turns out that there aren’t great tools for verifying FROST signatures, so we created one. This is made specifically to work with the FROST cli tools that are part of the ZF FROST Package.
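For context on what such a verification tool checks: a finished FROST signature verifies exactly like an ordinary single-signer Schnorr signature, against one equation. The sketch below uses a small Schnorr subgroup of the integers mod p to show that equation; it is illustrative math only (the ZF FROST tools work over elliptic curves, and these parameters are far too small for real use).

```python
# Toy Schnorr signatures over a small prime-order subgroup, showing the
# single check a verifier performs: g^s == R * y^e (mod p). Parameters
# are toy-sized; real FROST verification does this over a secure curve.
import hashlib
import random

q = 5003          # prime order of the subgroup
p = 2 * q + 1     # safe prime, so an order-q subgroup exists
g = 4             # generator of the order-q subgroup (a quadratic residue)

def keygen():
    x = random.randrange(1, q)       # secret key
    return x, pow(g, x, p)           # (sk, pk)

def challenge(R: int, msg: bytes) -> int:
    h = hashlib.sha256(str(R).encode() + msg).digest()
    return int.from_bytes(h, "big") % q

def sign(x: int, msg: bytes):
    k = random.randrange(1, q)       # per-signature nonce
    R = pow(g, k, p)
    e = challenge(R, msg)
    return R, (k + e * x) % q        # (commitment, response)

def verify(y: int, msg: bytes, sig) -> bool:
    R, s = sig
    e = challenge(R, msg)
    # g^s = g^(k + e*x) = R * y^e, so this holds iff the signature is valid.
    return pow(g, s, p) == (R * pow(y, e, p)) % p

sk, pk = keygen()
R, s = sign(sk, b"hello frost")
print(verify(pk, b"hello frost", (R, s)))          # True
print(verify(pk, b"hello frost", (R, (s + 1) % q)))  # False: bad response
```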

ZeWIF Updates

Speaking of Zcash, we’ve also done a bit more work on ZeWIF, our Zcash Wallet Interchange Format, which was a major focus at the start of the year.

Error Handling. Our biggest mark of success for a project is when it begins to come into wider use, because we’re not trying to create theory at Blockchain Commons, we’re trying to create specifications that make real-life users more independent. So when a request came over from Zcash’s Electric Coin Company to redo how we report error messages in ZeWIF, we were happy to do so. As a result, all of the ZeWIF-related crates were updated in Q3.

Back to the Stack. This led to general updates across our entire stack, to move away from anyhow for error reporting (except in apps such as CLI tools). This was part of the continual updating of our stack that we do to keep it clean and ready to use (the last was in early July, this one in September). There were also documentation updates and light code cleanups that occurred here and there as part of this work.

Work in Progress

Although that feels like a healthy amount of work for the quarter, we also put considerable work into other projects that have not yet seen completion.

The Architecture of Autonomy. We mentioned last quarter that Christopher was invited to speak at the Bitcoin Policy Summit. That got him thinking about a lot of big picture stuff concerning what works and what doesn’t for identity online. We’ve worked through a few different iterations of a major policy work on the topic, at the moment called “The Architecture of Autonomy,” but have only shared it with a few who are major movers in the area of digital identity (if that’s you, drop Christopher a line!).

Learning FROST from the Command Line. Learning Bitcoin from the Command Line has long been one of our most successful projects, so when we pitched HRF last year on continuing our FROST support, we suggested that we create a similar (but much shorter) tutorial for FROST: Learning FROST from the Command Line. We’ve drafted a bit more than half of the course so far (chapters 1, 2, and the first part of 3), so we’re definitely not done, but if you want to get a head start, you can look at it now. We’ll be finishing this up before year’s end.

XID Tutorials. Finally, we should point again to the XID core concepts docs that we prepared in the previous quarter, the first part of our XID Quickstart. The concepts docs are solid and a great hands-on look at much of our stack. The linked tutorials are still in process. (Another topic for Fall or maybe Winter, as Learning FROST is before it in our tech-writing queue).

That’s it for this quarter. We hope you’re excited by some of the new work we’re doing (such as Gordian clubs and Hubert) and some of our newest presentations. If you’d like to help support this work, please consider becoming a GitHub sponsor. If you’d like to make a larger contribution or if you want to partner with us directly to integrate some of our tech, please drop us a line.

Tuesday, 21. October 2025

FIDO Alliance

MobileIDWorld: Google Chrome Launches Automatic Passkey Generation for Android Users

Google Chrome has introduced a new automatic passkey implementation for Android that streamlines the user authentication process by automatically generating passkeys after password-based sign-ins. The development marks a significant advancement in the broader industry transition from traditional passwords to more secure authentication methods, following similar initiatives from Apple and Microsoft.


Biometric Update: BixeLab joins FIDO Face Verification program, certifies Aware 

Aware has received FIDO Alliance Certification for Face Verification, gaining recognition for its identity verification tech including liveness detection and facial matching capabilities.

The certification affirms that Aware’s identity verification platform meets the FIDO Alliance’s standards for biometric performance, security and fairness. Testing was conducted by BixeLab, which recently revealed a new contract, CTO and facility, and is one of only three labs globally accredited to evaluate biometric systems under the U.S. National Institute of Standards and Technology (NIST) NVLAP program.

“FIDO’s Face Verification Certification represents a powerful step toward a passwordless future built on trust, accuracy, and strong security,” said Ajay Amlani, CEO of Aware, Inc. “Earning this certification demonstrates not only our technological excellence but our deep commitment to transparency and innovation in biometrics.”


Biometric Update: HID upgrades passkey, FIDO authentication capabilities with IDmelon acquisition

Texas-based HID has reached an agreement to acquire Vancouver, Canada-based logical access control provider IDmelon to upgrade its portfolio of FIDO authentication offerings. The addition of IDmelon’s technology enables HID to easily implement customers’ physical access cards and mobile devices as FIDO2 security keys, according to the joint announcement.

IDmelon software users can turn existing identifiers like biometrics, physical credentials and smartphones into enterprise-grade FIDO security keys. IDmelon also provides hardware to support passkeys and other FIDO standards for secure and convenient access control.


Techstination Radio/Podcast: What you should know about passkeys for online security

Interview with FIDO’s Andrew Shikiar on what you should know about passkeys for online security.


WDEF News: Switching to Passkeys for Safety

CHATTANOOGA, Tenn. (WDEF) – October is Cybersecurity Month, a reminder for everyone to take small but meaningful steps to stay safe online. 


WTVM News: FIDO’s Megan Shamas talks online safety, using passkeys

Megan Shamas, CMO of the FIDO Alliance shares why passkeys may be more effective than passwords during Cybersecurity Month.

Thursday, 16. October 2025

FIDO Alliance

Authenticate 2025: Day 3 Recap

By: FIDO staff

The first two days of Authenticate 2025 delivered strong technical content, user insights and lots of thoughtful discussions.

The final day of Authenticate 2025 went a step further, taking attendees on a deep dive into important current and emerging topics for authentication, including biometrics, agentic AI and verifiable credentials.

Passkeys and Verifiable Digital Credentials are Not Competitors

A key theme across multiple sessions at Authenticate 2025 was the growing need for, and development of, standards for Verifiable Digital Credentials.

In a session led by Christine Owen, Field CTO at 1Kosmos and Teresa Wu, Vice President, Smart Credentials & Access at IDEMIA Public Security, the roles of passkeys and verifiable digital credentials (VDCs) within the evolving landscape of secure digital identity were clarified.

They emphasized that passkeys and VDCs are not competing technologies. Instead, they are best used together to strengthen both authentication and identity verification processes. Passkeys offer privacy preservation and are resistant to phishing, while VDCs provide digital representations of identity attributes that can be selectively shared when needed.
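The complementary roles come into focus in the mechanics of selective disclosure. In formats such as SD-JWT, the issuer signs salted digests of attributes, so the holder can later reveal only chosen ones. The toy sketch below illustrates the idea; it uses a non-cryptographic hash and fixed salts as stand-ins for real primitives, and omits the issuer's signature entirely:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a cryptographic hash; real systems use e.g. SHA-256.
fn toy_digest(salt: &str, name: &str, value: &str) -> u64 {
    let mut h = DefaultHasher::new();
    salt.hash(&mut h);
    name.hash(&mut h);
    value.hash(&mut h);
    h.finish()
}

fn main() {
    // Issuer: the credential carries only digests of the claims
    // (in reality, signed; fixed salts here are purely illustrative).
    let claims = [("name", "Alice", "salt-1"), ("birthdate", "1990-01-01", "salt-2")];
    let signed_digests: Vec<u64> =
        claims.iter().map(|&(n, v, s)| toy_digest(s, n, v)).collect();

    // Holder: chooses to disclose only the birthdate (salt + name + value).
    let (s, n, v) = ("salt-2", "birthdate", "1990-01-01");

    // Verifier: re-hashes the disclosure and checks it against the digests.
    assert!(signed_digests.contains(&toy_digest(s, n, v)));
    // A tampered value would not match any signed digest.
    assert!(!signed_digests.contains(&toy_digest("salt-2", "birthdate", "2000-01-01")));
    println!("birthdate verified without revealing name");
}
```

Real credential formats add an issuer signature over the digests and a fresh random salt per claim; the toy shows only the disclose-and-check flow.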

Breaking Glass: Restoring Access After a Disaster

In a thought-provoking session, Dean H. Saxe, Principal Security Engineer, Identity & Access Management at Remitly, explored the challenges and importance of digital estate management, particularly in the context of disasters and emergencies. 

Saxe described how personal experiences and recent natural catastrophes highlight the necessity of preparing for sudden loss of access to digital assets.

A hands-on experiment conducted by Saxe tested how well a “break glass” process works when all personal devices are lost. The process included relying on physical identity documents and a safe deposit box to regain access to important accounts like 1Password, Apple iCloud, and Google services. Saxe faced unexpected obstacles, such as a missing credential and issues getting recovery codes, which illustrated the real-world difficulties of these situations.

The findings of Saxe’s experiment stressed the need for regular testing and updating of disaster preparedness plans.

“So the failure to test your backup strategy means that you do not have a valid backup strategy,” Saxe said.

From the Trenches: Passkeys at PayPal

PayPal is an early adopter of passkeys with the initial motivation being focused on reducing password reliance.

“It’s time to break free from the password prison,” Mahendar Madhavan, Director of Product, Identity at PayPal said.

PayPal launched passkeys in 2022, saw a surge in mid-2024, and now boasts more than 100 million enrolled users with a 96% login success rate. This surge has delivered results—phishing-related losses have dropped by nearly half compared to traditional password and OTP methods.

Mohit Ganotra, Identity PM Lead at PayPal explained that initial efforts zeroed in on user education and reducing friction during login. By optimizing the login experience and targeting enrollment prompts during checkouts and password recovery, PayPal now sees 300,000 incremental enrollments each month from checkout alone, plus 75,000 from automatic passkey upgrades.

“Passkeys is still a new technology, it needs to go through the adoption curve that every new technology has,” Madhavan said. “So you as a relying party need to nudge users, guide users, encourage users to adopt a passkey at various points in their journey and how you do it is, you hyper personalize the content for consumers and users, and you talk in their language.”

Safeguarding Enterprise Online Credentials Post Authentication

While passkeys solve authentication security, post-authentication remains vulnerable through bearer token theft and session hijacking. 

There are, however, numerous technical approaches that can help mitigate the risk, which were described in detail by An Ho, Software Solution Architect at IBM, and Shane Weeden, Senior Technical Staff Member at IBM.

The session introduced two complementary technologies designed to address this vulnerability. DPoP (Demonstrating Proof of Possession) extends OAuth 2.0 to create sender-constrained access and refresh tokens for API flows, while DBSC (Device-Bound Session Credentials) binds browser session cookies to specific devices. Both technologies use asymmetric cryptography to ensure that stolen credentials become unusable by attackers, as they require proof of possession of private keys that only the legitimate client or browser holds.
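A rough sketch of the sender-constrained pattern DPoP defines (RFC 9449): the access token is bound to a client key, and each request must carry a fresh proof that only the key holder can produce. This is a toy, with illustrative names and a keyed hash standing in for DPoP's real asymmetric JWT signature, so treat the flow, not the crypto, as the point:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Keyed-hash stand-in for a signature (real DPoP signs a JWT with e.g. ES256).
fn keyed_mac(key: &str, msg: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    msg.hash(&mut h);
    h.finish()
}

struct AccessToken {
    value: String,
    bound_key: String, // stands in for the `cnf.jkt` key-thumbprint claim
}

// Client: every request carries a fresh proof over the HTTP method + URI.
fn make_proof(client_key: &str, method: &str, uri: &str) -> u64 {
    keyed_mac(client_key, &format!("{method} {uri}"))
}

// Resource server: accept only if the proof matches the key the token is bound to.
fn verify_request(token: &AccessToken, proof: u64, method: &str, uri: &str) -> bool {
    proof == keyed_mac(&token.bound_key, &format!("{method} {uri}"))
}

fn main() {
    let key = "client-private-key";
    let token = AccessToken {
        value: "opaque-access-token".into(),
        bound_key: key.into(),
    };

    // Legitimate client holds the key, so its proof verifies.
    let proof = make_proof(key, "GET", "/api/resource");
    assert!(verify_request(&token, proof, "GET", "/api/resource"));

    // A thief holds only the bearer token, not the key: forged proofs fail.
    let forged = make_proof("attacker-guess", "GET", "/api/resource");
    assert!(!verify_request(&token, forged, "GET", "/api/resource"));
    println!("stolen token alone is unusable: {}", token.value);
}
```

DBSC applies the same idea to browser session cookies, with the browser holding the device-bound key.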

“We believe that you need to look at a holistic view of your sessions,” Weeden said. “You need to look at not just how clients and users log in, but also how to maintain a form of continuous authentication with the client or browser that is utilizing that session.”

From the Trenches: Improving Experience and Security at Databricks with Passkeys  

Meir Wahnon, Co-Founder of Descope, explored how Databricks approached the challenges of unifying authentication and improving security across multiple cloud-based apps.

Databricks partnered with Wahnon’s company to figure out the best approach. The fragmented login experience had made it hard for users and the IAM team to manage access and maintain full visibility. Databricks tackled this by adopting a centralized identity provider and federation to ensure a more seamless single sign-on process. A major focus was the decision to add passkeys as an optional multi-factor authentication method. This choice was driven by Databricks’ commitment to balancing strong security for customers with a smooth, low-friction user experience.

The deployment of passkeys came with careful attention to user adoption and support. Databricks made passkeys optional to minimize disruption, and included easy rollback options if customer uptake became a challenge.

“The balance between user experience and security is always a question when you build a user journey,” Wahnon said.

From the Trenches: Alibaba’s Passkey Story

Alibaba is expanding its use of passkey authentication across business units including AliExpress and DingTalk. 

Preeti Ohri Khemani, Senior Director at Infineon Technologies, which works with Alibaba, explained that the main goal was to improve security and user experience by reducing dependence on traditional passwords and costly SMS one-time passwords. The rollout has led to faster, more convenient logins and a smoother registration process for users.

On AliExpress, the deployment of passkeys simplified the login flow and eliminated extra steps for users. This change resulted in a reported 94% increase in login success rates along with an 85% reduction in login times. Users no longer need to manage passwords or wait for verification codes, which also lowered operational costs and security risks.

DingTalk, Alibaba’s internal messaging platform with 28 million daily active users, has similarly benefited from passkey integration. Engineers at Alibaba focused on making passkey adoption easy by sharing clear coding samples, open-source libraries, and helpful tools.

Keynotes: The Path to Digital Trust

Ashish Jain, CTO of OneSpan, used his keynote to explore the ongoing challenge of establishing trust in digital interactions. Jain traced the journey from physical trust in face-to-face transactions to today’s anonymous digital world.

Jain outlined the tension between user experience and security. He cited how complex password policies and frequent multi-factor authentication can frustrate users, yet they are essential for protection. The discussion highlighted how the industry is coming closer to a practical solution through the adoption of passkeys.

“In the physical world, trust is emotional,” Jain said. “In the digital world, trust is an architecture.”

Keynote: Biometrics Underpinning the Future of Digital Identity

Continuing many of the same themes from Amlani’s keynote, Stephanie Shuckers, Director of the Center for Identification Technology Research (CITeR) at the University of North Carolina at Charlotte, and Gordon Thomas, Sr. Director, Product Management at Qualcomm, provided more insights on the critical role of biometrics.

Thomas noted that while face recognition remains popular, fingerprints offer enhanced privacy because they are less likely to be exposed online or through surveillance.

“It’s not really about proving who you are, but it’s about building and securing your digital identity layer by layer with trust every time you use it,” Thomas said.

Shuckers noted that there is a need for strong assurance levels in biometric technology on consumer devices. That’s where standards help ensure both user safety and usability. The FIDO Alliance’s programs test biometric systems for vulnerabilities such as deep fakes and injection attacks. These certifications are crucial for building trust in digital identity systems. 

Keynote: Microsoft Details What’s Needed to Authenticate Agentic AI

Pamela Dingle, Director of Identity Standards at Microsoft, led a session on the challenges and opportunities in authenticating AI agents within enterprises.

She stressed the importance of understanding what an agent is and pointed out that simply asking “who authenticates the agent” is not enough. Dingle highlighted the complexity that arises from having many agents running in different domains, each with unique tasks and identifiers. Administrators often struggle to see the full chain of actions, which complicates decision making and resource management.

Dingle introduced the idea of using “blueprints” and “task masters” to authenticate not just the agent but also the context and source of its tasks. She emphasized that knowing only the identifier is not enough. The future will require richer, composite data about each agent’s purpose and origin.

“The agentic AI push gives us an opportunity to build the tools enterprises need to run better,” Dingle said.

Keynote Panel: Digital Wallets and Verifiable Credentials: Defining What’s Next 

Verifiable credentials was a hot topic at Authenticate 2025 and it was one that was tackled in the final keynote panel.

The panel included Teresa Wu, Vice President, Smart Credentials and Access at IDEMIA Public Security, Loffie Jordaan, Business Solutions Architect at AAMVA, Christopher Goh, International Advisor, Digital Identity & Verifiable Credentials at Valid8 and Lee Campbell, Identity and Authentication Lead, Android at Google.

The discussion began with an overview of the ecosystem, emphasizing the interaction between the wallet, issuer, and relying party. This “triangle of trust” serves as the cornerstone for secure digital credential use. Panelists stressed the need for privacy, interoperability, and certification as this shift accelerates, highlighting lessons learned and ongoing challenges like fragmentation across platforms.

FIDO Alliance’s growing focus on digital credentials was described as a catalyst for industry progress. “FIDO is getting involved in the digital credential space,” Campbell said. “FIDO does an exceptional job at execution.”

That’s a Wrap!

Wrapping up the Authenticate 2025 program, FIDO Alliance Executive Director Andrew Shikiar emphasized that the event continues to grow year over year.

For the 2025 event there were 150 sessions and 170 speakers. 

“Passkeys are driving measurable business outcomes,” Shikiar said. “One thing I thought was really cool this year about some of the presentations, it wasn’t just another ‘rah rah’ passkeys are great story, but also companies are coming back for their second time or third time, talking about progress and lessons learned and how they’re evolving, pivoting and growing.”

Speaking of growth, the Authenticate event is growing for 2026, with a new Authenticate APAC event set for June 2-3 in Singapore. Authenticate 2026 will be back in California at the same time next year.

Between now and then, the FIDO Alliance will be sharing lots of informative content and hosting educational events. Stay connected and sign up for updates.


Authenticate 2025: Day 2 Recap

By: FIDO Staff

Following on the information-packed day one, day two of Authenticate 2025 continued the trend.

Over the course of the day, users from across different geographic areas and industry verticals detailed their experiences with passkeys. Discussion on how passkeys fit into the payment ecosystem and the intersection with agentic AI were also hot topics of discussion across multiple sessions. 

Keynotes: A Brief History of Strong Authentication

Christopher Harrell, Chief Technology Officer at Yubico, kicked off the morning keynote tracing the journey of authentication practices from basic shared secrets to the modern era. 

Harrell outlined how early systems based on shared secrets and memorized passwords often failed due to human error and simplicity. Multi-factor authentication was introduced to address these gaps by layering security, but still relied heavily on passwords or similar secrets. He noted that the evolution of the market to passkeys eliminates the vulnerabilities of shared secrets and reduces the chance of phishing, making access both safer and easier for users.

“Shared secrets were never meant for the internet, we need authentication that protects you without making you remember more,” Harrell said.

Keynotes: Passkey Adoption in the UK

The United Kingdom (UK) has taken a big leap into passkeys, embracing their usage at the national level.

Darren Hutton, Identity Advisor for NHS England and Pelin Demir, UX Designer for NHS Login, detailed the adoption path and success of passkeys in the UK. The presenters shared how NHS Login serves as a nation-level identity provider for healthcare access, reaching almost the entire adult population. They discussed the evolution from passwords and OTPs to introducing passkeys. The move aimed to improve both security and accessibility for all users.

Insights from their user research revealed that although over three million users adopted passkeys within months, there were challenges. These included inconsistent user interfaces, confusion around technical terms, and accessibility barriers for screen reader users. The team found that clear guidance and familiar wording were critical to increasing adoption.

“Passkeys is a beautiful balance of technology that brings security and usability together to create a really good service,” Hutton said.

Leaders from the National Cyber Security Centre (NCSC) in the UK detailed the strong imperative to move to passkeys, noting that the majority of cyber harm to UK citizens happened through abuse of legitimate credentials.

Keynote: Visa Details Payment Passkey Efforts

Ben Aquilino, VP and Global Head of Visa Payment Passkeys and Digital Identity at Visa, explored the evolution of digital payment security from the earliest days of online commerce to the present.

Aquilino used the history of Pizza Hut’s first online order in 1994 as a gateway to highlight how payment experiences have changed due to rising concerns over fraud, describing how simple early processes became more complex to counter increasingly sophisticated threats.

A significant portion of the session focused on the technological advancements used to combat payment fraud.

Visa has recently innovated further by launching Visa Payment Passkeys. This new approach leverages passkeys and biometrics for payment authentication, aiming to offer better protection along with a seamless user experience.

“Authentication doesn’t have to be a compromise between security and convenience; it can have both,” Aquilino said.

Keynote Panel: Quantifying Passkey Benefits from Early Adopters 

In a keynote panel session led by FIDO Alliance Executive Director Andrew Shikiar, industry leaders from PayPal, NTT DOCOMO and Liminal explored the ongoing shift in the authentication landscape.

Koichi Moriyama, Chief Security Architect at NTT DOCOMO and Rakan Khalid, Head of Product, Identity at PayPal, recounted the journey from initial pilots to broader adoption, detailing technical evolution and lessons learned. Khalid emphasized the impact of evolving authentication standards on customer experience, while Moriyama described Docomo’s commitment to ecosystem-wide security improvements.

A recurring message throughout was the proven effectiveness and industry momentum behind passkey authentication. Survey data from Liminal revealed that most decision-makers now rank passkeys as their top priority for authentication investments. 

“The big surprise in the survey was that passkeys really have moved from pilot to priority,” Filip Verley, Chief Innovation Officer at Liminal said. “We’re seeing huge adoption and nearly every adopter is very satisfied.”

Both PayPal and Docomo shared that organizational and customer metrics improved after moving away from passwords, including increased sign-in success and reduced account takeovers.

“When customers use passkeys, we see about a 10-point increase in sign-in success rate over traditional multi-factor authentication,” Khalid said.

From the Trenches: Shipping Passkeys for Hundreds of Millions of Users at TikTok

TikTok’s session offered a comprehensive look at its journey to implement passkeys as a login method for hundreds of millions of users. 

The team faced the challenge of introducing passkeys in a way that would not disrupt the user experience. TikTok chose to promote passkeys through a campaign on user profile pages, leading to high engagement rates and a marked increase in adoption. Most users who set up passkeys did so thanks to the visibility and education presented within the app.

Passkey login was not only made the default for users who had enabled it, but TikTok also streamlined the signup process. 

“Overall, it has been a great journey with Passkeys and TikTok,” Yingran Xu, Software Engineer at TikTok said. “Passkey remains one of the authentication methods with the highest success rate and fastest login experience.”

From the Trenches: Lessons Learned from Roblox’s Passkey Deployment

Roblox’s effort to deploy passkeys across its platform is a response to the complex security needs of a massive and diverse user base. 

With more than half of Roblox users under 13, the challenge was to design an authentication system that is easy for children while still robust enough for professionals handling accounts with significant financial stakes. The team aimed to make access secure and simple without passwords, reducing both user frustration and customer support issues tied to account recovery.

Through a phased rollout that began with passkeys in user settings and later added passkey options during account sign-up, Roblox has shown measurable progress. Eighteen percent of active users have adopted passkeys, which led to greater engagement and higher login success rates. Experiments with the user interface revealed that highlighting passkeys at pivotal moments, such as account recovery, can drive adoption as long as users are guided clearly and are not forced through abrupt changes.

Ongoing improvements focus on making passkeys easier to use and more accessible, especially as many Roblox players move between multiple device types. An adaptive login flow led to more passkey logins and fewer users defaulting to traditional passwords. There are also new protections for top game creators, who are frequent phishing targets, ensuring only secure login methods are available for valuable accounts.

“Our vision is that all Roblox users should have secure and accessible accounts without passwords, powered by passkeys,” Yuki Bian, Product Manager at Roblox said.

From the Trenches: Using Windows Hello to Enable Passkeys for SSO

Single Sign-On (SSO) is a common approach enabling users in enterprise environments to use a single credential to get access to multiple applications.

In a deep dive session, Amandeep Nagra, Sr. Director, Identity and Access Management at Crowdstrike detailed how Windows Hello for Business was implemented as a passkey solution for seamless Single Sign-On across enterprise devices. By turning device logins into trusted passkeys, users no longer needed to remember passwords or manage separate app authentications.

The solution involves generating a device-level PRT token using Windows Hello for Business PINs, which enables SSO across various apps. The project saved 78,000 hours of work annually.

“We turned the device login into your passkey—one sign-in, access to everything,” Nagra said.

From the Trenches: Modernizing Authentication with True Passwordless at Docusign

DocuSign is a leading provider of electronic agreement solutions that help individuals and businesses sign documents and manage contracts online. Security and identity verification are critical to its platform, as users rely on DocuSign to complete transactions that often involve sensitive or high-value documents, such as home purchases, business contracts, and legal agreements.

To meet rising threats and user demand for easier, safer access, DocuSign is working to make passwordless authentication the default experience.

The company’s authentication team has introduced passkeys, enabled biometrics, and streamlined account recovery methods. Their goal is to give users secure, reliable, and effortless ways to verify identity, whether that’s logging in to review paperwork or using a mobile device to approve a high-stakes deal.

Yuheng Huang, Engineering Manager at Docusign, noted that the login success rate for passkeys on DocuSign is 99%. In contrast, the password login success rate is only 76%.

Going beyond authentication, Dina Zheng, Product Manager at Docusign, explained that DocuSign is using passkeys with the company’s identity wallet.

“By combining capabilities with identity wallet, we’ve created a fully frictionless experience, secure enough for identity verification, yet simple enough that users barely notice the authentication step at all,” Zheng said. “This is a perfect example of how passkeys can go beyond just authentication. They’re becoming an enabler of trusted high assurance workflows across Docusign.”

Panel: Industry Perspectives on Securing Agent-Based Authentication

With the emergence of agentic AI, there are new concerns and challenges about how to secure and authenticate agents.

A panel moderated by Eran Haggiag, CEO at Glide Identity, with Lee Campbell, Identity and Authentication Lead, Android at Google; Rakan Khalid, Head of Product, Identity at PayPal; and Reid Erickson, Product Management, Network API at T-Mobile, discussed the challenges of trust and security in agent-based authentication.

Key points included the need for phishing-resistant authentication methods like passkeys and verifiable credentials to ensure user intent and prevent fraud. The discussion highlighted the importance of standardization, context-aware authentication, and human-in-the-loop verification to mitigate risks. 

“There’s lots of work going on, lots of companies are involved, lots of standards bodies involved with every single standards body out there today having some agentic group,” Campbell said. “Everybody’s talking about it, and one of the challenges is getting everyone and all the right players in the same room to have these conversations. And I think FIDO is actually quite a good place to do this.”

The Big Finale is Coming on Day 3!

While the first two days of Authenticate 2025 were stacked top to bottom with insightful sessions, Day 3 will deliver even more content.

With even more user stories coming, plus discussions on verifiable digital credentials and digital trust, Day 3 will not disappoint.

Not registered? Don’t miss out! Attend remotely and access all previous sessions on demand, and attend day 3 live via the remote attendee platform! See the full agenda and register.

Wednesday, 15. October 2025

Next Level Supply Chain Podcast with GS1

How Fragile is Your Supply Chain? Lessons from Resilient Companies

Efficiency works when everything goes to plan. But as disruptions grow more frequent and complex, resilience and preparation are what set strong supply chains apart.

In this episode, logistics expert John Manners-Bell, founder and CEO of Transport Intelligence, joins hosts Reid Jackson and Liz Sertl to discuss what leaders need to know about supply chain risk, technology, and balance.

With over 40 years in the industry advising organizations like the World Economic Forum, the UN, and the European Commission, John shares hard-earned lessons from real-world crises and why efficiency is not enough.

Listeners will gain a sharper understanding of how to prepare for disruption, enhance visibility across their networks, and utilize AI and data to build more resilient operations.

In this episode, you'll learn:

How to measure the cost of supply chain risk

Why you need to prioritize resilience in supply chain strategy

How AI helps logistics leaders anticipate risks and plan accordingly

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(04:14) Why supply chain risk is everyone's problem

(06:41) Balancing efficiency and resilience for long-term success

(11:07) Why inventory alone won't save your business

(12:51) How visibility and data transform modern supply chains

(16:24) Cyberattacks, paper backups, and recovery stories

(18:18) The rise of AI and automation in logistics

(22:12) Lessons from companies that built resilience

(25:57) The mindset every future-ready supply chain leader needs

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn

Connect with the guests: John Manners-Bell on LinkedIn | Check out Transport Intelligence


Blockchain Commons

Musings of a Trust Architect: Five Anchors to Preserve Autonomy & Sovereignty

ABSTRACT: How do you protect autonomy and democratic sovereignty in digital identity systems? This article suggests five foundations: protecting choice by design; building for an extended future; maintaining platform independence; requiring duties; and implementing institutional safeguards.

On September 28, 2025, Switzerland adopted the use of “electronic proof of identity,” or e-IDs, to be issued and administered by the Swiss government.

Use of the e-ID is meant to be voluntary and free of charge. However, there’s still real concern about the use of e-ID in Switzerland. The vote passed with just 50.4% of the voters in agreement. A previous vote on the same subject failed in 2021.

And, I think there’s real cause for concern.

Fortunately, I was able to talk directly about these concerns: thanks to previous work that I’d presented on “The Architecture of Autonomy” (which I’ll talk more about here in the future), I was invited to present at a meeting on October 2 for hundreds of people interested in (or concerned about) e-ID, including members of Swiss government and businesses.

The video of my talk is available, but what follows is a synopsis of my major points, focusing on how to keep digital identity systems safe.

The Unique Advantages of Switzerland

Swiss e-ID is ultimately a governmental digital identity system. That means it’s not self-sovereign: the government controls issuance and maintains the system. But, that’s not disqualifying. Though I’d (obviously) prefer a self-sovereign identity system, I hope that Swiss e-ID will put us on the path to eventually produce a LESS (Legally-Enabled Self-Sovereign) system.

But for now, I think Switzerland is a place where these first steps can be reasonably taken by the government. Switzerland has a strong culture of individual autonomy. They have a constitutional principle that sovereignty resides in the people. That’s exactly what’s required if you must trust a centralized governmental entity with your identity.

So why the concern? Despite the best intentions of the Swiss government, it’d be quite easy for their new e-ID system to be subverted, much as has happened with self-sovereign identity. When I talked at the October 2 meeting, my goal was therefore to present solutions that would help to avoid potential subversion, both in Switzerland and for other governments who adopt Swiss policies and technologies without having the same philosophies of autonomy at the heart of their democracy.

I did this by presenting five “anchors” that I believe must be considered when designing a digital identity system if we want to preserve both personal autonomy and democratic sovereignty. (And of course, ensuring autonomy and sovereignty ultimately puts us on the path to self-sovereign identity.)

1. Preserve Choice by Design

Choice disappears when alternatives become second-class.

Though the Swiss e-ID promises to be “voluntary and free of charge,” this is the first place that things could go very wrong, because voluntary-in-theory doesn’t necessarily mean voluntary-in-practice: it’s possible to follow the technical requirements of such a precept without following its intent. If the use of a digital identity system is incentivized, or if critical systems become digital-only, its usage effectively becomes mandatory. But that’s not the only place that individual choice can be subverted: the identity system itself can do so if it’s too rigid about which individual elements of data a user can share.

To solve the first problem requires the issuer of the digital identity (in this case the Swiss government) to ensure that practices concerning the identity system aren’t coercive, and that nothing is denied to people who don’t (or can’t!) use the digital system. The second problem requires the creation of a technical architecture focused on user agency, so that a user can choose what to share, with whom, and when. That user agency must then be supported by a great UX that makes choosing to share only what’s absolutely required the simplest answer.

2. Build a 20-Year Architecture, Not a 2-Year Product

MVP thinking optimizes for shipping, not decades of democratic evolution.

When I co-authored TLS, I knew there were issues in the spec that needed to be addressed. I expected that to happen in 3-5 years, but it took 20! That sort of extensive bureaucratic delay isn’t unusual in the world of international standardization. This is why any digital identity system needs to make sure it’s ready for the next 20 years! Fundamentally, we need to design for the future, not for a short-term shipping deadline.

To solve this requires focusing on two critical issues: data minimization and resilience. Data minimization ensures that a system always sends out the least amount of data required for a specific need. It’s important for the long-term health of a system because it ensures we have variability: we can adjust what’s sent out as new democratic rules come into play, without having to redesign the system from scratch. (It also answers the second issue of choice, mentioned above.) Resilience means ensuring that digital identities will survive under a variety of adverse conditions, including total network failure. It readies us for changing conditions in the future.
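
As a rough illustration of data minimization, here is a minimal Python sketch. Everything in it is hypothetical (the field names and the `minimal_disclosure` helper are illustrative, not part of the Swiss e-ID design): a wallet releases only the fields a verifier requests, and answers an age check with a boolean predicate rather than the raw birth date.

```python
from datetime import date

# Hypothetical credential data; field names are illustrative only.
credential = {
    "given_name": "Anna",
    "family_name": "Muster",
    "birth_date": date(1990, 5, 1),
    "address": "Bahnhofstrasse 1, Zürich",
    "nationality": "CH",
}

def minimal_disclosure(credential, requested_fields, today=None):
    """Release only the fields a verifier asked for.

    As a further minimization step, an age-over-18 request is answered
    with a boolean predicate instead of the raw birth date.
    """
    today = today or date.today()
    response = {}
    for field in requested_fields:
        if field == "age_over_18":
            bd = credential["birth_date"]
            age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
            response["age_over_18"] = age >= 18
        elif field in credential:
            response[field] = credential[field]
    return response

# A verifier that only needs an age check never sees the birth date.
print(minimal_disclosure(credential, ["age_over_18"], today=date(2026, 2, 18)))
# → {'age_over_18': True}
```

Because each response is computed per request, what the system sends out can be tightened or adjusted later without redesigning the credential itself, which is the variability argument above.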

3. Maintain Platform Independence

Platforms profit from lock-in, not user autonomy.

Though Swiss e-ID is being created by the Swiss government, it’s ultimately going to be dependent upon platform vendors such as Apple and Google and their app stores. Depending on how a digital identity is administered, platforms might be able to surveil its usage, block its usage, or engage in other anti-democratic actions. A creator of digital identity must avoid this to maintain technical sovereignty.

Addressing this problem requires asserting Swiss jurisdiction (or more generally, the jurisdiction of the organizations creating the identity system) over the platforms. Not only must anti-democratic actions be bluntly forbidden, but there also must be independent oversight with enforcement power. Those enforcement powers must be very strong, as we’ve already seen that platforms are willing to pay hundreds of millions of dollars to avoid following similar rules such as GDPR.

4. Require Duties for Non-Governmental Parties

The bigger risk isn’t government surveillance, it’s commercial profiling.

Ultimately, a digital ID system is going to be beholden not just to the platforms that enable it, but also to the parties that utilize it, and most of those are going to be commercial entities. Those commercial entities could misuse a user’s data after they acquire it. This can damage commercial sovereignty.

To prevent this abuse requires setting duties on entities that use a digital ID system such as e-ID. These duties should include: purpose limitation, restricting use to specific needs; verify and forget, forbidding the storage of e-ID data; and unlinkability, preventing tracking across services or silently “pinging” a log when usage occurs. Generally, entities using a digital identity system must recognize a duty to the user as the principal holder of the identity. Again, strong enforcement will be required.

5. Implement Institutional Safeguards

Democratic oversight of digital power.

Swiss democracy requires both empowering government to protect citizens from private sector abuse AND constraining government overreach.

As I said in the introduction to these anchors, I think that Switzerland has a government that is actually trustworthy to manage digital identity. However, there’s a big but. That might not be true for other governments that adopt their systems. In addition, the extreme regime changes we’ve seen in the United States over the last decade suggest that we must also be concerned about future governments. To ensure institutional sovereignty requires that any digital identity system protect itself not just from platforms and businesses, but also from the very entity that’s administering it!

Addressing this issue requires a number of things. First, the digital identity must remain politically independent. If it’s governmental, like Swiss e-ID, that requires things like cross-party appointments and fixed terms for directors as well as data governance being totally separated from politics. Second, it requires that the identity governance have duties to the users such as transparent enforcement, guaranteed human review, and service-level commitments. Third, it requires real care be taken with revocation, as that might be the area of a digital identity where corruption could do the most damage. This should include more duties such as two-party authorization, guaranteed quick court review, and restoration of revoked identity pending appeal.

The Ultimate Vision

What does success ultimately look like for a digital identity system like Swiss e-ID?

These five anchors provide a checklist for success:

☑️ Real Choice: Digital and physical options remain equivalent.
☑️ Sustainable Architecture: Design is dynamic, supporting a 20-year future.
☑️ Technical Sovereignty: Platforms have democratic oversight.
☑️ Commercial Sovereignty: Businesses follow strict duties to users.
☑️ Institutional Sovereignty: Digital ID system ensures oversight of itself as well.

These goals help to ensure that agency over our digital identity remains with us, the people who are the principals behind those identities, and that’s my ultimate goal, whether a system is fully self-sovereign or, like Swiss e-ID, a government deployment.

If you are a member of the Swiss civil service working with e-ID, or another interested party, please feel free to mail me directly for a personal presentation and/or to answer questions on these topics.

For more, you can see my videos and slides from the October 2 presentation:

Swiss e-ID Meeting: Slides: Slides w/Annotations:

Tuesday, 14. October 2025

FIDO Alliance

Best Stablecoin Wallets for Everyday Use in 2025

The rise of stablecoins has transformed how we handle digital payments, cross-border transactions, and everyday financial activities in the cryptocurrency ecosystem. With stablecoins like USDT, USDC, and DAI gaining mainstream […]

The rise of stablecoins has transformed how we handle digital payments, cross-border transactions, and everyday financial activities in the cryptocurrency ecosystem. With stablecoins like USDT, USDC, and DAI gaining mainstream adoption, choosing the right stablecoin wallet has become crucial for anyone looking to navigate the digital economy efficiently. What makes walllet.com revolutionary is its seedless security approach. Unlike conventional wallets that require users to manage complex 12 or 24-word seed phrases, walllet.com uses institutional-grade biometric security powered by proven technologies from Apple, Google, and the FIDO Alliance.


Authenticate 2025: Day 1 Recap

By FIDO staff Authenticate 2025, the FIDO Alliance’s flagship conference, kicked off day one on strong footing as passkey adoption continues to grow. The first day of Authenticate 2025 was […]

By FIDO staff

Authenticate 2025, the FIDO Alliance’s flagship conference, kicked off day one on strong footing as passkey adoption continues to grow.

The first day of Authenticate 2025 was loaded with insightful user stories, sessions on how to improve passkey adoption and technical sessions about the latest innovations.

Mastercard: Reimagining Online Checkout with Passkeys

Mastercard presented their ambitious vision to bring contactless payment-level security and convenience to online transactions through passkeys. The company is tackling three major e-commerce pain points: fraud from insecure authentication methods, cart abandonment and false declines of legitimate transactions. 

“There is no secret for this audience that one-time passwords are largely insecure and subject to phishing attacks,” Jonathan Grossar, Vice President of Product Management at Mastercard said. “So this is one big problem that we’re trying to address.”

Mastercard’s approach includes linking passkeys to payment card identities through bank KYC verification, adding device binding layers to meet regulatory requirements like PSD2, and ensuring banks retain control over authentication decisions even when Mastercard acts as the relying party on their behalf.

“When you have a passkey, that’s very easy, you can use it right away, and we see the conversion is just fantastic,” Grossar said.

Passkey Mythbusters: Short Takes on Common Misunderstandings

As a relatively new technology, there are still a good deal of misunderstandings about passkeys.

In an engaging session led by Nishant Kaushik, CTO of the FIDO Alliance, Matthew Miller, Technical Lead at Cisco Duo and Tim Cappalli, Sr. Architect, Identity Standards at Okta debunked several key misconceptions about passkeys including:

Misconception #1. Passkeys are stored in the cloud in the clear: The session clarified that passkeys are not stored in plain text. Reputable credential managers use strong end-to-end encryption, so even when passkeys are synced through the cloud, service providers cannot access the actual keys.

Misconception #2. Passkeys lock users into specific vendor ecosystems: The panel explained that new standards like the credential exchange protocol (CXP) and credential exchange format (CXF) enable secure transfer of passkeys between managers. 

Misconception #3. Phishing resistance depends solely on the relying party ID: Presenters emphasized that true phishing resistance comes from verifying the origin of authentication requests, not just matching the relying party ID. Proper server-side origin checks are essential for security.

Misconception #4. Cross-device passkey use enables remote attacks: The panel showed that cross-device authentication relies on proximity checks like Bluetooth, which prevent attackers from authenticating remotely even if they possess a QR code.

Misconception #5. Passkeys are not suitable for enterprise use: The panel highlighted that managed credential managers can offer strong policy control and high assurance for workforce applications, and that flexible management models fit both personal and enterprise contexts.

Misconception #6. Device management is always required for secure workforce passkeys: It was clarified that organizations can provide managed credential managers that enforce policies without requiring complete device management, allowing for greater flexibility.

Misconception #7. Passkeys cannot be used in mixed cloud and on-prem environments: The discussion explained that the right identity provider solutions and federation strategies can enable passkeys across a variety of application types.
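
The origin check behind Misconception #3 can be sketched server-side. This is an illustrative Python fragment rather than a full WebAuthn verifier; the `EXPECTED_ORIGINS` allowlist and the helper names are hypothetical.

```python
import base64
import json

# Hypothetical allowlist of origins this relying party accepts.
EXPECTED_ORIGINS = {"https://example.com", "https://app.example.com"}

def b64url_decode(s: str) -> bytes:
    # Restore padding that base64url encodings commonly strip.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_client_data(client_data_b64: str, expected_challenge: str):
    """Check type, challenge, and origin of WebAuthn clientDataJSON.

    Matching the relying party ID alone is not enough; the server must
    also confirm the request came from an origin it actually controls.
    """
    data = json.loads(b64url_decode(client_data_b64))
    if data.get("type") != "webauthn.get":
        return False, "unexpected ceremony type"
    if data.get("challenge") != expected_challenge:
        return False, "challenge mismatch"
    if data.get("origin") not in EXPECTED_ORIGINS:
        return False, "origin not allowed"
    return True, "ok"

# A proxied/phished request fails even with a valid challenge.
challenge = "dGVzdC1jaGFsbGVuZ2U"
spoofed = base64.urlsafe_b64encode(json.dumps(
    {"type": "webauthn.get", "challenge": challenge,
     "origin": "https://evil.example.net"}).encode()).decode()
print(verify_client_data(spoofed, challenge))  # → (False, 'origin not allowed')
```

The design point matches the panel’s argument: even a syntactically valid, correctly challenged assertion is rejected when it originates from a site the server does not recognize.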

What’s New in FIDO2: The New Features in WebAuthn and CTAP

There’s a lot going on with the underlying FIDO standards.

In his session, Nick Steele, Identity Architect at 1Password detailed the latest FIDO2, CTAP2.2 and WebAuthn updates. Steele explained how these new standards offer easier adoption, better security, and a smoother user experience for both enterprises and individuals.

Key technical improvements:

Hybrid transport for flexible authenticator connections
Signals API for better credential management
Conditional passkey enrollment and improved autofill UI
Stronger encryption and HMAC secret extension
Broader support for smart cards and related origins

“We really want to increase the risk signalling and the trust that enterprises can get in a single go from a passkey,” Steele said.

Credential Exchange in the Wild

One of the key misconceptions about passkeys is that they lock users into a particular platform. 

Among the reasons why that’s not accurate is the Credential Exchange format effort which was detailed in a session led by Rene Leveille, Sr. Security Developer at 1Password.

Leveille explained how the credential exchange format is designed to help password managers understand and transfer numerous credential types, making it easier for users to migrate securely between different services. He highlighted how this format, paired with a secure protocol, is the foundation for cross-platform compatibility.

Leveille outlined recent progress, including the move from early drafts to a proposed industry standard in August 2025. He discussed how both Apple and Android platforms have introduced APIs that are paving the way for seamless transfers between apps. 

Emphasizing the importance of this work, Leveille stated, “It is an extremely easy way to migrate from one credential manager to another and it is secure.”

From the Trenches: eBay

Among the earliest adopters of passkeys is eBay, which has a long history with FIDO specifications.

Ilangovan Vairakkalai, Senior Member Technical Staff at eBay detailed his organization’s journey and how it has managed to increase adoption.

“Every percentage point we gain in Passkey adoption is another user freed from password frustration,” Vairakkalai said.

Passkey adoption among mobile and native app users has climbed to an impressive 55% to 60%, reflecting how intuitive, nearly invisible authentication is a win for users. Desktop adoption, while more modest at around 20%, is steadily rising as eBay continues to innovate and collaborate with browser and device makers. 

From the Trenches: Uber

Reducing user friction is a primary reason why Uber has embraced passkeys.

Ryan O’Laughlin, Senior Software Engineer at Uber Technologies detailed his organization’s journey to deploy passkeys as a secure and user-friendly login option across its global consumer platform. 

While there was some quick success there were also some early challenges. Despite passkeys offering faster and more secure logins compared to passwords, many users continued using traditional sign-in methods, raising concerns about adoption and the prevalence of phishing risks.

To address these challenges, Uber introduced usability improvements such as clearer entry points for passkey login and proactive prompts encouraging registration. Experiments showed that enrolling users right after account sign-up or login led to a marked increase in adoption.

The company also piloted features like selfie-based account recovery, aiming for secure, phishing-resistant options as part of its broader vision for a passwordless future.

“Passwords just don’t really work for our platform. People forget them,” O’Laughlin said. “There is a very realistic future where we don’t have passwords at all.”

From the Trenches: BankID

In Norway, the BankID system has been around for over two decades, providing a uniform authentication system for the country’s citizens.

Heikki Henriksen, Technology Partnership Manager, Stø AS (BankID BankAxept in Norway) explained that the BankID system started off with hardware devices but in recent years has made a move to mobile, software based approaches.

BankID began moving to passkeys after most users had adopted the BankID app. The transition away from SMS-based authentication finished in 2023. Passkeys were introduced quietly—users were not told about the technical change but were moved to the stronger, phishing-resistant credentials through regular app updates.

“We never bothered talking about passkeys, we got over half of the Norwegian population to use passkeys without ever using the term passkey,” Henriksen said. “People don’t know what passkeys are. They don’t need to understand it either. So they just use Bank ID and for us technical people we know that passkeys are running the tech behind it.”

Keynotes: FIDO Alliance Details the Path Forward

A highlight of every Authenticate event is the keynote address from Andrew Shikiar, Executive Director of the FIDO Alliance.

As part of his Day One keynote, Shikiar detailed the past, present and future of the organization he leads and the standards it develops.

“Our internal estimates point to over 3 billion passkeys securing consumer accounts – actual passkeys in use,” he said. “That’s a massive number, 3 billion in less than three years time.”

Shikiar also revealed new data from a new report, the Passkey Index, which aims to help quantify the impact of the technology. Among the standout figures:

An average 93% sign-in success rate using passkeys, which is more than double that achieved with other methods.
A 73% decrease in login time when using passkeys.
Up to an 81% reduction in login-related Help Desk incidents reported by some organizations.

No technology conversation in 2025 is complete without mention of AI and Shikiar didn’t disappoint. He noted that the FIDO Alliance is actively addressing agentic AI by launching targeted initiatives including the creation of a subgroup focused on agentic commerce, aiming to ensure secure authentication for human-authorized agents.

“We spent the past dozen years or so contemplating how to prevent bots from authenticating, and now we have to figure out how to enable them to authenticate,” he said.

Looking ahead, the need to eliminate knowledge-based recovery methods and improve user experience was stressed. Shikiar also talked about emerging efforts for digital credentialing, with FIDO Alliance developing foundational standards and certification programs to advance the digitization of identity documents and secure mobile credentials.

“We will create foundational specifications that are applicable to the market, building from CTAP to create a new protocol for cross device credential presentation, we’ll focus on enablement and usability,” Shikiar said.

Keynotes: Google Securing the Future of Account Management

Google’s Authenticate 2025 keynote focused on how account security and user experience are improving with the adoption of passkeys. 

With more than a billion users now signed into Google services using passkeys, it is clear these solutions are quickly moving into the mainstream. Chirag Desai, Product Manager at Google emphasized that passkeys make the sign-in process faster and easier for users and provide new opportunities for businesses looking to enhance safety and streamline account access.

“Just as the world moved from horses and carriages to cars and now even self-driving cars, we as an industry need to help our customers do the same thing,” Desai said. “We need to help make that transition from passwords to passkeys, with minimal friction.”

Beyond just passkeys for authentication Rohey Livne, Group Product Manager at Google addressed the critical role of digital credentials for account creation and recovery. These digital, device-bound documents offer stronger protection than emails or SMS, enabling selective disclosure and simplifying verification. They allow organizations to move beyond fragile legacy methods and create a fully secured account lifecycle.

“We’re not really solving account creation and account recovery with passkeys,” Livne said. “And so we are essentially trying to look at how the entire account lifecycle could be aided with digital credentials.”

Keynotes: Apple Details How to Get the Most Out of Passkeys

Apple is all in on passkeys. 

“Simply put, the world would be a better place if the default credential, the one that we all reached for first, was a passkey instead of a password,” Ricky Mondello, Principal Software Engineer at Apple said.

Mondello detailed multiple approaches that Apple is using to accelerate passkey adoption including:

Account Creation API (iOS/Mac apps): Pre-fills user information (name, email/phone) to create new accounts with passkeys in one step, avoiding passwords entirely from the start.
Automatic Passkey Upgrades: Seamlessly adds passkeys to existing password-based accounts without showing upsell screens when users sign in with their password manager. Already supported on Apple platforms and Chrome desktop.
Prefer Immediately Available Credentials: Shows users their saved credentials (passwords or passkeys) when opening an app, eliminating the “which button do I press?” problem.

The most provocative message centered on security. Mondello argued that simply adding passkeys alongside passwords doesn’t deliver true phishing resistance. Organizations must plan to drop passwords entirely for accounts with passkeys.

“The hard truth is that to actually deliver the phishing resistance benefit to any given account, all phishable methods of signing in or recovering it need to be eliminated or otherwise mitigated,” Mondello said.

Get Ready for Day 2!

Day 2 will have even more great content across multiple tracks, with no shortage of user stories. Look for user stories from TikTok, Roblox, Microsoft, Docusign and many others, alongside technical insights for implementation. Not registered? Don’t miss out! Attend remotely and access all previous sessions on demand, and attend days 2 and 3 live via the remote attendee platform! See the full agenda and register now at authenticatecon.com.


FIDO Alliance Launches Passkey Index, Revealing Significant Passkey Uptake and Business Benefits

Passkey Index provides a composite view of passkey utilization and business impact data from leading online service providers CARLSBAD, Calif. – The FIDO Alliance today launched the Passkey Index, revealing […]

Passkey Index provides a composite view of passkey utilization and business impact data from leading online service providers

CARLSBAD, Calif. – The FIDO Alliance today launched the Passkey Index, revealing significant passkey uptake and benefits for online services offering passkey sign-ins. Launched in partnership with Liminal, the Passkey Index provides a composite view of data from leading service providers on the adoption, utilization and business impacts of passkeys.

The Passkey Index was launched today in concert with Liminal’s Passkey Adoption Study 2025, a survey of 200 organizations either actively deploying passkeys or committed to doing so in the near future. Together, these new resources provide the most comprehensive view of passkey deployments yet, and strategic intelligence for organizations wanting to modernize and de-risk their authentication technology.

The Passkey Index is available at FIDOalliance.org and Liminal’s Passkey Adoption Study 2025 is available at Liminal.co

Passkey Index Companies Report Passkey Sign-in Rates and Benefits 

The Passkey Index comprises data, across eight utilization and performance areas, from companies that have deployed passkeys for one to three years, including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and TikTok.

The Index reveals that passkey eligibility is high: FIDO Alliance member companies contributing to the Index report that an average of 93% of accounts are now eligible for passkeys. The percentage of accounts with a passkey enrolled is over a third (36%), while more than a quarter (26%) of all sign-ins now leverage passkeys. 

Passkey Index companies also reported strong business benefits with passkeys: 

Passkeys reduce sign-in time by 73% compared to other authentication methods, averaging just 8.5 seconds per login. Traditional approaches including email verification, SMS codes, and social login options took an average of 31.2 seconds.
Passkey sign-ins have a 93% success rate, compared to 63% for other methods; success rates 30 percentage points higher mean fewer failed attempts and greater throughput at critical checkpoints.
The Index also revealed that passkey adoption led to an 81% reduction in login-related help desk incidents. Reducing help desk burden allows IT and support teams to focus on higher-value issues.
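
The headline figures are internally consistent; a quick check of the arithmetic, using the values reported by the Index:

```python
# Average sign-in time in seconds, as reported by the Passkey Index.
other, passkey = 31.2, 8.5
print(f"{(other - passkey) / other:.0%}")  # → 73%

# Success rates: 93% for passkeys vs 63% for other methods
# is a gap of 30 percentage points.
print(93 - 63)  # → 30
```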

“The data in the Passkey Index marks the first time we have been able to measure the actual utilization and performance of passkeys. Thanks to this data from several early-adopting organizations, we can confidently say that passkeys are available, being used, and providing quantifiable benefits to deploying organizations,” said Andrew Shikiar, CEO of the FIDO Alliance. “The FIDO Alliance intends to grow this program over time as a benefit to service providers within our membership, a guideline for newer implementers and an industry benchmark to track ongoing growth of passkey utilization over time.”

Liminal’s Passkey Adoption Study 2025 Validates Passkey Index by the Broader Industry 

Liminal’s Passkey Adoption Study 2025 complements the Passkey Index with a look at the industry outlook on passkeys. The survey of 200 IT professionals either actively deploying passkeys or committed to doing so highlights how buyers are turning to passkeys to modernize and de-risk authentication. It revealed the following key points:

Passkeys are a strategic priority that delivers, with 63% of all respondents ranking passkeys as their top authentication investment priority for the next year. The majority (85%) of those that have already adopted passkeys report strong satisfaction with both their decision to implement and the business results they’ve seen so far.
Organizations expect passkeys to deliver both ROI and risk reduction, as 63% of respondents believe strong authentication methods like passkeys can create cost savings and efficiency gains, while they are also expected to reduce risk (56%) and fraud (58%).
Passkeys deliver behavioral and business change. After passkeys had been deployed, a significant decline in password usage was reported by 43% of respondents, while the majority (89%) said more than half of their users are expected to opt in to passkeys after being prompted, demonstrating that adoption scales quickly after deployment.
Organizations are willing and ready to adopt a fully passkey-based strategy, with almost all (97%) respondents reporting that their organization is willing to fully transition to a passkey-based authentication strategy in the future. Readiness to adopt is also widespread, with 86% of respondents stating their organization’s infrastructure is already fully or mostly prepared to support passkey authentication.
They perform even better than expected. Nearly half of current implementers (49%) report adoption rates exceeding 75%, outperforming initial expectations.

Shikiar added: “It is in every company’s strategic interest to reduce reliance on passwords, and this study clearly illustrates that passkeys are doing exactly that: delivering tangible business benefits through enhanced sign-in success, improved user experience and decreased risk.”   

Passkey Index methodology

In collaboration with Liminal, the FIDO Alliance conducted a confidential survey of nine of its FIDO Alliance member organizations to gain a deeper understanding of how passkeys are being deployed across their ecosystem and the outcomes being observed. This report offers an aggregate, anonymized view of current implementation patterns, opt-in performance, utilization, and organizational efficiency gains.

Liminal’s Passkey Adoption Study 2025 methodology 

Liminal conducted a proprietary survey of authentication decision-makers to understand how passkeys are being adopted, implemented, and evaluated across digital platforms. The research focuses on 200 organizations that have already deployed passkeys or are planning to adopt them within the next two years. This study examines key performance indicators, including adoption rates, opt-in behavior, user satisfaction, implementation challenges, and buyer priorities. It offers a data-driven perspective on how passkeys are performing in the market today and where the most important opportunities for improvement and growth exist.

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.

Contact
press@fidoalliance.org


Six Months of Passkey Pledge Progress

In April we invited organizations around the world to take the Passkey Pledge, a voluntary commitment to increase awareness and adoption of passkey sign-ins to make the web safer and […]

In April we invited organizations around the world to take the Passkey Pledge, a voluntary commitment to increase awareness and adoption of passkey sign-ins to make the web safer and more accessible.

Passkey adoption is growing rapidly, with tens of billions of user accounts now equipped with the option to use a passkey instead of relying on passwords. We launched the Passkey Pledge to help rally the industry and accelerate adoption even further, helping even more organizations to realize the dual benefits of heightened security and a frictionless user experience.

When we launched the Pledge, we set out five goals to suit a range of organizations and use-cases, with the aim of achieving them over the next 12 months. Over 200 companies responded to our call and took the pledge. As we reach the halfway point in this journey, there have already been some incredible success stories and we wanted to highlight and share some of them with the community for inspiration.

Atlancube: The company’s commitment to the Passkey Pledge “accelerated our internal development and certification timelines” culminating in its product passing interoperability testing and successfully completing both FIDO2 CTAP2.1 and U2F L1 authenticator certifications. Primarily, this will help Atlancube prepare to launch a certified hardware security key that supports passkey sign-ins. It also helped increase awareness of the importance of passkeys among its engineering and business teams, strengthening cross-functional collaboration.

Dashlane: The password manager and credential security platform has upgraded the security of user passkeys it stores, by signing passkey challenges in a remote secure environment. The company has also integrated FIDO2 security keys into its product, replacing the master password with a hardware-backed secret to encrypt the user’s vault.

First Credit Union: The member-owned financial institution in New Zealand with over 60,000 members partnered with Authsignal to implement FIDO Certified passkey infrastructure. It adopted passkeys as the only approach that struck the right balance between security, usability and accessibility for its diverse membership base. Since rolling out passkeys, 58.4% of its members have adopted the new authentication experience, with 54.5% of all authentications now using passkeys. In addition, over 23,500 members enrolled in multi-factor authentication. Read more in the First Credit Union case study.

Glide Identity: Glide Identity has achieved FIDO certification for its new products, joining the ranks of certified providers delivering standards-based authentication solutions. This certification validates Glide Identity’s commitment to interoperability and positions the company to serve organizations worldwide seeking reliable, FIDO-compliant authentication solutions.

HYPR: Took the Passkey Pledge to help realize a public good in eliminating shared secrets and passwords. The company has already delivered on its pledge, deploying passkeys at scale to Fortune 500 enterprises and beyond, including two of the four largest US banks.

LY Corporation: Made its Passkey Pledge to contribute to the industry-wide adoption of passkeys. During the last six months the company has increased the number of touchpoints where passkey sign-in is triggered, as well as publishing educational content to improve user literacy about passkeys. This has resulted in passkey sign-in rates improving to 41%, and reduced SMS transmission costs by replacing SMS OTPs with passkeys.

NTT DOCOMO: Has made significant progress on its Pledge to demonstrate actions that measurably increase the use of passkeys by users when signing into their services. The company has continuously improved the user experience by improving and refining messages on passkey enrollment and error pages to make them more customer friendly. NTT DOCOMO is confident of reaching its target to increase passkey usage ratio by 10% within the year since taking the Pledge.

Secfense: Has enabled support for passkey sign-ins across enterprise environments without requiring changes to existing applications. The company has implemented large-scale passwordless rollouts in highly regulated sectors, including banking and insurance, completing projects in just a few months. These deployments replaced passwords with phishing-resistant FIDO authentication, without modifying existing systems or disrupting users, proving that full passkey adoption is possible even in legacy infrastructures.

Thales: Over the last six months, Thales has extensively promoted the benefits of passwordless authentication and passkeys to its customer base and other organizations through sponsored events, workshops, webinars and other channels. This is part of the company’s long-standing commitment to fight against phishing and improve both security and user convenience.

We’d like to extend a big thank you to all those who signed up to the pledge and for sharing an early snapshot of the progress you’ve made. We’ll provide more insights and updates as the Passkey Pledge moves into the final 6-month stretch. It’s not too late to take the Pledge this year – we’ve already seen how much can be achieved in such a short space of time. If you’ve already taken the Pledge, tell us about your progress as we’d love to share your success with others in the future.


Passkey Index 2025

FIDO has launched the Passkey Index, which provides a composite view of data from leading service providers – including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and […]

FIDO has launched the Passkey Index, which provides a composite view of data from leading service providers – including Amazon, Google, LY Corporation, Mercari Inc., Microsoft, NTT DOCOMO, PayPal, Target and TikTok – on the adoption, utilization and business impacts of passkeys. It reveals significant passkey uptake and benefits for online services offering passkey sign-ins. Read the full report here.

Read the Report


Project VRM

Gathering the MyTerms Troops

MyTerms (IEEE P7012) is on track to be ProjectVRM’s biggest achievement—and maybe the biggest thing on the Net since the Web. I’m biased, but I believe it. And that track runs through three events next week: VRM Day, on Monday October 20. IIW, the Internet Identity Workshop, from Tuesday to Thursday, October 21 to 23. […]

MyTerms (IEEE P7012) is on track to be ProjectVRM’s biggest achievement—and maybe the biggest thing on the Net since the Web. I’m biased, but I believe it.

And that track runs through three events next week:

VRM Day, on Monday, October 20.
IIW, the Internet Identity Workshop, from Tuesday to Thursday, October 21 to 23.
AIW, the Agentic Identity Workshop, on Friday, October 24.

All three are at the Computer History Museum in Silicon Valley. Register at those links. VRM Day is free. The others are relatively inexpensive.

Here is some of what’s going on around MyTerms.

The draft is complete and on track for publication early next year. But work can start in the meantime.

Consumer Reports is with us on this. Joyce and I met with them in New York on Monday. They’ll be there next week.

Sir Tim Berners-Lee, who invented the Web, devotes a chapter of his new book, This is for Everyone, to the intention economy. He also credits my book by that title (which reported on ProjectVRM progress in 2012) with inspiring his Solid Project. Joyce and I met with Tim last Tuesday as well.

Thanks to work by Iain Henderson, Liz Brandt, and others, there are allied efforts going on in Europe, most notably with MyData.

We can see good things starting to happen on the enterprise side, thanks especially to the recent writings of Nitin Badjatia. Dig When Customers Set the Terms: How the Intention Economy and ‘MyTerms’ Enable the Great Unwinding.

Kwaai and members of the open source personal AI community are on the case as well.

Iain and Nitin will also be at the events next week. So will others from the MyTerms working group, Kwaai, and other allied efforts.

We plan to have VRM Day online by Zoom (or the equivalent—we’ll let you know); but we’ll get the best results if you’re there in person.

Hope you can make it, and see you soon.

 

Monday, 13. October 2025

EdgeSecure

Creating A Mentoring Culture Centered on Joy

Creating a Mentoring Culture Centered on Joy Photography from MENTOR Newark’s Grand Opening ceremony provided by Fresco Arts Team. Photography and Curation of the MENTOR Newark facilities provided by Tamara… The post Creating A Mentoring Culture Centered on Joy appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Creating a Mentoring Culture Centered on Joy

Photography from MENTOR Newark’s Grand Opening ceremony provided by Fresco Arts Team.
Photography and Curation of the MENTOR Newark facilities provided by Tamara Fleming Photography.

In a city full of potential, MENTOR Newark is creating pathways for young people to thrive in their community and turn their dreams into reality. As the New Jersey affiliate of MENTOR, the National Mentoring Partnership, MENTOR Newark connects youth in Newark, New Jersey, to caring mentors who provide guidance, support, and positive role modeling. “Our mission starts with three words: joy, purpose, and opportunity,” shares Thomas Owens, Executive Director of MENTOR Newark. “Too often, when working with students—especially in communities like Newark—the focus is on what we’re protecting them from: harm, crime, or failure. But that approach can carry an assumption that without us, that’s where they’d end up. What we’re really trying to do is build something different. I always say, it’s like creating a garden. First, you till the soil, make sure the nutrients are there, and water it. Then, when the plants grow, your job is simple: keep watering them and give them light. That’s what mentoring should be. It’s not about saving kids; it’s about nurturing them so they can grow into who they’re meant to be.”

“MENTOR Newark is currently collaborating with the district on what the National Mentoring Partnership calls the “National Mentoring Connector,” a system designed to connect quality mentoring programs with schools and communities efficiently. This approach represents a shift from direct service to empowering others to scale mentoring impact throughout the city.”

– Thomas Owens
Executive Director, MENTOR Newark

Empowering Youth with Opportunities
For Owens, a commitment to community service began at just eleven years old in New York. “My father and his crew started tenant patrols in the housing projects, and I was the one who always tagged along,” he recalls. “I’d be in basements in The Bronx, sitting through tenant meetings with him. I was always by his side, and by the time I was fourteen, I knew all the boroughs.” That early exposure shaped a guiding belief: “If you stay committed and do the work, you can go wherever you want to go. And that same work becomes your protection and speaks for you, even when you’re not in the room.”

Owens went on to run nonprofits in New York and later became a founding member of the Eagle Academy for Young Men of Newark, the only all-boys public school in the city of Newark. “We started the school in 2012, and I remained there until my first class of sixth graders that we recruited graduated from the 12th grade. I then moved over to my current role at MENTOR Newark, formerly Newark Mentoring Movement. When I joined, I aligned the organization with the National Mentoring Partnership, and I’ve been leading it ever since. It’s given me the chance to continue this work in a way that feels deeply personal and creative. People often ask me if I’m ever going to start an art program, and I tell them, this is my art. Working with young people, building this movement, that’s my creative work.”

MENTOR Newark works with the school district, local nonprofits, and other partners to provide training, professional development, and support for the people doing the mentoring work. “If I go into a school and start a mentoring program myself, I might be able to mentor 20 kids,” says Owens. “But if I support 20 schools in building their own programs, we can reach hundreds and exponentially grow the level of mentoring around the country. This past January, we had the opportunity to take part of our team to the National Mentoring Summit in Washington, D.C. We presented our work in front of about 2,000 people during one of the plenary sessions. We showcased what we’re doing here in Newark and discussed capacity building and how to support the people and systems that make mentoring sustainable and impactful.”

The MENTOR Newark team also secured an appropriation through Senator Cory Booker’s office and approached the Newark school district with an idea. “We suggested using a portion of this money to create a Director of Mentoring Services position within the district,” explains Owens. “The collaboration of funds and shared vision with Superintendent León led to the creation of the first-ever role in the State of New Jersey dedicated solely to mentoring at the district level. We now have someone, Jermaine Blount, who works directly with the district and attends board committee meetings, which has made it so much easier to align mentoring with the district’s everyday work. This joint approach helps both us and the school district deliver on the mission of creating joy and a mentoring culture for young people.”

“We want to encourage them to find their own joy, because my joy isn’t their joy, it’s about discovering what lights them up. We must also approach our work with intensity and purpose, while creating meaningful opportunities for joy. That’s the cycle we’re building: joy inspires purpose, purpose creates opportunity, and opportunity brings us back to joy. We want to show our students what’s possible, give them the tools and the confidence, and then let them take off on their own. Once you wind them up, they’re ready to run.”

“Too often, when working with students—especially in communities like Newark—the focus is on what we’re protecting them from: harm, crime, or failure. But that approach can carry an assumption that without us, that’s where they’d end up. What we’re really trying to do is build something different. I always say, it’s like creating a garden. First, you till the soil, make sure the nutrients are there, and water it. Then, when the plants grow, your job is simple: keep watering them and give them light. That’s what mentoring should be. It’s not about saving kids; it’s about nurturing them so they can grow into who they’re meant to be.”

– Thomas Owens
Executive Director, MENTOR Newark

Building Capacity to Expand Mentoring
The idea of building capacity within the mentoring community developed organically as MENTOR Newark worked closely with local organizations. Owens explains that many groups wanted to bring mentoring programs into schools but faced significant barriers. “When I asked why they couldn’t get into schools, the answer was often that they lacked essential components like a curriculum, background checks, or proper training. To address these gaps, MENTOR Newark stepped in to provide training, assist with background checks, and help formalize mentoring programs.”

Through a partnership with the Newark school district, MENTOR Newark could then vouch for these programs and introduce them as credible options to be integrated into schools. Although funding is often limited, Owens emphasizes that positioning these programs correctly enables them to secure their own funding over time. “MENTOR Newark is currently collaborating with the district on what the National Mentoring Partnership calls the “National Mentoring Connector,” a system designed to connect quality mentoring programs with schools and communities efficiently. This approach represents a shift from direct service to empowering others to scale mentoring impact throughout the city.”

In reflecting on what he has learned through the process of building and growing MENTOR Newark, Owens says clarity and intentionality matter. “I’ve learned to ask a lot of questions, and we must be specific and intentional about our generosity. While community interest and requests for space or support are constant, we have to make sure every decision aligns with MENTOR Newark’s core mission of building a stronger mentoring ecosystem.”

Through a collaboration with New Jersey Performing Arts Center (NJPAC), MENTOR Newark helped design a mentoring curriculum tailored for NJPAC mentors who work with younger students. “They wanted to do some small, Tiny Desk–style concerts in our main area,” shares Owens. “We also partnered with the Dodge Poetry Festival which will host two interactive events at the mentoring center that brings in poets and facilitates workshops with young people. We’re seeing more and more Historically Black Colleges and Universities (HBCUs) graduate chapters choosing to hold meetings and events at MENTOR Newark. You walk in on any given day and I’m in my office in a meeting, someone else is mentoring in another room, there's a grad chapter meeting happening down the hall, and the kids are running the whole thing.”

MENTOR Newark partnered with the Newark school district to bring in middle school students every Tuesday through Thursday and offer them lessons on graphic arts, hospitality, and the culture of HBCUs, and introduce them to peer leaders. “High school students, many of whom are part of the MENTOR Newark program, are the ones delivering those lessons, using a curriculum co-developed with support from consulting firm McKinsey & Company. Since launching in February 2025, MENTOR Newark youth have led over 40 sessions, reaching more than 1,000 middle schoolers across the city, and our high schoolers were leading the way. After the conclusion of one of the sessions, I watched our students sanitizing chairs, mopping floors, and cleaning up the space, without being asked. They don’t do it because we told them to, they do these tasks because they feel ownership. This is their space, and they take it seriously.”

“I’ve been here three years, and the space keeps getting better. It feels like it was made for high school students and we’re still adding our own creativity to it. Growing up in Newark, we don’t always get opportunities like this or get to be in spaces like I’m in now. What makes it different here is the trust. They see potential in me and have given me a new perspective on life. This means everything, because outside of here, people still treat me like a kid, even though I’m 17. Since I joined MENTOR Newark, I’ve opened up more. I’ve been applying to college classes, getting into programs, and that’s because of what I’m doing here. I’m forever grateful.”

– High School Student

A New Space to Call Home
In 2025, a partnership between MENTOR Newark and Edge began with a shared opportunity. Edge had office space available after transitioning to fully virtual, and MENTOR Newark was looking for a new home to grow its mission. “We had been working with a realtor to sublease the space without much success, until a mutual business contact introduced us to Thomas and shared details about the organization's mission and their need for a physical space,” shares Amy Olavarria, Executive Director Human Resources and Administration, Edge. “After all the normal formalities, I was able to meet with Thomas and hand him the keys to their new office space. It was an amazing feeling. I’ve met several of the young adults in the program, and they’re all so mature, motivated, and well-mannered. It’s inspiring to see, and MENTOR Newark is truly an incredible organization.”

Prior to the move, MENTOR Newark’s headquarters were located on a lower level of the building, where several students played an active role in building and renovating the space. “Our students were deeply involved in creating the previous space, so I wasn’t sure how they would connect with the new one,” admitted Owens. “But kids always surprise you. When they walked in, they immediately recognized the potential and were genuinely excited. They appreciated the new amenities, including the kitchen, private bathrooms, and other features we didn’t have before. Most importantly, they now have their own dedicated area, called the Student Office, which is exclusively theirs.”

Additional support with the initiative was provided by Ashley Mays and The Newark Alliance. The organization was instrumental in helping them negotiate the agreement with Edge. As a major partner, they’ve also provided critical support that enables MENTOR Newark to maintain their new downtown space. Owens knows it takes a village to bring an initiative like this to the finish line, “We are deeply grateful to City of Newark Mayor Ras Baraka, Newark City Council President C. Lawrence Crump, and local developer Siree Morris for their continued encouragement and advocacy for our mission and the students of Newark. Their support inspires us to keep pushing forward.”

That sense of ownership extends beyond the students. At a recent open house, the broader community showed up to explore the new space. “People were moved, and saw the students taking responsibility for the space, engaging with guests, and even leading parts of the event,” shares Owens. “There was dancing, laughter, and a strong sense of pride. Many attendees were curious how it all came together, and when they realized it came through a partnership between two organizations, it made a real impression.”

“Edge is honored to have Thomas and his organization in this space,” adds Olavarria. “Even though MENTOR Newark and Edge are two nonprofits in different industries, we share a common goal: to help and serve the people in our communities and beyond. Working with MENTOR Newark also perfectly aligned with Edge’s brand promise of CONNECTED. COMMITTED. COMMUNITY. I attended their grand opening, and the atmosphere was one of home, community, and acceptance—regardless of age or background. It was a powerful feeling of unity and peace.”

“Edge is honored to have Thomas and his organization in this space. Even though MENTOR Newark and Edge are two nonprofits in different industries, we share a common goal: to help and serve the people in our communities and beyond. Working with MENTOR Newark also perfectly aligned with Edge’s brand promise of CONNECTED. COMMITTED. COMMUNITY. I attended their grand opening, and the atmosphere was one of home, community, and acceptance—regardless of age or background. It was a powerful feeling of unity and peace.”

– Amy Olavarria
Executive Director Human Resources and Administration
Edge

Inspiring New Perspectives
For one high school student, MENTOR Newark isn’t just a place to go—it’s a place that changed how they see themselves and their future. From upgraded spaces to life-changing mentorship, the program helped them feel seen, supported, and ready to grow. “I’ve been here three years, and the space keeps getting better. It feels like it was made for high school students and we’re still adding our own creativity to it. Growing up in Newark, we don’t always get opportunities like this or get to be in spaces like I’m in now. What makes it different here is the trust. They see potential in me and have given me a new perspective on life. This means everything, because outside of here, people still treat me like a kid, even though I’m 17. Since I joined MENTOR Newark, I’ve opened up more. I’ve been applying to college classes, getting into programs, and that’s because of what I’m doing here. I’m forever grateful.”

For an 18-year-old fellow, MENTOR Newark offers more than just hands-on experience, it also nurtures self-discovery and independence. “When we were located downstairs, we had the opportunity to learn how to put up sheetrock, how to caulk, and build the space from our mentor and MENTOR Newark team member David Byre Tyre—a professional artist and designer. This program opened me up to things I never imagined doing, especially not at home or in other programs. I’m learning how to be more independent, and they tell us to ‘be selfish.’ Not in a negative way, but be selfish about your goals. Know your dreams and go after them because no one else will do it for you. Whatever we want to do in school or career wise, they fully support our dreams. It feels like a family here and I’ve had opportunities I’d never get anywhere else.”

She continues, “We just had an event with Senator Cory Booker and were able to ask him questions face to face. As a young Black woman in Newark, that’s not something I ever expected. This place is shaping who I’m becoming. It’s showing me how to help my community and care for the people around me, and that’s the kind of leader I want to be.”

Learn more about MENTOR Newark at newarkmentoring.org.

The post Creating A Mentoring Culture Centered on Joy appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Sunday, 12. October 2025

Digital Identity NZ

DINZ Executive Council Elections & Annual Meeting 2025

Inspiring trust solutions that protect, empower, and help our people thrive At Digital Identity NZ (DINZ), we bring our stakeholders together to build confidence, connection, and opportunity. By bridging trust … Continue reading "DINZ Executive Council Elections & Annual Meeting 2025" The post DINZ Executive Council Elections & Annual Meeting 2025 appeared first on Digital Identity New Z

Inspiring trust solutions that protect, empower, and help our people thrive

At Digital Identity NZ (DINZ), we bring our stakeholders together to build confidence, connection, and opportunity. By bridging trust gaps and empowering ecosystems, we strengthen communities and open the door for New Zealanders to thrive on the global stage. As we approach the end of the year, it is time for nominations for the Council seats coming up for re-election. Each Council member is elected for a two-year term, with elections held annually, and results notified at the Annual Meeting in December.

Executive Council Nominations

There is now an opportunity to put yourself forward, or nominate someone else, for a role on the Digital Identity NZ Executive Council. This year we have vacancies for the following positions:

Corporate – Major (3 positions)
Corporate – Other (2 positions)
SME & Start-up (2 positions)

The nominees for the above positions must be from a Digital Identity NZ member organisation (including Government agencies) and belong to the same Digital Identity NZ Membership Group they are to represent on the Executive Council. If you are unsure of your organisation’s membership category, please email elections@digitalidentity.nz.

All nominations must be entered into the online form by 5pm, Monday 3rd November 2025.

Nomination form

Digital Identity NZ Executive Council roles and responsibilities include:

Direct and oversee the business and affairs of Digital Identity NZ.
Attend monthly Executive Council meetings, usually two hours in duration (video conferencing is available).
Drive towards achieving Digital Identity NZ’s strategic plan by participating in Digital Identity NZ working groups and projects.
Represent Digital Identity NZ at industry events and as part of delegations.
Assist in managing and securing members for Digital Identity NZ.
Where agreed by the Executive Council, act as a spokesperson for Digital Identity NZ on issues related to working groups or projects.
Be a vocal advocate for Digital Identity NZ.

Benefits of joining the Digital Identity NZ Executive Council

Contributing to Digital Identity NZ strategy with other passionate people who share your experience
Expanding your networks
Educating and empowering New Zealanders in the digital identity space

Online Voting

Voting will take place online in advance of the meeting, with the results announced at the Annual Meeting. Please refer to the Charter for an outline of Executive Council membership and the election process. Each organisation has one vote, which is allocated to the primary contact of the member organisation.

Annual Meeting

The Annual Meeting is scheduled for 10:00am on Thursday, 4th December 2025, and will be held via Zoom.

Register here

Notices and Remits

If you wish to propose any notices or motions to be considered at the Annual Meeting, please send them to elections@digitalidentity.nz by 5:00pm on Thursday, 13 November 2025.

Key Dates:

13 October: Call for nominations for Executive Council representatives issued to members
3 November: Deadline for nominations to be received
10 November: List of nominees issued to Digital Identity NZ voting members and electronic voting commences
13 November: Any proposed notices, motions, or remits to be advised to Digital Identity NZ
4 December: Annual Meeting, results of online voting announced

Background

From the beginning, we have asked that you consider electing a diverse group of members who reflect the diversity of the community we seek to support. We ask that you do so again this year.  The power of that diversity continues to shine through in the new working groups this year, particularly as we consider the importance of Te Tiriti, equity, and inclusion in a well-functioning digital identity ecosystem.

The Council has identified several areas where diversity, along with expertise in the digital identity space, could help us better serve the community. Nominations from organisations involved in kaupapa Māori, civil liberties, and the business and service sectors are particularly encouraged. We also encourage suggestions from young people within your organisations, as their viewpoint is extremely valuable and relevant to the work we perform. As an NZTech Association, Digital Identity NZ adopts its Board Diversity and Inclusion Policy, which you can read here.

The post DINZ Executive Council Elections & Annual Meeting 2025 appeared first on Digital Identity New Zealand.

Friday, 10. October 2025

The Rubric

Faster, Cheaper, and More Private: the Sequel IS Better! (did:btcr2, Part 2)

did:btcr2 is a censorship-resistant DID method using the Bitcoin blockchain as a Verifiable Data Registry to announce changes to the DID document. It improves on prior work by allowing: zero-cost off-chain DID creation; aggregated updates for scalable on-chain update costs; long-term identifiers that can support frequent updates; private communication of the DID document; private DID...
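For readers new to DIDs: a DID resolves to a DID document that lists the subject's public keys and how they may be used, and it is changes to this document that did:btcr2 announces via the Bitcoin blockchain. The JSON below is a minimal, hypothetical illustration of that shape following W3C DID Core conventions — the identifier suffix and key value are placeholders, not valid did:btcr2 syntax.

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:btcr2:example-identifier",
  "verificationMethod": [{
    "id": "did:btcr2:example-identifier#key-0",
    "type": "Multikey",
    "controller": "did:btcr2:example-identifier",
    "publicKeyMultibase": "zExamplePublicKeyValue"
  }],
  "authentication": ["did:btcr2:example-identifier#key-0"]
}
```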

Faster, Cheaper, and More Private: the Sequel IS Better! (did:btcr2, Part 1)

did:btcr2 is a censorship-resistant DID method using the Bitcoin blockchain as a Verifiable Data Registry to announce changes to the DID document. It improves on prior work by allowing: zero-cost off-chain DID creation; aggregated updates for scalable on-chain update costs; long-term identifiers that can support frequent updates; private communication of the DID document; private DID...

EdgeSecure

The SHI TeCHS Catalog Through EdgeMarket

The SHI TeCHS Catalog Through EdgeMarket Fast Track to Technology Procurement Technology’s role in the success of public sector organizations has never been more essential. To help the community keep… The post The SHI TeCHS Catalog Through EdgeMarket appeared first on Edge, the Nation's Nonprofit Technology Consortium.
The SHI TeCHS Catalog Through EdgeMarket

Fast Track to Technology Procurement
Technology’s role in the success of public sector organizations has never been more essential. To help the community keep pace with technological change and more readily access solutions that will drive organizational success, Edge created EdgeMarket to provide safe, simple, and smart procurements that deliver positive outcomes. Among these solutions is the Technology Catalog for Hardware, Software & Services (TeCHS) through SHI, providing one of the most comprehensive and forward-thinking technology hardware, software, and services procurement vehicles available in the U.S.

In the spring of 2021, Edge conducted a competitive request for proposal process (RFP) for a technology and services catalog provider with two main goals in mind. The first was to deliver massive scope, scale, and unsurpassed value in technology and service purchasing to Edge members and co-op participants everywhere in the country. The second goal was to harness the innovative capabilities and global capacity of an IT solutions partner who could deliver truly transformative solutions, built upon their extensive catalog, deep talent, scalable logistics, and state-of-the-art facilities. “Our awarded partner, SHI, did a fantastic job presenting capabilities to facilitate ease of purchasing with scope, scale, and value, but also some really outstanding strategies and delivery capabilities for truly transformative solutions,” shares Dan Miller, AVP, EdgeMarket. “The EdgeMarket TeCHS contract offers SHI’s full line of products, services, and solutions, and it’s no surprise that this solution is one of our most popular master contract vehicles in our co-op.”

To meet the procurement needs of a greater number of institutions, Edge architected the agreement to allow for adjusted terms and customization. “We want to help our members get the solutions they need, not just another component-based contract, hence the services dimension. We also wanted to introduce any additional terms and conditions that can help an organization improve upon the baseline agreement so they can move forward with clarity and confidence.”

Dan Miller Associate Vice President, EdgeMarket, Edge

A Smarter Approach to Technology Deployment
Before issuing the RFP, Edge spoke with member institutions, including Rutgers University, about the frustrations of state contracts and the procurement challenges they were facing. Sue Ryan, Strategic Sourcing Manager, Information Technology Services, Rutgers, was an important source of information as Edge began to develop the RFP. “Sue provided valuable insight into the process, where organizations would commonly have to self-assemble the sourcing of elements for a strategic move,” explains Miller. “We learned through COVID that our members and Edge needed to move on a dime, making key decisions and activating on those decisions effectively. We knew the criteria for the contract would be breadth, scope, and scale of solution, including hardware, software, and services. We also wanted a partner that had a vision for future pathways, and who was making investments to deliver transformative solutions. They needed the ability to adapt to any new adverse reality or take advantage of any new opportunity. Edge was so impressed with the SHI proposal, as they far exceeded our expectations.”

The TeCHS contract provides quick and easy access to hundreds of hardware, software, and service offerings that are conveniently organized into categories and groups. “SHI has a great catalog, deep talent, scalable logistics that are extremely impressive, and state-of-the-art facilities that they continue to invest in and grow from,” shares Miller. “All of these assets help our members gain access to tools that can fundamentally alter how they do business and enhance their day-to-day operations.”

Partnering with their customers, SHI helps organizations take a smarter approach to technology deployment and running efficient and effective IT operations. “TeCHS has really been a visionary step forward—a 21st-century technology contract,” shares Lou Malvasi, Senior District Manager, Strategic Education, R1 Universities, SHI. “This procurement contract isn’t just focused on large research universities, it’s designed to support institutions across New Jersey and nationally. Community colleges, K–12 districts, and state and local governments can all take advantage of the same streamlined procurement and deployment model. We’re actively working to expand awareness and adoption beyond the tri-state area so more organizations can tap into innovation and collaboration opportunities.”

Unifying Technology Procurement
The TeCHS catalog was designed to provide an all-encompassing solution that makes thousands of trusted technology partners and solutions available for streamlined cooperative procurement. “The TeCHS contract has really been a visionary step forward—a 21st-century technology contract,” says Malvasi. “State contracts, in many cases, are still very archaic in how institutions are allowed to procure. For projects that involve a combination of hardware, software, and services, traditional procurement methods can create unnecessary complications. Historically, you had to buy hardware on one contract with one reseller, software on another, and then find a third services contract to pull it all together. The EdgeMarket TeCHS catalog creates a one-stop shop, not just for procurement, but also for deployment. Institutions gain one point of accountability across the entire lifecycle of a project. We’ve heard from a number of university procurement teams how this contract is accelerating internal projects that used to be slowed down by outdated procurement models. This solution is not just more efficient; it’s enabling real progress.”

Rutgers is using the TeCHS contract to simplify and unify technology procurement across their institution. “TeCHS affords Rutgers the ability to develop IT strategies and then use one contract to source all elements required to support those strategies at lower cost and greater ease than any other contract, including State contracts—which we walked away from,” explains Ryan. “The TeCHS contract also allows us to avoid all the terms and conditions that you constantly have to negotiate. Additionally, Rutgers has a catalog through our marketplace system and TeCHS affords us better pricing. Because of this contract, we were able to receive an additional layer of discounts on top of the normal discount that SHI would put on our catalog. As my RFPs become much more complicated, we need to have a contract where we can approach the design more holistically and have access to a wider breadth of technology partners.”

For organizations who are looking to bid a complex strategy, Ryan says you can leverage this type of contract with multiple components to help create a comprehensive approach. “For us, the Edge TeCHS contract afforded us the ability to have a vehicle that we can build a strategy design, rather than just buying software and hardware. The software is always the easy piece, but this allows us to look holistically at what avenue we can use to combine all these components and successfully build out strategies. With this contract, we’ve been able to develop some of our bigger projects.”

To help make the procurement process successful for institutions, SHI also offers professional services, adding expertise and support to each stage of the technology life cycle. “Oftentimes you have to bid out the professional service or consulting piece apart from where you are buying the products,” explains Ryan. “As you build strategic projects, having a contract like TeCHS where professional services are included, makes the process much easier and more streamlined.”

“TeCHS has really been a visionary step forward—a 21st-century technology contract. This procurement contract isn’t just focused on large research universities, it’s designed to support institutions across New Jersey and nationally. Community colleges, K–12 districts, and state and local governments can all take advantage of the same streamlined procurement and deployment model. We’re actively working to expand awareness and adoption beyond the tri-state area so more organizations can tap into innovation and collaboration opportunities.”

Lou Malvasi Senior District Manager, Strategic Education, R1 Universities, SHI

Efficiently Meeting Technology Needs
SHI’s Customer Innovation Center (CIC) is located at the company’s New Jersey headquarters and allows organizations to plan, design, explore, and validate their technology needs. Along with product and integrated solution demonstrations, the CIC offers workshops and events where individuals can explore new ideas, gain insights on the latest trends, obtain hands-on experience with new technologies, and receive guidance from SHI experts. In addition, SHI operates two IT integration centers which combine hardware and software components from multiple manufacturers into ready-to-deploy rack systems. Faced with datacenters that were becoming increasingly complex, Rutgers now utilizes the SHI Ridge Integration Center to access scalable and cost-effective services, including rack and stack services, pre-installation testing, and installation activities.

SHI has been working with Rutgers on a 10-year master network refresh project and has utilized the SHI Ridge Integration Center to bundle services and put them under one SKU in Rutgers’ e-procurement catalog. “Rutgers is in the middle of a large network master plan, where lots of pieces to the puzzle are involved,” explains Malvasi. “Cisco is one piece, but there are also UPS data-center racks that they are refreshing across New Jersey. At the SHI Ridge Integration Center, we procure all the data-center equipment and bring it into our warehouse. We have quality-control experts that will actually rack and stack, and we send configuration files to the Rutgers team. Upon Rutgers’ review and approval, we rack, stack, and cable the equipment under the TeCHS contract and deliver the equipment to locations across New Jersey.”

For many R1 institutions, Malvasi says AI infrastructure is the predominant focus. “The demand for GPUs far exceeds the available supply, making it an extremely competitive space. Through SHI’s AI and Cyber Lab, schools gain access to on-premises and cloud infrastructure for testing and validation. This allows principal investigators and faculty to run real-world application scenarios before purchasing large-scale systems. We work closely with data science teams and researchers to test their applications in a six-week sprint and look at everything, including compute, power, cooling, and connectivity, to deliver a detailed output of what infrastructure is truly needed. This approach not only reduces risk and optimizes cost, but also strengthens the institution’s position when competing for large federal research grants. We want to help faculty align their technical requirements with the realities of funding and deployment.”

“For us, the Edge TeCHS contract afforded us the ability to have a vehicle that we can build a strategy design, rather than just buying software and hardware. The software is always the easy piece, but this allows us to look holistically at what avenue we can use to combine all these components and successfully build out strategies. With this contract, we’ve been able to develop some of our bigger projects.”

Sue Ryan Strategic Sourcing Manager, Information Technology Services, Rutgers University

Helping Organizations Achieve their Goals
To meet the procurement needs of a greater number of institutions, Edge architected the agreement to allow for adjusted terms and customization. “We want to help our members get the solutions they need, not just another component-based contract, hence the services dimension,” says Miller. “We also wanted to introduce any additional terms and conditions that can help an organization improve upon the baseline agreement so they can move forward with clarity and confidence.”

The TeCHS contract is available to all institutions, large or small, and SHI will gladly help any organization build out a catalog. “Even if you do not have an e-procurement platform like Rutgers, we can still set up a SHI.com website for you to purchase from using a username and login,” says Malvasi. “We can publish quotes, create landing pages, and work with facilities to understand the equipment you are buying from year to year. SHI already has TeCHS pricing integrated within our catalog team, so once this is set up for the customer, the pricing will feed into that automatically. Integration with eCommerce platforms like ServiceNow and Jaeger is a little more work on the backend, but if you want a simple SHI.com catalog, you can benefit from the TeCHS pricing almost immediately.”

Looking for a streamlined solution for procuring the latest technology? Learn more about the SHI TeCHS Catalog at edgemarket.njedge.net/home/shi-techs-catalog.

The post The SHI TeCHS Catalog Through EdgeMarket appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Thursday, 09. October 2025

FIDO Alliance

White Paper: FIDO and the Shared Signals Framework

Orchestrating Agile and Secure IAM Workflows

October 2025

Authors:

Jacob Harlin, Microsoft
Josh Cigna, Yubico
Martin Gallo, HYPR
Sumana Malkapuram, Netflix
Apoorva Deshpande, Okta

Abstract

In today’s fragmented enterprise security landscape, identity and access management (IAM) systems often operate in silos. The need for cohesive, real-time coordination across platforms is more critical than ever. This paper introduces a strategic approach that combines FIDO-based strong authentication with the OpenID Foundation’s Shared Signals Framework (SSF) to orchestrate agile and secure IAM workflows, enable stronger continuous authentication, and promote collaborative defense against identity threats.

FIDO protocols offer a robust foundation for user authentication as they leverage public-key cryptography to eliminate password-based vulnerabilities. However, authentication alone is insufficient for sustaining zero-trust principles. Once an authenticated session is established, its trustworthiness must be continuously evaluated. This broader need for continuous evaluation is where SSF comes in – enabling the secure exchange of identity and security events, such as risk signals and session revocations, across disparate systems and vendors.

This document explores how integrating SSF into IAM architectures enhances visibility and responsiveness throughout the user journey, including joiner-mover-leaver (JML) and account recovery scenarios. It also highlights how Continuous Access Evaluation Protocol (CAEP) and Risk Incident Sharing and Coordination (RISC) protocols, when layered atop FIDO2, empower organizations to make real-time, risk-informed decisions that reduce fraud and accelerate incident response.

This synthesis of FIDO and SSF represents a paradigm shift toward continuous, adaptive trust that enables organizations to move beyond static controls and toward dynamic, signal-driven security ecosystems.

Audience

This white paper is for enterprise security practitioners and identity and access management (IAM) leaders responsible for protecting the security and life cycle of online identity and access management. Specifically, the target audience includes those whose purviews cover activity monitoring for threat detection and response, as well as IAM staff who support those goals. Additionally, IAM leadership and architects should review this document to understand the opportunities the described technologies offer and the implications of implementing them.

Download the White Paper

1. Introduction

The FIDO Authentication protocol has a proven track record of securing initial session authentication by leveraging strong public key infrastructure (PKI) based cryptography. Adoption of this technology has been a leap forward as a unified approach for secure and usable session establishment; however, the ability to maintain, monitor, and manage ongoing sessions has historically remained fractured. This challenge is exacerbated by the reality of today’s enterprise security landscape, where numerous security vendors and solutions often operate in silos with limited communication. These barriers hinder comprehensive security outcomes during adverse events, leading to localized mitigations rather than unified responses.

Shared signals offer a crucial pathway to facilitate a more holistic and effective response by providing a way to exchange security events across vendor boundaries. Ongoing management and monitoring are required to adopt the full zero-trust model. The OpenID Foundation’s Shared Signals Framework (SSF) aims to address these challenges. If you root an IAM program with a strong footing, such as FIDO-based authentication, and combine it with strong ongoing activity monitoring enabled by SSF, you can achieve substantial changes that reduce (and enable you to react to) fraud and malicious activity.

2. What is the Shared Signals Framework?

The Shared Signals Framework (SSF) standard simplifies the sharing of security events across related and disparate systems. The framework allows organizations to share actionable security events and enables a coordinated response to potential threats and security incidents. SSF is defined by the OpenID Foundation’s Shared Signals Working Group (SSWG). The SSF standards are still evolving, but evaluation of the specifications provides a clear picture of what the SSWG hopes to achieve and can inform practitioners around what can be done with these tools today. The goal of this framework is to define a common language and mechanism for communicating actionable security events in near real time, allowing systems to respond more effectively and in a coordinated way to potential threats.

SSF helps bridge gaps between identity providers, relying parties, and other services by creating a unified way for entities to notify each other of relevant changes, such as risk signals or session status updates. 

For example, Mobile Device Management (MDM) tools can transmit a device compliance change event to indicate a user’s laptop is no longer compliant with corporate policies. When this event is received by a downstream system, that service may determine that the user’s authenticated session should be terminated until such a time as the device moves back into a healthy state. 

Note: It is important to remember that SSF security events standardize and facilitate the sharing of information. They are not directives. Recipients need to determine the actions to take in case of a security event.
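
The split of responsibilities in this note can be sketched in code. Below is a minimal, hypothetical receiver handling the MDM device-compliance example above; the event-type URI and status fields follow the CAEP device-compliance-change profile, but the policy choices (terminate vs. restore) are purely illustrative local decisions, which is exactly the point:

```python
# Sketch of a receiver-side policy decision for a CAEP device-compliance-change
# event. SSF deliberately leaves the response up to the receiver: the event is
# information, not a directive.

CAEP_DEVICE_COMPLIANCE = (
    "https://schemas.openid.net/secevent/caep/event-type/device-compliance-change"
)

def decide_action(set_events: dict) -> str:
    """Map a received event to a local action; the recipient, not the
    transmitter, chooses what to do (terminate, restore, or ignore)."""
    event = set_events.get(CAEP_DEVICE_COMPLIANCE)
    if event is None:
        return "ignore"                 # not an event type this receiver acts on
    if event.get("current_status") == "not-compliant":
        return "terminate-session"      # illustrative local policy choice
    return "restore-session"            # device moved back into a healthy state

# Example payload: a laptop drops out of MDM compliance.
incoming = {
    CAEP_DEVICE_COMPLIANCE: {
        "previous_status": "compliant",
        "current_status": "not-compliant",
    }
}
action = decide_action(incoming)
```

Another receiver consuming the very same event could legitimately return a different action under its own risk policy.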

The SSF standard describes how to create and manage streams, which are used to deliver notification of events to the receiver using push (RFC 8935) and poll (RFC 8936) mechanisms. From a technical perspective, SSF describes using secure, privacy-protected, generic webhook-style transport, with events delivered via HTTP in streams.

Software vendors can act as transmitters and receivers; however, they must establish independent unidirectional streams. Events are formatted as Security Event Tokens (SETs) (RFC 8417) and the entities involved are identified by Subject Identifiers for Security Event Tokens (RFC 9493). Additional Subject Members are also defined in the OpenID Shared Signals Framework Specification 1.0.
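
To make the SET format concrete, here is a minimal sketch of a transmitter encoding an event as a compact JWT per RFC 8417. It signs with HMAC-SHA256 purely to keep the example self-contained; real transmitters typically use asymmetric signatures with keys the receiver can fetch, and the issuer/audience URLs and event payload here are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_set(issuer: str, audience: str, event_type: str,
             payload: dict, key: bytes) -> str:
    """Encode a Security Event Token (RFC 8417) in compact JWT form."""
    header = {"typ": "secevent+jwt", "alg": "HS256"}
    claims = {
        "iss": issuer,
        "aud": audience,
        "jti": uuid.uuid4().hex,      # unique ID, lets receivers deduplicate
        "iat": int(time.time()),      # freshness / replay checking
        "events": {event_type: payload},
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# Hypothetical example: an IdP announces a revoked session to a relying party.
token = make_set(
    "https://idp.example.com/", "https://rp.example.net/",
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked",
    {"subject": {"format": "email", "email": "user@example.com"}},
    key=b"shared-demo-key",
)
```

The resulting three-part token would then travel over a push or poll stream like any other SET.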

Since SETs do not describe the content or semantics of events, the SSWG is developing two standard profiles under SSF: 
Continuous Access Evaluation Profile (CAEP): For sharing access relevant state changes like token revocation or device posture.
Risk Incident Sharing and Coordination (RISC): For sharing signals about “risky” behaviors, such as account compromise.

2.1 Continuous Access Evaluation Profile (CAEP)
To further simplify interoperability between various vendors, the SSWG has also defined the CAEP Interoperability Profile. This specification “defines the minimum required features from SSF and CAEP that an implementation MUST offer in order to be considered as an interoperable implementation”. (CAEP Interoperability Profile).

Federated systems commonly assert the login only during initial authentication, which can create security risks if user properties (such as location, token claims, device status, or org membership) change during an active session. CAEP aims to enhance the “verify, then trust” mantra by defining a common event profile to communicate such changes as they happen. For example, early proposed examples suggest CAEP events can be used to:

Tie risk signals to known identities (users and non-human identities (NHIs))
Track sessions and behavioral changes over time
Dynamically adjust access without requiring the user to re-authenticate

This list is non-exhaustive, and capabilities are expected to grow and evolve as CAEP is more widely adopted. Because CAEP is built upon SSF principles, interoperable push and poll of SETs can be sent in real-time between trusted entities. These entities can include identity providers, relying parties (RP), monitoring systems like Security Information and Event Management (SIEM) systems, MDM systems, or any security-focused software vendor. 

When an entity receives a SET, they can then evaluate the event and decide whether to revoke tokens or transmit an updated security status to other services. Monitoring systems such as MDM, endpoint detection and response (EDR)/extended detection and response (XDR), SIEMs, or any security-focused software vendor can emit/consume CAEP events. As enterprise architectures evolve, CAEP can serve as a foundational tool for zero-trust strategies, enabling continuous and adaptive access evaluation that is informed by real-time context.

2.2 Key components of the Security Event Token (SET)

At the core of SSF is the Security Event Token (SET), a JWT based envelope defined by RFC 8417, that provides the foundational format for encoding and transporting these events.

“The intent of this specification is to define a syntax for statements of fact that SET recipients may interpret for their own purposes.” (RFC 8417)

Based on this principle, SETs provide a structured, interoperable format to convey claims (statements of fact) such as account changes, credential updates, or suspicious activity, without prescribing any particular enforcement action. This allows recipient systems to evaluate and respond to events in accordance with their own policies. Each profile (CAEP, RISC, SCIM) imposes specific constraints on the base SET and its associated subject identifiers (per RFC 9493), thereby defining clear semantics and expected behaviors for particular use cases. 

The SET itself is composed of several key claims, which together define the issuer, audience, subject, and full event context. A full description is available within the official documentation from the OpenID Foundation, RFC 8417, and RFC 9493. The following is a brief outline of these claims.

iss (issuer) – Represents the entity that issued the token, such as https://idp.example.com/ (as per SET examples). This is used by the receiving service to verify that the event originates from a trusted provider.
aud (audience) – Specifies the intended recipient of the token. Depending on the deployment, the recipient may be the relying party application, an identity provider, or another trusted service. This helps ensure that only the designated service processes the security event.
jti (JWT ID – unique event identifier) – A unique identifier for this specific event within the security stream. Helps with tracking and deduplicating events to avoid processing the same event multiple times.
iat (issued-at timestamp) – Indicates the exact Unix timestamp when the event was generated. Helps determine the event’s freshness and prevent replay attacks.
sub_id (subject identifier) – Structured information that describes the subject of the security event.
events (security event information) – The core claim that contains details about the specific security event. This is a mapping from an event type identifier (for example, https://schemas.openid.net/secevent/risc/event-type/account-disabled) to an event-specific JSON object that typically includes attributes such as subject, contextual metadata (for example, reason, timestamp, and risk level), and any profile-defined parameters required to interpret and act on the event.
event_timestamp – Represents the date and time of an event. Uses NumericDate.
txn (transaction identifier) – OPTIONAL – Represents a unique transaction value. Used to correlate SETs to singular events.
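
A receiver-side sketch of checks over these claims follows. The specific rules (expected issuer, audience match, jti deduplication, a five-minute freshness window) are illustrative local policy rather than requirements from the specifications:

```python
import time

def validate_set_claims(claims: dict, expected_issuer: str,
                        expected_audience: str, seen_jtis: set,
                        max_age_seconds: int = 300) -> list:
    """Illustrative receiver-side checks over the SET claims described above.
    Returns a list of problems; an empty list means the SET is acceptable."""
    problems = []
    if claims.get("iss") != expected_issuer:
        problems.append("untrusted issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        problems.append("not addressed to this receiver")
    if claims.get("jti") in seen_jtis:
        problems.append("duplicate event (replay?)")
    if time.time() - claims.get("iat", 0) > max_age_seconds:
        problems.append("stale event")
    if not claims.get("events"):
        problems.append("no events payload")
    return problems

# Hypothetical freshly issued RISC account-disabled SET.
sample = {
    "iss": "https://idp.example.com/",
    "aud": "https://rp.example.net/",
    "jti": "756E69717565",
    "iat": int(time.time()),
    "events": {
        "https://schemas.openid.net/secevent/risc/event-type/account-disabled":
            {"reason": "hijacking"},
    },
}
issues = validate_set_claims(sample, "https://idp.example.com/",
                             "https://rp.example.net/", seen_jtis=set())
```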

2.3 Risk Incident Sharing and Coordination (RISC)

While CAEP defines a standardized messaging transport for communicating session-related state changes between trusted parties during active sessions, additional security events that might compromise an identity outside of a single session must also be addressed. This is where Risk Incident Sharing and Coordination (RISC) comes into play. 

RISC is designed to share security events that are related to potential threats, credential compromises, and account integrity across federated systems. RISC hopes to define profiles that enable each recipient system to assess and act upon security events based on their unique risk policies, rather than mandating specific enforcement actions. 

RISC SETs might also empower standards compliant systems (via the System for Cross-Domain Identity Management (SCIM) standard for example) to communicate “statement of fact” assertions, with the goal to enable simpler automation and coordination across an asynchronous federated environment.

It is important to remember that RISC, like CAEP, suggests a framework of profiles and roles for platforms to leverage.

SETs only state provable assertions. They do not issue specific directives. Receivers that want to always take prescribed actions based on SETs received from transmitters may need to leverage profiles that are not yet established, and those profiles need to be understood by the transmitter/receiver pair. The ultimate goal is to enable more automation and faster reactivity across sessions through the sharing of SETs.

3. SSF and user journeys

When you plan for implementation of IAM tools and capabilities, it is a common practice to consider the user journeys that need to be supported. These user journeys include day-to-day authentication and authorization processes, as well as more impactful (but less common) JML and recovery processes. Both CAEP and RISC methodologies can be used to enhance these workflows, building off strong authentication backed with FIDO2. With FIDO2 you are able to make decisions about users with certainty and with SSF you can track actions and react more quickly and accurately based on identity signals and user behaviors.

While the adoption of SSF is expected to grow, it will be up to the individual practitioner or organization to best determine how to leverage these capabilities. At the time of writing, the proposed workflows (as well as many of the transmitter and receiver interfaces) all need to be manually created and configured. Instead, it is recommended that you evaluate how these suggestions can enrich existing workflows and request delivery of these capabilities from your vendors and implementers.

3.1 Onboarding (joiners) and upgrading (movers) access

One journey that affects every end user is the joiner, or onboarding, process, which generally establishes accounts for a user before they start at an organization. Accounts are created and entitlements are granted, with the expectation that they will not be used immediately. This timeframe is normally documented as “Day Zero -1.” This timeframe varies depending on organizational practices, but in order to ensure a speedy onboarding process, most mid- to large-sized organizations follow this trend.

The risk here is that it is easy to perform open-source intelligence gathering (OSINT) and enumerate accounts that fall into the “pending start day” category. The current set of IAM tools may lack the intelligence or agility to dynamically enable and disable accounts based on a strong identity-proofing workflow, and business demands of “hitting the ground running” often mean that these accounts are active and unmonitored before a user starts.

Profiles built on the Shared Signals Framework (specifically RISC) can be leveraged to enhance this process. You can develop workflows that use the successful establishment of FIDO credentials via strong identity-proofing workflows, or initial detection of the use of pre-registered FIDO credentials, to trigger account enablement via IAM systems. With this workflow, accounts can sit inactive during the “Day Zero -1” timeframe and will only be dynamically activated once a successful strong authentication has been detected.
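
The joiner enhancement can be sketched as follows. The event name and in-memory account store are hypothetical stand-ins for a RISC-style signal and an IAM provisioning system:

```python
# Sketch of the joiner workflow: accounts are provisioned in a disabled state,
# and a signal confirming the first successful FIDO authentication flips them
# to active. "first-fido-authentication" is a hypothetical local event name,
# not a spec-defined identifier.

accounts = {
    "new.hire@example.com": {"enabled": False, "entitlements": ["email", "wiki"]},
}

def on_signal(event_type: str, subject_email: str) -> bool:
    """Enable a pre-provisioned account only after a strong-auth signal."""
    if event_type != "first-fido-authentication":
        return False                           # ignore unrelated signals
    account = accounts.get(subject_email)
    if account is None or account["enabled"]:
        return False                           # unknown or already-active account
    account["enabled"] = True                  # "Day Zero -1" account goes live
    return True

activated = on_signal("first-fido-authentication", "new.hire@example.com")
```

Until the signal arrives, the enumerable "pending start day" account stays inert and useless to an attacker.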

Role or access changes (known as mover workflows) can follow a pattern similar to that of the onboarding enhancement. New accounts can be created in a disabled state, awaiting specific triggers (such as date and time) in conjunction with authentication. RISC also opens the door to more dynamic access elevation, where the signaling framework can be used to trigger approval workflows in IAM ticketing and provisioning systems to temporarily grant higher privileges or roles.

Creative use of the shared signals frameworks, paired with a FIDO backed Root of Trust (RoT), can strengthen and enhance joiner and mover user journeys. These emerging techniques should be evaluated and adopted in a timely manner, to raise the bar for all IAM practitioners.

3.2 Device recovery/replacement

Another common user journey is establishment of a user on a new device. While it is similar to the onboarding journey, pre-existing permissions, accounts, and roles add complexity to this journey. This is also a common area of attack, as attackers can abuse this workflow to enroll their own devices or otherwise compromise the pre-existing identity via unsecured channels.

A best practice for device loss workflows is to lock down access as soon as a lost device is reported. You can leverage RISC signals to inform RISC consumer systems of the new device registration activity as part of an automated workflow that helps disable access as needed. Once a new device is issued, an identity can be re-established on the new device with a FIDO2 authentication workflow. The workflow can then leverage RISC signals to have IAM provisioning systems re-enable access.

Similar workflows can be leveraged if the FIDO2 authenticator needs to be replaced. This includes the loss of a device that contains a synced credential or a hardware token that contains a device-bound credential. Identity proofing workflows need to be leveraged to securely re-establish identity before a new credential can be bound to a user’s account. After this workflow is complete, RISC signals can be leveraged to re-enable sensitive access that was disabled when the credential was reported missing.
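
The credential-replacement flow above can be sketched as a small state machine: reporting a credential missing disables sensitive access, and only completed identity proofing followed by binding a new credential restores it. The state and event names are illustrative, not defined by any profile:

```python
# Sketch of the credential-replacement lifecycle as a state machine.
# States/events are hypothetical labels for the workflow described above.

TRANSITIONS = {
    ("active", "credential-reported-missing"): "sensitive-access-disabled",
    ("sensitive-access-disabled", "identity-proofing-complete"): "ready-to-bind",
    ("ready-to-bind", "new-credential-bound"): "active",
}

def step(state: str, event: str) -> str:
    """Advance the account state; unknown or out-of-order events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk a full replacement: loss report -> proofing -> new credential bound.
state = "active"
for event in ("credential-reported-missing",
              "identity-proofing-complete",
              "new-credential-bound"):
    state = step(state, event)
```

Note that binding a new credential from the "active" state is a no-op here, reflecting the requirement that proofing precede re-binding.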

3.3 Offboarding (Leaver events in JML)

Offboarding workflows fall into two categories: planned and unplanned. Planned offboarding remains largely unaffected by SSF, though it is possible to leverage CAEP signals to trigger termination of any active sessions after the user signs off for the last time. SSF is more useful for unplanned offboarding events: a workflow can evaluate CAEP signals, and any open sessions can be identified and ended. As part of this workflow, FIDO credentials should be de-associated from the user’s accounts, ensuring that the user can no longer log in. Both of these controls help ensure that unplanned offboarding events are well controlled and executed across the board.
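
The unplanned-leaver controls can be sketched together: end open sessions and de-associate FIDO credentials in one step. The session and credential stores are illustrative stand-ins for an IdP's internal state:

```python
# Sketch of the unplanned offboarding flow: on a termination signal, terminate
# open sessions and remove FIDO credential associations so no further login is
# possible. Data structures are hypothetical stand-ins for IdP stores.

sessions = {"alice": ["sess-1", "sess-2"], "bob": ["sess-9"]}
fido_credentials = {"alice": ["cred-a1"], "bob": ["cred-b1", "cred-b2"]}

def offboard(user: str) -> dict:
    """Terminate sessions and de-associate credentials for a leaver."""
    ended = sessions.pop(user, [])            # kill every open session
    removed = fido_credentials.pop(user, [])  # user can no longer authenticate
    return {"sessions_ended": len(ended), "credentials_removed": len(removed)}

result = offboard("bob")
```

Other users are untouched, so the same routine can be driven safely from an automated signal-handling workflow.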

3.4 Session tracking

Within the scope of modern identity security, session tracking plays a pivotal role in maintaining the integrity and security of user sessions. While authentication methods like FIDO effectively protect the initial login, they are significantly enhanced when complemented by session tracking. This involves the continuous monitoring of a session’s behavior and context throughout its entire lifecycle, from creation to termination. Such ongoing evaluation is crucial for identifying risk signals that may indicate potential security threats, such as session hijacking or unauthorized access attempts.

Platforms within a networked environment use CAEP events to send a range of signals to an authentication system responsible for managing sessions. You can utilize session tracking data so that as events are received, the authentication system can implement appropriate security measures, such as enforcing step-up authentication or terminating sessions. These events originate from multiple, diverse platforms, which each act as both transmitters and receivers within the SSF. This interconnected network offers valuable insights into potential security threats, enabling each platform to contribute to and enhance session tracking across the entire network.
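
One way to sketch such an authentication system's reaction logic is a simple dispatch table from event type to session measure. The mapping itself is local policy (SSF deliberately does not prescribe it), and the risk-based fallback for unrecognized events is an illustrative assumption:

```python
# Sketch of an authentication system consuming CAEP events to adjust live
# sessions. The event-type -> measure mapping is illustrative local policy.

CAEP = "https://schemas.openid.net/secevent/caep/event-type/"

POLICY = {
    CAEP + "session-revoked":        "terminate-session",
    CAEP + "credential-change":      "step-up-authentication",
    CAEP + "assurance-level-change": "record-assurance",
}

def react(event_type: str, risk_level: str = "low") -> str:
    """Choose a session measure for a received event; unrecognized events with
    elevated risk still trigger re-authentication rather than being dropped."""
    action = POLICY.get(event_type)
    if action:
        return action
    return "step-up-authentication" if risk_level == "high" else "log-only"

measure = react(CAEP + "risk-level-changed", risk_level="high")
```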

To illustrate the impact of session tracking, we will explore use cases that compare an environment that uses only WebAuthn authentication with an environment that uses an enhanced approach that incorporates continuous authentication and shared signals. This comparison highlights how continuous session tracking can significantly bolster security and mitigate risks. 

The following table describes some possible ways to design these workflows. The table outlines the traditionally observed behaviors of systems and how security policies can be enhanced with the inclusion of SSF capabilities. When compared side by side, you can see the advantages provided by the adoption of SSF signaling. 

User Journey – Adding continuous access and session evaluation to high-assurance authentication.

Scenario: Initial authentication
FIDO (point-in-time authentication): User logs in using WebAuthn.
FIDO + SSF (continuous assessment and signals): User logs in using WebAuthn.
CAEP/RISC events: N/A

Scenario: Session establishment
FIDO: Session is established and remains valid until expiration or logout.
FIDO + SSF: Session is established with continuous monitoring enabled. If a disallowed event signal is received (for example, credential compromise, risk alert, or policy violation), the session can be revoked or re-evaluated immediately instead of waiting for expiration or logout.
CAEP/RISC events: CAEP: session-established

Scenario: Threat intelligence alert
FIDO: No visibility or action.
FIDO + SSF: A threat intelligence system (for example, EDR/XDR or an anti-phishing platform) watches for a phishing campaign targeting a user group. If a phishing campaign is detected, the system acts as a transmitter and sends a RISC credential-compromise event to the Identity Provider (IdP), which functions as the SSF receiver in this scenario. Upon receiving the event, the IdP correlates the identity, flags the session, and revokes it as necessary. The IdP can then act as a transmitter and issue a CAEP session-revoked event to other downstream SSF receivers, such as SaaS applications or partner services. This enables receivers to take appropriate actions (for example, terminating sessions or prompting re-authentication) based on the trust change initiated by the IdP.
CAEP/RISC events: RISC: credential-compromise; CAEP: session-revoked

Scenario: Session hijack or replay (post threat alert)
FIDO: Session remains valid and an attacker can reuse the stolen session token (for example, via fixation or XSS), as FIDO-only systems do not have post-authentication visibility.
FIDO + SSF: Signals (for example, from threat intelligence platforms) elevate risk, and those events are transmitted to receivers such as the IdP, which then terminates the session. This prevents the reuse of any compromised session tokens.
CAEP/RISC events: CAEP: risk-level-changed

Scenario: Step-up authentication (post threat alert)
FIDO: Not triggered.
FIDO + SSF: After receiving a RISC credential-compromise event from a threat intelligence system, the IdP flags the session as high-risk and prompts the user to authenticate using FIDO WebAuthn. Once the user completes strong re-authentication, the IdP issues a CAEP assurance-level-change event to reflect the increased assurance level. This event can also be transmitted to downstream consumers such as audit platforms or relying parties, enabling consistent assurance tracking.
CAEP/RISC events: CAEP: assurance-level-change

4. Filling gaps – complements to FIDO and conclusion

As demonstrated by the use cases outlined above, both CAEP and RISC pair well with FIDO authentication standards to improve the overall security posture and practices of enterprises and organizations. These cases cover only the largest areas where these frameworks should be adopted and integrated into current tools and workflows. Beyond our recommendation to implement these standards, a robust, well-planned SSF/FIDO program can buffer against and flag potential false-positive signaling, and can make it easier for Network Operations Centers (NOCs) to attribute improper activity and detect rogue actors.

SIEM systems rely on credible data from endpoints. SSF helps normalize the structure of many tasks that historically required bespoke connectors. Shared signals (such as CAEP session-state changes or RISC credential-compromise events) add clarity and deeper insight into principal (the user or entity associated with the event) and system behavior. Additionally, SSF-enabled SIEM or IAM tools can strengthen current step-up authentication practices, providing native ways to track high-privilege interactions without full reliance on single-point-of-failure third-party systems.

In the past, passive signals were used for dark web monitoring. With shared signals coordination, we now have the capability to send notifications and cycle credentials automatically for systems that do not support strong authentication. Accounts with leaked credentials can either be auto-disabled and shunted to a reset workflow backed by strong FIDO authentication, or automatically rotated with credentials that are vaulted and retrievable with IDV or FIDO authentication. Stolen credentials are not limited to usernames and passwords; they can also include stolen synced passkeys and/or certificates. CAEP can be leveraged to communicate out-of-context credential use, and the shared signals should be leveraged as part of a risk-based authentication workflow.
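The two response paths described above (FIDO-backed reset versus automatic rotation into a vault) can be sketched as follows. All function and field names are hypothetical placeholders, not a real product API.

```python
# Hypothetical sketch of the leaked-credential workflow: accounts that
# support strong authentication are disabled and shunted to a FIDO-backed
# reset; others are rotated automatically and vaulted for IDV- or
# FIDO-gated retrieval. All names are illustrative.

import secrets

def respond_to_leak(account: dict, vault: dict) -> str:
    if account.get("supports_fido"):
        account["disabled"] = True  # force the FIDO-backed reset path
        return "reset-via-fido-backed-workflow"
    # Rotate the credential and vault it for later gated retrieval.
    vault[account["id"]] = secrets.token_urlsafe(32)
    account["credential_rotated"] = True
    return "auto-rotated-and-vaulted"
```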

CAEP, RISC, and FIDO together provide a risk-averse way to enable federated login. Implementing both enhanced session tracking and strong authentication creates a workflow in which external users can leverage federated login processes while security teams more closely monitor and attribute activity and behavior. In the Customer Identity space, these enhanced signals, combined with enhanced session tracking and strong, phishing-resistant authentication, provide more secure ways for end users to authenticate using their existing trusted identity provider accounts (for example, Google, Apple, or enterprise Identity Providers) instead of creating new local credentials.

When practitioners and vendors embrace RISC and CAEP frameworks for signaling, they strengthen not only their own environments but also the broader information security ecosystem. A common, interoperable signaling language increases the ability of systems across organizational boundaries to track and correlate user and process activity, detect inappropriate behavior, and respond consistently. In this way, the adoption of SSF moves security practice toward a more collaborative, standards-based model that prioritizes shared defense and ecosystem resilience. When SSF is put into practice, it enables external entities to be better informed in real time, improving collective security and ensuring that end users are more effectively protected.

5. SET examples

This section contains several mocked-up examples of SETs. They are provided to clarify the contents and capabilities of each component of the SSF, describing the information systems can expect to receive and the data points that can be included in a token.

5.1 CAEP example tokens

CAEP provides a standardized way to communicate access property changes in real time. It defines Security Event Tokens (SETs), which are sent by transmitters using the SSF framework. Upon receiving a CAEP event, the receiver can dynamically adjust access permissions, which reinforces zero-trust security principles and ensures security decisions remain context aware and adaptive. 

The following are examples of key CAEP Security Event Tokens (SETs).

5.1.1 Session revoked

Session revoked: Indicates an active session has been terminated

Event transmission example.
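A minimal sketch of such a transmission, shown as the decoded claims of an unsigned SET (top-level claims per RFC 8417; event payload fields per the CAEP specification). The issuer, audience, subject, identifier, and timestamp values are invented; the Python dict mirrors the JSON payload.

```python
# Illustrative, unsigned session-revoked SET payload. Values are invented;
# the event URI is the registered CAEP session-revoked type.
session_revoked_set = {
    "iss": "https://idp.example.com",       # transmitter (invented)
    "jti": "756e69717565206964",            # unique token identifier (invented)
    "iat": 1745113200,
    "aud": "https://app.example.com",       # receiver (invented)
    "sub_id": {"format": "email", "email": "user@example.com"},
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "event_timestamp": 1745113200,
            "initiating_entity": "policy",
            "reason_admin": {"en": "Policy violation detected"},
        }
    },
}
```

In production this payload would be signed as a JWT and delivered via push (RFC 8935) or poll (RFC 8936).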

5.1.2 Credential changes

Token claims change: Signals changes in token claims such as roles, entitlements, and group memberships that affect access control.

Credential change: Signals that a user’s credentials have been changed (for example, deleted, updated, created, or revoked). Examples of credentials include passwords, fido2-platform, and fido2-roaming. 

Event transmission example
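An illustrative, unsigned credential-change SET might look as follows; the issuer, audience, subject, and timestamps are invented, and the `credential_type`/`change_type` values follow the CAEP credential-change vocabulary.

```python
# Illustrative, unsigned credential-change SET payload. Values are invented;
# the event URI is the registered CAEP credential-change type.
credential_change_set = {
    "iss": "https://idp.example.com",       # transmitter (invented)
    "jti": "63726564206368616e6765",        # unique token identifier (invented)
    "iat": 1745113300,
    "aud": "https://app.example.com",       # receiver (invented)
    "sub_id": {"format": "email", "email": "user@example.com"},
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/credential-change": {
            "event_timestamp": 1745113300,
            "credential_type": "fido2-roaming",
            "change_type": "create",
            "initiating_entity": "user",
        }
    },
}
```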

5.1.3 Assurance level or compliance change

Assurance level change: Indicates that the assurance level of a user's authentication has changed, impacting session security.

Device compliance change: Signals a change in the security posture of a user’s device. For example, a previously compliant device is now non-compliant.

Transmission event for device compliance example.
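An illustrative, unsigned device-compliance-change SET for the "previously compliant device is now non-compliant" case might look as follows; the issuer, audience, subject, and timestamps are invented values.

```python
# Illustrative, unsigned device-compliance-change SET payload. Values are
# invented; the event URI is the registered CAEP type, and the status
# values follow the CAEP compliant/not-compliant vocabulary.
device_compliance_set = {
    "iss": "https://mdm.example.com",       # transmitter, e.g. an MDM (invented)
    "jti": "646576696365206368616e6765",    # unique token identifier (invented)
    "iat": 1745113400,
    "aud": "https://idp.example.com",       # receiver (invented)
    "sub_id": {"format": "iss_sub",
               "iss": "https://idp.example.com",
               "sub": "device-12345"},
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/device-compliance-change": {
            "event_timestamp": 1745113400,
            "previous_status": "compliant",
            "current_status": "not-compliant",
            "initiating_entity": "policy",
            "reason_admin": {"en": "OS patch level below baseline"},
        }
    },
}
```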

5.2 RISC example tokens 

The following examples show the key RISC SETs.

5.2.1 Account credential change required

Indicates an event requiring a credential update for the subject, typically due to detected compromise or reuse. For example, this helps prevent credential stuffing attacks across federated accounts. 

5.2.2 Account enabled

Notifies that a previously disabled account has been re-enabled. This allows relying parties to reinstate access where appropriate (for example, after resolving a false positive).

5.2.3 Account purged

Notifies that the subject’s account has been permanently deleted and should no longer be recognized by relying parties.

5.2.4 Account disabled

Notifies that the subject’s account has been disabled and is no longer accessible. This helps prevent unauthorized access (for example, after fraud detection or HR termination).

Transmission event for account disabled for fraud detection.
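An illustrative, unsigned RISC account-disabled SET for the fraud-detection case might look as follows; the issuer, audience, subject, and timestamps are invented, and the `reason` value follows the RISC vocabulary.

```python
# Illustrative, unsigned RISC account-disabled SET payload. Values are
# invented; the event URI is the registered RISC account-disabled type.
account_disabled_set = {
    "iss": "https://idp.example.com",       # transmitter (invented)
    "jti": "6163636f756e74206f6666",        # unique token identifier (invented)
    "iat": 1745113500,
    "aud": "https://app.example.com",       # receiver (invented)
    "sub_id": {"format": "email", "email": "user@example.com"},
    "events": {
        "https://schemas.openid.net/secevent/risc/event-type/account-disabled": {
            "reason": "hijacking",
        }
    },
}
```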

5.2.5 Identifier changed/recycled

Notifies when a user’s identifier (for example, email or username) has changed or is reassigned. Helps prevent unauthorized access using outdated identifiers.

6. Document history

Change: Initial publication
Description: White paper first published.
Date: October 2025

7. References

Internet Engineering Task Force (IETF). (2020, November 30). Poll-Based Security Event Token (SET) Delivery Using HTTP. IETF Datatracker. https://datatracker.ietf.org/doc/rfc8936/

Internet Engineering Task Force (IETF). (2020, November). Push-Based Security Event Token (SET) Delivery Using HTTP. IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc8935

Internet Engineering Task Force (IETF). (2018, July). Security Event Token (SET). IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc8417

Internet Engineering Task Force (IETF). (2023, December). Subject Identifiers for Security Event Tokens. IETF Datatracker. https://datatracker.ietf.org/doc/rfc9493/

OpenID. (2025, August 29). OpenID Continuous Access Evaluation Profile 1.0. OpenID. https://openid.net/specs/openid-caep-1_0-final.html

OpenID. (2024, June 25). CAEP Interoperability Profile 1.0 – draft 00. OpenID. https://openid.net/specs/openid-caep-interoperability-profile-1_0-ID1.html

OpenID. (2025, August 29). OpenID RISC Profile Specification 1.0. OpenID. https://openid.github.io/sharedsignals/openid-risc-1_0.html

OpenID. (2025, August 29). OpenID Shared Signals Framework Specification 1.0. OpenID. https://openid.net/specs/openid-sharedsignals-framework-1_0-final.html

Wednesday, 08. October 2025

Blockchain Commons

Musings of a Trust Architect: The Gordian Club

ABSTRACT: Unencrypted data isn’t safe. Centralized servers aren’t reliable. Gordian Clubs offer an alternative: the autonomous cryptographic object (ACO). Self-contained objects protected by cryptography can be passed around with minimal possibility of surveillance or censorship. They’re great for a number of use cases where infrastructure is unreliable, from work in politically unstable regions to disaster response.

Imagine: A reporter maintains a list of sources and the information they provided for an article critical of the federal government. This is a requirement for fact-checking any serious news story or for later defending it in a court of law. But the federal government illegally seizes the list and jails the sources.

Imagine: A protest group uses a supposedly secure messaging app to coordinate, but the government threatens the app store, which substitutes a version that records all of the information that should be encrypted. The government then begins arresting participants at home, before the protests even begin.

Imagine: An immigrant flees a totalitarian regime. They carry with them a digital cache of their identity credentials, which will be necessary to immigrate elsewhere. But a border patrol catches them exiting their country and confiscates their records. The government uses its control of internet infrastructure to block future attempts to verify the credentials, which all phone home. The emigrant is let go, but now they not only cannot immigrate; they will also have problems proving their identity in their own country.

These stories are unfortunately no longer restricted to problematic countries on the edge of global democratic society: authoritarianism is spreading across the entire world. In the United States alone, unwarranted searches and seizures, violations of free speech rights, and illegal use of military forces for domestic peacekeeping are on the rise. As a result: unencrypted data is no longer safe, because we can’t be certain it won’t be illegally seized; centralized services are no longer trustworthy, because they have shown that they will bow to dictatorial whims; and centralized servers are no longer reliable, because their usage could be censored through infrastructural control.

Protecting data, especially in a world of services that profit off of user content, is a serious problem that I’ve long struggled with. Unfortunately, the problem is coming to a head. In a globally connected society, we can no longer trust data, servers, or services that are outside of our personal control. New solutions are required, not just to ensure our human rights, but to ensure our personal safety.

The Rise of Autonomy

Fortunately, Bitcoin trailblazed the path for another way. Bitcoin protects a user’s assets (and their ability to transact) with decentralized protocols that depend on math rather than the fiat of whatever entity controls a server or service. But Bitcoin is largely limited to protecting “value transfer.” There’s a lot more ground to cover.

That’s where Blockchain Commons’ newest technology comes into play: the autonomous cryptographic object (ACO). It can protect many other sorts of digital interactions, and it pretty much does what it says on the label.

The Power of Autonomy: Autonomy means the ability to make your own choices: self-government. It’s a word that should be right up there with privacy and self-sovereignty as a fundamental digital right. But to truly maintain self-control requires you not to be dependent on external entities. That’s the core of the “A” in ACO.

This might be the most important part of an ACO because it creates a number of fundamental advantages:

- Unblockable Access. No server or platform dependencies.
- Perfect Privacy. No logs or tracking.
- Disaster Resilience. Available during infrastructure failure.
- Censorship Resistance. No fiat controlling access.

The Power of Cryptography: Meanwhile, cryptography is the math. The “C” in ACO says that your autonomous control of the object depends upon set mathematical rules rather than the arbitrary decision of some external force. It’s what allows you to escape the fragility of centralized servers: math doesn’t bow to authoritarian dictates.

The Power of Objects: Finally, the “O” in ACO isn’t just a neutral descriptor. It says that an ACO is a discrete (and, yes, autonomous) thing that can be stored or passed around as users see fit, without the need for a specific network. Calling it an object differentiates it from a client or a server or something else dependent on a global communication infrastructure.

Using ACOs is a paradigm shift. Traditionally, access control was at the whim of administrators. With ACOs, access control (and the information they protect) is determined by the pure math of cryptography instead. It’s a change from the serfdom of asking “Mother, May I?” to the agency of saying “Yes, I Can.” It creates infrastructure that you control, rather than infrastructure that controls you.

Join the Gordian Club

I don’t want ACOs to just be a theory, so I’ve been working in recent months to create a working example of an ACO at Blockchain Commons: the Gordian Club.

A Gordian Club is an ACO built on Gordian Envelope (along with the rest of the Gordian Stack). It allows for the storage of credentials, data, or other information in an organized and protected way. Access control is managed by Permits.

- Envelope. Gordian Envelope allows for the “smart” storage of information as a recursive set of semantic triplets.
- Permit. Envelopes can be encrypted, with permits allowing for the decryption of that data in a variety of ways.

A Gordian Club’s permits allow the encoding of either read or write permissions. These permissions can be simultaneously linked to public keys, to XIDs, and to secret shares, because envelope permits can enable different means of access to the same data. Permission can also be delegated, using cryptographic ocaps made possible by the Schnorr signatures at the heart of Gordian Clubs.
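The multi-permit idea (a single encrypted payload whose content key is sealed separately for each access path) can be illustrated with a toy sketch. This is not the Gordian Envelope API and not secure cryptography: the hash-based keystream below stands in for real public-key sealing, purely to show the structure.

```python
# Toy illustration of multi-permit encryption: encrypt once, then grant
# access via several "permits", each wrapping the same content key for a
# different access path. NOT secure crypto; the keystream stands in for
# real public-key sealing.

import hashlib
import secrets

def _keystream(secret: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a secret (toy stand-in for sealing)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def seal(data: bytes, recipient_secrets: list) -> dict:
    """Encrypt data under a fresh content key; wrap that key once per recipient."""
    content_key = secrets.token_bytes(32)
    ciphertext = _xor(data, _keystream(content_key, len(data)))
    permits = [_xor(content_key, _keystream(s, 32)) for s in recipient_secrets]
    return {"ciphertext": ciphertext, "permits": permits}

def open_with(sealed: dict, my_secret: bytes, permit_index: int) -> bytes:
    """Unwrap the content key via one permit, then decrypt the payload."""
    content_key = _xor(sealed["permits"][permit_index], _keystream(my_secret, 32))
    return _xor(sealed["ciphertext"], _keystream(content_key, len(sealed["ciphertext"])))
```

The point of the structure is that every recipient decrypts the *same* ciphertext: adding an access path adds only a small permit, not another copy of the data.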

The permissions and data are published in an initial Edition of the Gordian Club. But that’s just the first step. Thanks to the write permissions, a Club can later be updated in new Editions that might contain slight or wholesale changes to the data and permissions found in the previous Edition. A provenance mark validates the linkage of multiple Editions.

Because of its autonomous design, Gordian Clubs are entirely transport neutral. Though you could send an Edition over the internet, you don’t have to. You could send it via messaging. You could put it on an NFC card or thumb drive, then mail it. You could print it as a QR code and publish it in a newspaper. You could distribute it via Blockstream Satellite. The transport doesn’t even have to be near-term: a Gordian Club Edition could be stored away for archival and used years down the road. This is true autonomy: not beholden to servers or services, but not beholden to a stable network either.

Here’s how a Gordian Club might look in those real-life examples of modern-day authoritarianism:

Journalism: A journalist stores a list of sources and their information in a Gordian Club. One permit allows him to open it with his private key. He also sends the Club and SSKR shares to the five board members for his newspaper. Any three of them together can open it, which they might need to do in the case of a lawsuit over an article. The journalist can later issue new Editions of the Club when he updates his information cache or when the members of the board change. The information is encrypted, which means it’s protected even in the case of an illegal seizure. Freedom of press has become a mathematical right: the government would have to coerce either the journalist or multiple board members to access it.
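The three-of-five board access in this example rests on threshold secret sharing. Blockchain Commons uses SSKR for this; the sketch below is a generic Shamir secret sharing implementation over a prime field, shown only to illustrate the underlying 3-of-5 idea, not SSKR itself.

```python
# Generic Shamir secret sharing sketch (not SSKR): any `threshold` of the
# issued shares reconstruct the secret; fewer reveal nothing.

import secrets

PRIME = 2**127 - 1  # Mersenne prime; the secret must be smaller than this

def split(secret: int, threshold: int, shares: int) -> list:
    """Issue `shares` points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points: list) -> int:
    """Recover the secret by Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```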

Protest: A protest group passes around a Gordian Club that contains information on upcoming protests. Updated Editions are published whenever new protests are planned, which can be done by agreement of a FROST quorum of protest organizers. Alongside its data, the Gordian Club contains a list of allowed readers, which was determined by the FROST quorum of organizers. However, any reader can also delegate read permissions to another reader. These delegated permissions only remain valid for that Edition of the Gordian Club; if there was a compromise due to delegation, it wouldn’t extend to future Editions.

Credentials: An immigrant stores their credentials as a Gordian Club, which they send to a human rights organization in the country they are immigrating to. It is locked with the organization’s public key and with a stretched password, which corresponds to a line from the immigrant’s favorite song, which is long enough to be largely unbreakable, yet memorable to the immigrant. If the immigrant is seized before they leave their country, the border patrol can only go on what the emigrant self-reports. Even if the border patrol learns who the immigrant is, they can’t block their credentials, because they’re all self-sovereign, without phone-home requirements, and stored in that Gordian Club. Alternatively, if the immigrant reaches a safe haven, the human rights organization will provide the Gordian Club; either they can unlock it with their key or the immigrant can do so with their own password.

There are many other use cases that go beyond the increasing authoritarianism of modern countries. These include use cases where the internet is not available or where a longer timeframe is required.

Emergencies: A category 5 hurricane devastates the Eastern Seaboard. The internet is largely down, though cell phone access remains available for emergency use. An unencrypted Gordian Club is created with all the emergency resource information and passed from user to user via messaging. Signatures verify its authenticity, even as Editions are updated, which helps users to steer away from the scammers that inevitably come out during these times of tragedy.

Archival: The patriarch of a family writes his last will and testament in a Gordian Club, accessible by his private key, by his lawyer’s private key, or by three out of five shares given to his heirs. The information in it stays private until his passing, with the quantum-resistant cryptography available in Gordian Envelope ensuring privacy until that sad date. But afterward, it’s easily accessible by heirs or the lawyer. The provenance marks clearly note which version of the will is the newest.

Five Principles For Autonomy

Gordian Clubs make ACOs concrete by following five principles that I’ve developed for autonomous systems.

- Operate Without External Dependencies. Everything you need is within the Gordian Club: data and permissions truly operate autonomously.
- Encode Rules in Mathematics, Not Policy. Permits are accessed through mathematical (cryptographic) constructs such as private keys or secret shares.
- Make Constraints Load-Bearing. Members can’t be removed from a Gordian Club Edition, but that also means permissions can’t be unilaterally revoked. Gordian Clubs don’t have live interactivity, but that means they can’t be censored by a network.
- Preserve Exit Through Portability. An ACO that can be freely passed around without network infrastructure is the definition of portability.
- Work Offline and Across Time. Gordian Clubs are meant to be used offline; archival is a major use case, allowing access across a large span of time.

I’ll have more on these principles, how I derived them, and what the Exodus Protocol Pattern is in a future Musings.

Credit Where Credit is Due

Gordian Clubs were inspired by the Clubs feature of Project Xanadu, which was the world’s first hypertext project: it could have been a world wide web of tightly interconnected information before there was a World Wide Web.

Project Xanadu was built around “Clubs,” which could be individuals or organizations and which could be recursively created: a Club (or individual) could be a member of a Club (or individual) … etc. Each Club could have read or write permissions to itself or to other clubs, and those rights could also be passed down through a hierarchy.

The problem with Clubs was that they required centralized administration. When I became peripherally involved with Project Xanadu in the early ’90s, I suggested the use of cryptography to turn that human-based administration into math-based administration. But cryptography wasn’t up to the requirements at the time.

Now it is, thanks in large part to the release and development of Schnorr signatures, allowing for the creation of Gordian Clubs.

Gordian Clubs Are a Reality!

None of this is just a theory. I have a working library and CLI for Gordian Clubs. A demo is available on YouTube:

There’s also a full log of the demo, which you can use to follow along, using the clubs-cli-rust app.

Take a look, but more importantly let me know: how will you use Gordian Clubs? What use cases will be served by ACOs? And are Gordian Clubs the right answer, or do you need something more (or less)? I’d love to get your feedback as we continue work on this crucial new technology.

For more on Gordian Clubs, take a look at our developer pages:

- Gordian Clubs Overview
- The Power of Autonomy
- Gordian Technology
- ocaps and Delegation
- Project Xanadu History

Also see the “Beyond Bitcoin” presentation from TABConf 7:

“Beyond Bitcoin”

Tuesday, 07. October 2025

OwnYourData

Digital Product Passports Towards More Sustainable Futures

Digital product passports (DPPs) have gained attention through the EU’s Circular Economy Action Plan and the Ecodesign for Sustainable Products Regulation (ESPR). They enable capturing contextual data throughout the product value chain, such as environmental impact, material composition, and production history. DPPs are seen as critical components for creating circular economies, especially in light of the European Green Deal to align industry with climate targets. As DPPs are already becoming mandatory for many products, their social, technical, environmental, and economic implications have to be considered from the start through interdisciplinary conversations and collaborations. 

MyData Global Conference was the perfect opportunity to have insightful discussions as we explored how DPPs can contribute to more sustainable futures. In our presentation, we also introduced the Promoting Accelerated Circular Economy through Digital Product Passports (PACE-DPP) project as an applied case study to explore the potential for DPPs.

So, what exactly is a Digital Product Passport?

A Digital Product Passport is a product-specific dataset accessible via a digital carrier. It enables businesses, regulators, and consumers to access key information—such as material composition, environmental impact, production history, and recyclability. 

They enable capturing contextual data throughout the product value chain, such as environmental impact, material composition,  production history, repair activities, and recycling capabilities. There is also exciting potential to capture more information, going beyond just those related to sustainability.

By allowing information throughout a product’s lifecycle to be accessible and sharable, they can enable transparency, empower smarter choices, support regulatory compliance, and foster trust across the supply chain. So, DPPs are expected to play a key role in facilitating innovative approaches by not only enabling the exchange of information but also driving new business value and increased efficiencies across the supply chain. 
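As a concrete illustration, a DPP dataset of the kind described might look like the following sketch. The field names and values are invented for demonstration and do not follow a ratified ESPR schema.

```python
# Illustrative DPP record for a piece of furniture. All field names and
# values are invented; real DPPs will follow ESPR delegated acts and
# sector-specific standards.
dpp_record = {
    "product_id": "urn:example:chair-0001",
    "material_composition": [
        {"material": "oak", "share_pct": 70},
        {"material": "steel", "share_pct": 30},
    ],
    "environmental_impact": {"co2e_kg": 12.4},
    "production_history": [
        {"step": "sawmill", "location": "AT", "date": "2025-03-02"},
        {"step": "assembly", "location": "DE", "date": "2025-04-10"},
    ],
    "recyclability": {"recyclable": True,
                      "instructions_url": "https://example.com/recycle"},
}
```

Different stakeholders would see different slices of such a record: a recycler might only need `material_composition` and `recyclability`, while a regulator might need the full production history.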

Main Aspects of DPPs 

The European Commission sets out some requirements for Digital Product Passports, including: 

- User friendliness: Access to digital product passports should be free and easy for all stakeholders.
- Tailored information: Differentiated access, with different types of information available (for example, customers, lawmakers, and suppliers will look for and care about different information).
- Accuracy: The data provided needs to be accurate, complete, and up to date.
- Interoperable: DPPs must be interoperable to facilitate the sharing and utilisation of data generated by various actors.
- Adaptable: Any further data other than obligatory data can be added and stored.

Promoting Accelerated Circular Economy through Digital Product Passports Project – Wood Use Case

In our presentation, we offered a case study to communicate what this all means in practice. The PACE-DPP project is already piloting DPPs in the wood industry—enhancing traceability from forest to mill gate. With this use case, we demonstrated how DPPs can streamline operations, reduce waste, and promote sustainable sourcing.

By integrating circular economy principles into value chains and conducting thorough analyses of supply chains within the wood/paper and electronic device sectors, the project aims to enhance processes with circular flows.

Benefits of Digital Product Passports

During our session, we discussed the diverse benefits that Digital Product Passports (DPPs) can bring across different domains. The following points summarize key takeaways from our presentation as well as valuable inputs and reflections shared by participants during the discussion.

Business

- Streamlined processes from partners to suppliers, as businesses will need to understand their own processes and their partners’ before even thinking about DPPs. (For example, a furniture business will need to understand its fabric and wood suppliers’ processes to include them in its furniture’s DPPs.)
- Holistic analysis of the business supply chain to provide more accurate information in DPPs.
- Increased efficiency and lower costs through streamlined data flows and business operations.
- Potential and incentives for multi-stakeholder collaboration, enabling new cooperative business models.
- A competitive advantage for businesses that go beyond the minimum requirements and include information about new areas such as brand authenticity, ethical standards, and social impact.
- A critical role, especially for big players, in promoting and leading sector-wide best practices (for example, Ikea can bring in new standards through its DPP implementation to inspire change).

Socially

- Empowered consumers and greater transparency, as DPPs can act as accountability mechanisms.
- Fairer and more equitable digital societies, by highlighting issues around product life cycles from raw materials to retail.
- Broader civic engagement and increased activism, such as awareness campaigns around greenwashing backed by data provided through DPPs.

Legally

- Legal requirements are the biggest motivation for DPPs to achieve their full potential.
- Regulatory enforcement with teeth enables DPPs to fulfil their purpose and achieve their full potential.
- Connected thinking around regulatory compliance. For example, a business in the wood industry will need to comply with forestation regulations, fulfil DPP requirements, consider GDPR, etc.

How to get there?

To fully realise their potential, we have to take human-centric approaches to DPPs, from their design through implementation and use. We wrapped up our presentation by revisiting five core principles we need to keep in mind if we want to reap these benefits:

- Transparency: Ensuring information siloes are broken to enable data flow between actors.
- Human-Centric Approach: Ensuring that the information that reaches consumers and citizens more broadly is accurate, understandable, and relevant. This is an exciting area to explore further to understand what this will mean for digital product passports in practice.
- Accountability: Ensuring there are robust regulations with teeth to prevent inaccurate claims such as greenwashing, and that the regulations are not watered down as they are rolled out and implemented.
- Trust: Between partners, manufacturers, suppliers, etc., as well as amongst citizens, to ensure sustainable products become the norm in the EU and we work towards truly circular economies.
- Equality: A holistic approach needs to include equality as it relates to sustainability and circular economies. DPPs have great potential to highlight the interconnected nature of growth on a global scale and environmental boundaries.

DPPs could be key to enabling circular economy and carbon reduction strategies, including those for new markets and business models, and also to social compliance reporting.  As we move toward a greener future, Digital Product Passports offer a powerful way to connect sustainability with digital innovation—making every product part of the solution.

The PACE-DPP project received financial contributions from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), supported by the Austrian Research Promotion Agency (FFG funded project #917177), as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK), supported by the German Research Promotion Agency (DLR-PT).

 

The post Digital Product Passports Towards More Sustainable Futures appeared first on www.ownyourdata.eu.


MyData

Putting Patients First: How Dokport is Shaping the Future of Digital Healthcare

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. At Dokport, our journey […]

The Engine Room

[CLOSED] Join our team! We’re looking for an Associate for Resilient Tech!


Who we are The Engine Room (TER) is a nonprofit organization with a distributed global team of experienced and committed activists, researchers, technologists and community organizers. Our vision is for social justice movements to use technology and data in safe, responsible and strategic ways, while actively mitigating the vulnerabilities created by digital systems. Since 2011, […]

The post [CLOSED] Join our team! We’re looking for an Associate for Resilient Tech! appeared first on The Engine Room.

Friday, 03. October 2025

FIDO Alliance

Biometric Update: Germany pushes passkey adoption, releases draft technical guidelines


Germany’s Federal Office for Information Security (BSI) is asking for public comment on a draft document that outlines technical considerations for configuring passkey servers.

The draft was published on September 30 and seeks input from relevant stakeholders, the BSI said in a news release.

The BSI TR-03188 Passkey Server guidelines are available as a draft in version 0.9, the BSI says. It was drafted within the scope of FIDO2 and WebAuthn standards, among others.


Biometric Update: Yubico finds passkeys awareness still lacking in global survey


There is a persistent disconnect between perceived cybersecurity and actual vulnerability. That’s the key finding from Yubico’s 2025 Global State of Authentication Survey. The findings indicate a world still reliant on outdated authentication practices, highlighting the need to align personal and workplace cyber hygiene.


PC Mag: Ditch Your Passwords: Why Passkeys Are the Future of Online Security


Passkeys are revolutionizing the way we secure our online accounts, with the potential to eliminate passwords altogether. We explain why they offer stronger protection for your digital life and how you can start using them.

There’s a reason everyone is working on a way to replace passwords. They’re often easy to guess, hard to remember, and changing them after every data breach is a pain, even if you do have a password manager. Thankfully, the Fast Identity Online (FIDO) Alliance developed passkeys, a new authentication technology that eliminates the need to enter your email address or a password into login fields around the web, and they’re gaining popularity. For example, Microsoft deleted passwords from its authenticator app in August, but left in support for passkeys.


IT Brief: Help desks emerge as cybersecurity weak spot amid rising attacks 


Bojan Simic, Chief Executive of HYPR and a FIDO Alliance board member, warns that IT help desks are increasingly targeted by attackers using social engineering tactics. These tactics often involve leveraging stressful scenarios, such as an executive locked out of their account just before boarding a flight, to pressure help desk agents into bypassing or overlooking security protocols. “The help desk shouldn’t be the weakest link; it should be the first line of defence. That means moving beyond guesswork and adopting identity verification that confirms who someone is, versus what they know or the device they’re using. With phishing-resistant, standards-based verification built into support workflows, agents stop being human lie detectors and start being defenders,” said Simic. 


DIF Blog

DIF Newsletter #54


October 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Steering Elections are coming up soon!

We sent out an explainer last week with all the details, but the most urgent reminder is that, until the 9th, you can:

Nominate someone else you think would make a great steering committee member (we will reach out to them), self-nominate, and/or submit questions you'd like all candidates to answer.

Keep an eye out for the answers to those questions from the final slate of candidates on 16 Sept, and feel free to use the #sc-elections channel on Slack to discuss.

DIF Labs Beta Cohort 2 Concludes with Successful Show & Tell

DIF Labs Beta Cohort 2 concluded with a successful Show & Tell event on September 24, 2025, showcasing three months of development across three innovative projects.

The Anonymous Multi-Signature Verifiable Credentials (ZKMPA) project built a protocol for m-of-n multi-party credential approval while preserving signer anonymity. Using Semaphore with cryptographic membership proofs and nullifiers, the team achieved W3C VCDM 2.0 compatibility and demonstrated how DAOs can issue credentials with privacy-preserving governance.
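The project's actual protocol uses Semaphore zero-knowledge proofs; as a loose illustration of the nullifier idea only, the sketch below (hypothetical names, hash-based, no zero-knowledge) shows how a deterministic per-member, per-proposal tag lets a group count distinct approvers and discard replays without the tag itself naming the member. A real deployment would replace the hash with a Semaphore membership proof.

```python
import hashlib

def nullifier(member_secret: bytes, proposal_id: str) -> str:
    """Deterministic per-(member, proposal) tag: the same member approving
    twice produces the same tag, without the tag revealing who they are."""
    return hashlib.sha256(member_secret + proposal_id.encode()).hexdigest()

def count_approvals(nullifiers: list[str]) -> int:
    """Each distinct nullifier counts once; replayed approvals collapse away."""
    return len(set(nullifiers))

def threshold_met(nullifiers: list[str], m: int) -> bool:
    """m-of-n rule: act only once m distinct members have approved."""
    return count_approvals(nullifiers) >= m

secrets = [b"alice-secret", b"bob-secret", b"carol-secret"]
seen = [nullifier(s, "proposal-7") for s in secrets[:2]]
seen.append(nullifier(secrets[0], "proposal-7"))  # one member tries to approve twice
print(threshold_met(seen, m=2))  # → True: two distinct approvers, replay ignored
print(threshold_met(seen, m=3))  # → False
```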

The Privacy-Preserving Revocation Mechanisms project delivered the first comprehensive comparative study of revocation strategies for W3C Verifiable Credentials, analyzing status lists, accumulators, zk-SNARK proofs, and short-term credentials. The team created a detailed taxonomy and reference implementation benchmarking costs for issuers, holders, and verifiers, with collaboration from the Ethereum Foundation on Merkle-tree approaches.
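Of the strategies compared, status lists are the simplest to picture. A minimal sketch, assuming a Bitstring Status List-style encoding (a gzip-compressed, base64url-encoded bit array, most-significant bit first, with each credential assigned an index); real issuers publish this inside a signed credential rather than as a bare string:

```python
import base64
import gzip

def make_status_list(size_bits: int, revoked: set[int]) -> str:
    """Issuer side: pack a bit array (1 = revoked) and publish it compressed."""
    buf = bytearray((size_bits + 7) // 8)
    for idx in revoked:
        buf[idx // 8] |= 0x80 >> (idx % 8)  # MSB-first bit order
    return base64.urlsafe_b64encode(gzip.compress(bytes(buf))).decode()

def is_revoked(encoded_list: str, index: int) -> bool:
    """Verifier side: decode the list and test the credential's bit."""
    bits = gzip.decompress(base64.urlsafe_b64decode(encoded_list))
    return bool(bits[index // 8] & (0x80 >> (index % 8)))

status_list = make_status_list(131072, revoked={3, 42})
print(is_revoked(status_list, 42))  # → True
print(is_revoked(status_list, 7))   # → False
```

The privacy trade-off the study examines is visible even here: the verifier fetches the whole list, so the issuer never learns which index was checked, but herd privacy depends on the list being large.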

Legally-Binding Proof of Personhood via QES (QVC) bridged W3C Verifiable Credentials with Qualified Electronic Signatures under EU eIDAS regulation, bringing legally non-repudiable identity to decentralized credentials. The project explored pseudonymous QES flows and ETSI standards compliance for use cases including contracts, academic credentials, healthcare, and financial agreements.

All three projects presented working demonstrations to global participants from Korea, Japan, Europe, and the United States. The community provided structured feedback using the Roses/Buds/Thorns framework, and projects will continue development as open-source implementations. Visit the DIF Labs blog for complete details and event recording.

Trusted Agents Working Group Launches

DIF has launched a new Trusted Agents Working Group to address the emerging challenges of AI agent identity, authentication, and trust. As AI systems gain increasing autonomy and operate across organizational boundaries, the need for robust identity infrastructure becomes critical.

Co-chaired by Andor Kesselman, Nicola Gallo and Dmitri Zagidulin, the working group will develop standards and frameworks ensuring AI agents can maintain verifiable identities, establish trust relationships, and operate with appropriate human oversight. According to Kesselman, "The Trusted AI Agents Working Group focuses on defining an opinionated, interoperable stack to enable trustworthy, privacy-preserving, and secure AI agents."

The inaugural meeting focused on brainstorming use cases and shaping initial focus areas, with the first work item addressing Agentic Authority Use Cases. Read more in Kesselman's LinkedIn post and learn how to get involved here.

Credential Schema Specification 1.0 Released

The Claims & Credentials Working Group, co-chaired by Otto Mora and Valerio Massimo Camaiani, has released Credential Schema Specification 1.0. This specification provides standardized schemas for basic identity credentials, defining the fields required to identify an individual for KYC purposes and other foundational use cases. The 1.0 release ensures schemas are interoperable, extensible, and aligned with existing standards including W3C Verifiable Credentials, OIDC, and schema.org, and includes comprehensive documentation, reference implementations, and guidance for schema developers.

DIF Represented at UNGA Identity Panel

DIF members Matt McKinney and Nicola Gallo spoke on a hands-on panel about digital public infrastructure, trust, and identity two weeks ago, raising awareness and bringing the good word (and up-to-date architectural thinking) to specialists and builders closer to government infrastructure deployments. See last week's blog post for a more detailed read-out.

🛠️ Working Group Updates

Browse our working groups here

Creator Assertions Working Group

The Creator Assertions Working Group continues advancing work on content provenance and authenticity assertions for digital media. Recent discussions have focused on integrating creator assertions with broader verifiable credential frameworks, exploring how content creators can make cryptographically verifiable claims about their work. The group is examining metadata standards that support various content types while maintaining flexibility for emerging use cases. Work continues on alignment with the C2PA ecosystem and development of assertion types that can accommodate both individual creators and organizational content production workflows.

👉 Learn more and get involved

DID Methods Working Group

The DID Methods Working Group focused on updating evaluation criteria for DIF-recommended DID methods. The group refined its assessment framework to emphasize multiple independent implementations, demonstrated production deployments, and clear compliance with core DID traits. Discussions addressed balancing objective technical criteria with expert evaluation to ensure recommendations reflect both standards compliance and practical viability. The group continues work on the proposed W3C DID Methods Working Group charter, addressing community feedback about scope and the role of blockchain-based methods in standardization efforts.

👉 Learn more and get involved

Identifiers and Discovery Working Group

Multiple work streams advanced within the Identifiers and Discovery Working Group. Following the completion of the v1 did:webvh spec, implementations are demonstrating successful interoperability through comprehensive test suites. Performance analysis shows efficient handling of DID document updates even in high-frequency scenarios. The DID Traits team finalized specifications for their 1.0 release, with particular focus on traits related to key validation capabilities and long-term identifier availability. The group explored applications in software supply chain security contexts and examined how DID traits align with emerging regulations including the EU Cyber Resilience Act.
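did:webvh builds on the did:web-style identifier-to-URL mapping (adding a self-certifying identifier and a verifiable did.jsonl version log rather than a single did.json). As background, a minimal sketch of the underlying did:web read transformation, per that method's specification:

```python
import urllib.parse

def did_web_to_url(did: str) -> str:
    """did:web read operation: map the identifier to the HTTPS location of did.json.
    Colons in the method-specific id become path separators; a bare domain
    defaults to /.well-known; %3A percent-encoding allows an explicit port."""
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    parts = [urllib.parse.unquote(p) for p in did[len(prefix):].split(":")]
    host, path = parts[0], parts[1:] or [".well-known"]
    return "https://" + "/".join([host] + path + ["did.json"])

print(did_web_to_url("did:web:example.com"))
# → https://example.com/.well-known/did.json
print(did_web_to_url("did:web:example.com:dids:alice"))
# → https://example.com/dids/alice/did.json
```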

👉 Learn more and get involved

🪪 Claims & Credentials Working Group

Following the successful release of version 1.0 specification, the Claims & Credentials Working Group launched a community schemas initiative. This program creates frameworks for organizations to contribute verifiable credential schemas to a shared repository, with pathways for community review and potential standardization. Recent work includes extending schemas for banking KYC requirements, with particular attention to international postal address formats. The team refined terminology around personhood verification credentials and established processes for synchronizing schemas across multiple repositories. Future development priorities include employment credentials and anti-money laundering certification schemas.

👉 Learn more and get involved

Applied Crypto Working Group

The Applied Crypto Working Group made substantial progress on BBS+ signature schemes and privacy-preserving cryptographic primitives. Key developments include refinements to pseudonym generation approaches, with the team evaluating polynomial methods and their security properties against adversarial scenarios. Discussions addressed post-quantum security considerations and their implications for long-term privacy guarantees in credential systems. The group continues coordination with IETF standardization efforts and is preparing updated test vectors for upcoming draft releases. Members are exploring implementation approaches in both Rust and C++, weighing trade-offs in performance, security, and ecosystem compatibility.

👉 Learn more and get involved

DIF Labs Working Group

With Beta Cohort 2 successfully concluded, the DIF Labs Working Group is evaluating the program structure and considering timing for future cohorts. The group continues providing support to Beta Cohort 2 projects as they transition to ongoing open-source development. Discussions have focused on lessons learned from the cohort model, including the effectiveness of mentorship structures, project scoping approaches, and mechanisms for ensuring long-term project sustainability. The Labs team is also exploring opportunities to showcase project outcomes at industry conferences and standards bodies.

👉 Learn more and get involved

DIDComm Working Group

The DIDComm Working Group advanced work on binary encoding support through CBOR implementation. The team evaluated architectural approaches for supporting multiple encoding formats, considering whether to introduce binary encoding as an optional feature in version 2.2 or as the default in a future major release. Technical discussions addressed message encoding detection, MIME type handling for different encoding schemes, and backward compatibility with existing implementations. The group also explored DIDComm's role in AI agent-to-agent communications, examining how the protocol can support secure, privacy-preserving interactions between autonomous systems.

👉 Learn more and get involved

Hospitality & Travel Working Group

The Hospitality & Travel Working Group made substantial progress on the HAT Pro (Hospitality and Travel Profile) specification. The team developed comprehensive schemas for food preferences, dietary restrictions, and accessibility requirements using graph-based models that eliminate data duplication and improve cross-referencing capabilities. Recent work includes creating UML models and JSON schemas for complex preference structures that can adapt to varied travel contexts. The group is exploring AI-assisted data input mechanisms to simplify the user experience while maintaining data accuracy. Subject matter experts from multiple travel sectors have joined the working group, bringing valuable domain expertise to schema development.

👉 Learn more and get involved

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please join DIF.

🌎 DIF Special Interest Group Updates

Browse our special interest groups here


DIF Hospitality & Travel SIG

The Hospitality & Travel SIG continued its evolution alongside the working group, focusing on broader ecosystem considerations and real-world implementation challenges. Recent sessions examined the intersection of decentralized identity with emerging AI capabilities in travel, including personalized itinerary generation, automated booking agents, and AI-powered concierge services. The group discussed how traveler-controlled credentials can enable these AI systems while maintaining privacy and user control. Participants also explored challenges in achieving industry-wide adoption of new credential standards, including the need for demonstration projects that showcase tangible benefits to both travelers and service providers.

👉 Learn more and get involved

DIF China SIG

👉 Learn more and get involved

APAC/ASEAN Discussion Group

The APAC/ASEAN group hosted discussions on regulatory developments affecting decentralized identity across the Asia-Pacific region. Key topics included alignment between national digital identity initiatives and decentralized identity standards, with particular attention to interoperability requirements for cross-border transactions. The group examined recent policy changes in Australia, Singapore, and Japan, identifying common themes around privacy protection, user control, and the role of government-issued credentials within broader digital identity ecosystems. Participants discussed strategies for engaging with regulators to ensure decentralized identity approaches are considered in policy development.

👉 Learn more and get involved

DIF Africa SIG

The Africa SIG continues its focus on practical implementations of decentralized identity across the continent. Recent discussions have examined mobile-first approaches to credential management, recognizing that smartphone adoption patterns in Africa differ from other regions. The group explored solutions for users with feature phones or limited connectivity, including offline verification capabilities and SMS-based fallback mechanisms. Participants shared insights on regulatory environments across different African nations and opportunities for harmonization of digital identity frameworks at the regional level.

👉 Learn more and get involved

DIF Japan SIG

The Japan SIG explored technical approaches to AI agent authentication using decentralized identifiers. Discussions covered the unique requirements for identifying autonomous systems, including mechanisms for establishing trust chains between AI agents and their human operators or organizational sponsors. The group examined use cases spanning automated trading systems, customer service agents, and collaborative AI workflows. Participants considered how existing DID methods can be adapted for AI agent use cases and whether new DID methods might be warranted. The group is planning offline events to deepen community engagement and facilitate face-to-face technical discussions.

👉 Learn more and get involved

DIF Korea SIG

👉 Learn more and get involved

📖 DIF User Group Updates
DIDComm User Group

The DIDComm User Group explored practical implementations of the protocol in production environments. Members shared experiences with mediator deployments, discussing scalability considerations and reliability patterns for always-available message routing. The group examined integration approaches with emerging AI communication frameworks, identifying similarities between DIDComm's secure messaging patterns and requirements for AI agent interactions. Discussions also covered developer experience improvements, including debugging tools, testing frameworks, and documentation enhancements that can lower barriers to DIDComm adoption.

👉 Learn more and get involved

📢 Announcements at DIF

Executive Director Applications Still Open

DIF is accepting applications for the Executive Director position as the current term comes to a close. This is an opportunity to shape DIF's strategic direction and lead the organization through its next phase of growth. Application details are available in the job description, with questions welcomed at jobs@identity.foundation.

Decentralized Trust Graph Working Group Launches

The Linux Foundation Decentralized Trust (LFDT), Trust over IP, and Decentralized Identity Foundation have launched a new Decentralized Trust Graph Working Group, providing a venue for developing standards around trust networks and reputation systems in decentralized environments. This working group complements DIF's ongoing work by addressing graph-based approaches to modeling trust relationships. DIF members interested in participating can find details about joining in the ToIP community calendar. This collaboration demonstrates the growing ecosystem of organizations working on complementary aspects of decentralized identity and trust infrastructure.

Discount Codes Available for IIW and Agentic Internet Workshop

DIF members can access special discount codes for two upcoming events:

Internet Identity Workshop (IIW) XLI: Use code DIF_XLI_20 for 20% off registration at this link

Agentic Internet Workshop: Use code AIW_DIF_10 for 10% off registration at this link

Explore the DIF Events Calendar for a complete listing of upcoming conferences, workshops, and community gatherings where DIF members will be participating.

🗓️ Community Events

Internet Identity Workshop XLI
The semi-annual gathering of the identity community returns, offering unconference-style sessions where participants drive the agenda. IIW continues to be a critical venue for discussing emerging challenges, sharing implementation experiences, and building consensus around identity standards.

Use code DIF_XLI_20 for 20% off registration at this link

Agentic Internet Workshop
The Agentic Internet Workshop takes place immediately following IIW, providing an opportunity to explore how decentralized identity standards can provide the foundation for AI agent interactions and trust. This new workshop addresses the intersection of AI agents and internet infrastructure, with decentralized identity as a key enabling technology. Sessions will explore authentication mechanisms for AI agents, human oversight frameworks, and trust models for agent-to-agent interactions.

Use code AIW_DIF_10 for 10% off registration at this link

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube


Read the DIF blog

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation.

Thursday, 02. October 2025

DIF Blog

Why your content needs an ingredient list

Series: Building AI Trust at Scale
Part 3 · By DIF Ambassador Misha Deville
View all parts →

In the 1990s, when mass food production introduced hundreds of novel ingredients and industrial processes, people lost significant visibility into what they were eating. Nutrition labels emerged as a solution for transparency and to help consumers make informed choices on their consumption.

Today, digital content creation is experiencing its own industrial revolution. Canva logged 16 billion AI-powered feature uses last year 1, Midjourney surpassed 21 million users by March 2025 2, and 71% of businesses and 83% of creators report using AI tools in their content workflows 3. This shift has introduced new intermediaries and processes that make it harder to trace digital content origins. Without transparency, it is nearly impossible for artists and platforms to maintain creative control and proper attribution, or for audiences to connect to increasingly intermediated and automated creators.

The music industry, for one, is facing a tipping point of copyright infringement from unauthorised training models, artificially inflated play counts, and fake AI tracks swamping streaming platforms 4. The likes of Universal Music Group, Warner, and Sony are pushing for the technology to serve artists and enhance their creativity instead of replacing them. As Jeremy Uzan of Universal Music Group says, "AI can be used to assist the artist" as they pursue an artist-centric approach: for example, using AI to translate Brenda Lee's iconic 'Rockin' Around the Christmas Tree' into Spanish, or to enhance archival Beatles audio, demonstrating how human creative input combined with AI assistance can work commercially and legally. However, the current AI tools intended to foster new creative opportunities often lack the granular attribution controls and auditability needed to create the level of supply-chain transparency that creators need.

Like nutrition labels before them, the Content Authenticity Initiative and the Creator Assertions Working Group (CAWG) are trying to restore the transparency that mass automation has obscured. Such efforts, deployed at scale, would give audiences the information they need to make informed choices about the media they consume, without dictating their decisions.

The Foundational Infrastructure

The C2PA Content Credentials Specification 5 acts as a foundation for this media transparency. It cryptographically binds an origin 'label' to a digital asset, recording how the asset was created. The CAWG 6 builds on C2PA with a framework for attaching the 'who' and 'why' to an asset as 'Content Credentials'. As Scott Perry, Co-Chair of CAWG and Conformance Program Administrator of C2PA, puts it, CAWG metadata brings "human claims to digital media".
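At its core, that binding is a hash of the asset's bytes inside a signed claim. The sketch below is a deliberately simplified illustration of the idea, with a hypothetical structure and a demo HMAC key; real Content Credentials are signed CBOR manifests embedded in JUMBF boxes, signed with X.509 certificate chains rather than a shared key.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stands in for the claim generator's real signing key

def make_manifest(asset: bytes, creator: str) -> dict:
    """Bind an origin 'label' to the asset: hash its bytes, then sign the claim."""
    claim = f"{creator}:{hashlib.sha256(asset).hexdigest()}"
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(asset: bytes, manifest: dict) -> bool:
    """Any edit to the asset changes its hash and breaks the binding."""
    creator = manifest["claim"].split(":", 1)[0]
    expected = f"{creator}:{hashlib.sha256(asset).hexdigest()}"
    good_sig = hmac.new(SIGNING_KEY, expected.encode(), hashlib.sha256).hexdigest()
    return expected == manifest["claim"] and hmac.compare_digest(manifest["signature"], good_sig)

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(photo, creator="example-creator")
print(verify(photo, manifest))            # → True
print(verify(photo + b"edit", manifest))  # → False: the edit breaks the binding
```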

What this looks like in the real world is Google’s Pixel 10 now shipping with C2PA conformance built in, YouTube tagging select videos as “captured with a real camera”, and LinkedIn marking images with a “Cr” symbol to show when they carry Content Credentials. These tags relay information like whether AI was used to generate or edit a part of the content, the entity that created the Content Credential, and when the credential was created. But, as Eric Scouten, Co-Chair of CAWG and Identity Standards Architect at Adobe, stresses, “one of the biggest misconceptions about [CAWG] is that it is something like Snopes or Politifact, out to say what’s true and what’s not, and that’s not the case.” CAWG does not arbitrate truth. It does not fact-check or attach political judgments. Instead, it provides signals about who created a piece of content, when, how, and in what context. The decision to ‘believe’ in the content remains with the viewer.

When FKA Twigs created her own AI clone for fan interactions, she demonstrated the difference between artist-controlled AI use and unauthorised exploitation 7. Once people knew that she stood behind her AI clone, the work felt legitimate and trust flowed from the person to the work. With provenance infrastructure, fans could verify which AI interactions were officially sanctioned by Twigs herself versus the unauthorised. Not because Content Credentials determine what’s ‘real,’ but because they can provide verifiable provenance information about the creation of digital assets, including an authorisation trail from creation to consumption.

The Challenges with Creator Identities

The FKA Twigs example hints at a much larger challenge ahead. Apps like Character AI already have ~9 million users per day, and as AI clones, virtual personas, and agentic creators proliferate, the range of online ‘identifiers’ becomes significantly more complex.

Even today, navigating the multiple digital identifiers of creators, from professional personas to social media handles and artist pseudonyms, is a fragmented journey. The plurality of creator identities creates a fundamental mismatch with existing identity verification systems. Large organisations like newsrooms and major labels rely on X.509 certificates and PKI systems that fit enterprise workflows and secure their own supply chains, for example by disincentivising pre-release leaks. But individual creators don't operate in that world. For them, identity lives in social handles and personal websites, where identifiers are informal and often platform-bound.

CAWG’s framework bridges this gap by accepting both ends of the spectrum. Their Identity Claims Aggregator mechanism verifies the disparate identity signals through trusted third parties, and issues a single verifiable credential that binds the creator’s chosen identity to their content. This gives creators a direct, human voice in the history of their work, rather than only recording what the device or app has logged in the process. As Eric explains, “the point of the identity assertion is that it is a framework that allows a lot of different things to be plugged into it.” The design is deliberately credential-agnostic, giving creators the flexibility to bring their own chosen identity signals. Future versions of the CAWG identity framework will likely add support for generic W3C Verifiable Credentials and self-controlled credentials such as those being developed by the First Person Project.

Major labels and organisations like the Universal Music Group and the Recording Industry Association of America are already exploring the use of ISNI (International Standard Name Identifier) for artist identities. In practice, this allows labels and managers to attach industry-recognised identifiers to digital assets that protect an artist’s image and likeness in their content. But this approach still has its challenges. For one, ISNI faces the perennial challenge of universal standards adoption. As with most industries, there is no single identifier used for creators today that is publicly resolvable. Scott takes a pragmatic approach to the universal identifier problem, “each industry should publish its best practices alongside the normative standard - i.e. saying this is the state of play in music right now. This is the best you can do, do it this way, we're working on it. Then as that evolves, as it updates, you have one place to go for anyone who wants to know how to identify music.”

CAWG’s strength lies in anticipating this plurality of identity and evolution. The framework is designed to incorporate new credential types as they emerge, from today’s ISNI and social accounts to tomorrow’s W3C Verifiable Credentials and even agentic identity systems.

This adaptability is particularly critical for media industries because digital content can be discovered and consumed decades after creation. Provenance data needs to persist across the content’s entire lifecycle, requiring what Eric calls “archival-quality identity”. Unlike transactional systems that only need authorisation at the point of use, such as purchasing an item online, media attribution can become more valuable as content gains cultural significance or commercial success. Sample clearances, royalty disputes, and copyright claims can arise years later, demanding granular, persistent attribution records that today’s identity token-based models like OAuth don’t provide.

As Eric explains, “if I produce a piece of content today, and you happen to find it in 2030 or 2040, I would like you to be able to understand that it was me that produced that, and to have confidence that you correctly attribute it to me. But that sort of lasting, archival quality identity, is shaky. I think the AI systems are especially shaky on that front.”

But what if every track could carry its creative history? A kind of musical DNA that travels with the content, recording not just what was made, but who made it, how, and under what authority.

The Complexity of Agentic Identity

This type of content DNA becomes essential with agentic AI systems. Unlike generative AI tools that simply transform input to output, agents pursue goals over time, coordinating multiple tools and delegating to other agents. When a music producer delegates post-production to an AI agent that then assigns harmonisation to one agent and mastering to another, the non-deterministic nature means every delegation, agent version, and training input must be recorded in case of future disputes.

This creates a fundamental distinction in attribution requirements. Tools are deterministic, their provenance handled by C2PA which can reliably attest to what happened inside a capture device or editing suite. Agents are non-deterministic, making autonomous choices and passing work along delegation chains. CAWG addresses this by developing persistent, verifiable identifiers that survive across delegations and enable authorisation chains to be traced.

In media industries, the complexity extends beyond identity into rights management and remuneration. The JPEG Trust Initiative, an ISO standards effort collaborating with CAWG, is standardising how usage permissions and commercial terms travel with content. Together, C2PA, CAWG, and JPEG Trust form a layered trust stack, proving what happened, who did it, and under what rights.

This infrastructure enables critical use cases for the agentic web, such as:

– AI Disclosure Granularity: Moving beyond binary “AI or not AI” labels to capture the spectrum of AI involvement.
– Copyright Protection: Recording types and quantities of input from humans, agents, instruments, or other sources, to establish legal protection for mixed human-AI works, as fully AI-generated works cannot be copyrighted.
– Platform Identification: Indicating content boundaries and licensing restrictions while maintaining creator control over broader commercial use.

These capabilities can unlock automated royalty distribution, combat unauthorised training data use, and support new discovery mechanisms between creators and fans. Alongside these market-driven opportunities, regulatory pressure is simultaneously accelerating Content Credential adoption across industries.

The Drivers from Compliance to Opportunity

Steps towards mandating content labelling have already begun. California has proposed legislation that would fine platforms $5,000 a day for failing to label AI content, with implementation targeted for 2026 [8]. The EU is similarly considering disclosure requirements for AI-assisted and AI-generated content for 2026 [9]. These disclosure laws will catalyse C2PA adoption as platforms need the infrastructure to record content provenance and AI involvement to comply with regulations.

Some may see these laws as a regulatory burden, or worry that they will create surveillance infrastructure, forcing creators to expose more than they wish, but the technical reality is different. C2PA alone only records the tools used and when, allowing for total anonymity of the creator. CAWG equally gives complete control to the creators in what they disclose. The technical architecture enables privacy by letting creators choose which identity signals amplify their message or benefit their attribution goals. There’s no requirement to tie your entire identity to one piece of content.

To further increase creator flexibility, CAWG is now developing a new mechanism called ‘identity hooks’ as a way to delay attribution decisions until creators know which identity signals they need. When creators are authenticated in a phone or editing tool, that system can both sign via C2PA and attest the creator was logged in during the creation process. This establishes a stable anchor at the time of creation that creators can hook back into later when they need to attach a relevant persona or credential. As Andrew Dworschak, Co-founder of Yakoa, says, “[Identity hooks] bring flexibility so that a creator can have maximum optionality down the line when they realise they need [attribution] to support their content flow”.
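In spirit, an identity hook separates the creation-time anchor from the later attribution step. The sketch below is a conceptual illustration, not CAWG’s draft specification; every function and field name is invented for this example.

```python
import hashlib
import json

def create_anchor(content_bytes: bytes, session_attestation: str) -> dict:
    """At capture time, the signing tool records a stable, content-bound anchor
    attesting that an authenticated creator session produced the content."""
    anchor = {
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),
        "session": session_attestation,
    }
    anchor["anchor_id"] = hashlib.sha256(
        json.dumps(anchor, sort_keys=True).encode()).hexdigest()
    return anchor

def hook_credential(anchor: dict, credential: dict) -> dict:
    """Later, the creator hooks a chosen persona or credential onto the anchor,
    once they know which identity signal their content flow actually needs."""
    return {"anchor_id": anchor["anchor_id"], "credential": credential}

track = b"raw session audio"
anchor = create_anchor(track, "authenticated-editing-session")
# Months later, once the creator knows which persona matters:
binding = hook_credential(anchor, {"type": "ISNI", "value": "<creator's ISNI>"})
assert binding["anchor_id"] == anchor["anchor_id"]
```

The design choice being illustrated is deferral: the anchor is fixed at creation, while the attribution decision, and the credential it carries, can be made at any later point without weakening the link back to the original work.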

Andrew’s company builds digital rights protection technology for creators and he sees even broader opportunities from Content Credentials. For example, “allowing people to connect with each other in new ways, come to new agreements, and share revenue in ways that are appropriate to them.” Even today, Yakoa’s AI monitoring tool can help to identify where creators haven’t received proper attribution across platforms, shifting the conversation from reactive compliance to proactive rights-management infrastructure.

The Future of Creative Infrastructure

The infrastructure for content authenticity is already evolving. Regulations on content disclosure are effective from next year. Major technology providers and platforms have begun adopting the tools. The question isn’t whether this happens but who will shape how it develops.

CAWG’s open and credential-agnostic approach creates infrastructure that serves all creators regardless of size or association. The specifications continue to be written as new technologies emerge, and provenance data types continue to be developed. Altogether the ecosystem is enabling creative controllability while embracing AI’s collaborative potential.

For creative industries facing AI transformation, engaging with the working groups now means influencing the attribution systems that will eventually be as commonplace as nutrition labels.

A huge thank you to Eric Scouten, Scott Perry, Andrew Dworschak, Jeremy Uzan, and Erik Passoja for their time and insights in preparing this article.

Join the conversation:

– Participate in DIF’s Creator Assertions working group: help shape the solutions that allow content creators to express individual and organisational intent about their content.
– Participate in DIF’s Trusted AI Agents working group.
– Test the tools: experiment with identity assertions and provenance using Contentauthenticity.adobe.com.
– Join the Content Authenticity Initiative: help the growing cross-industry ecosystem that is restoring trust and transparency online.

Building AI Trust at Scale — Series
Previous in this series: Part 2 – Translating Promise into Value
Next in this series: Part 4 – Authorising Autonomous Agents at Scale
By DIF Ambassador Misha Deville

Endnotes
1. McGill, Justin. 2025. Brandwell. https://brandwell.ai/blog/midjourney-statistics/
2. McGill, Justin. 2025. Brandwell. https://brandwell.ai/blog/midjourney-statistics/
3. Singla, A., et al. 2025. The State of AI: How organizations are rewiring to capture value. QuantumBlack, AI by McKinsey.
4. Force, Eamonn. 2025. AI, bot farms and innocent indie victims: how music streaming became a hotbed of fraud and fakery. The Guardian.
5. C2PA. C2PA Specifications.
6. CAWG. CAWG Specifications.
7. Youngs, Ian. 2024. FKA Twigs uses AI to create deepfake of herself. BBC.
8. CalMatters. SB 942: California AI Transparency Act.
9. European Union. AI Act: Regulation (EU) 2024/1689.

Oasis Open

Call for Participation: Open Exposure Management Framework (OEMF) TC

A new OASIS technical committee is being formed. The Open Exposure Management Framework (OEMF) TC has been proposed by the members of OASIS listed in the charter below. The TC name, statement of purpose, scope, list of deliverables, audience, IPR mode, and language specified in this proposal will constitute the TC’s official charter. Submissions of technology for […] The post Call for Participation: Open Exposure Management Framework (OEMF) TC appeared first on OASIS Open.

New TC to establish an unbiased, community framework to unite and direct the efforts in preventing, assessing, and resolving exposures in organizational technology.

A new OASIS technical committee is being formed. The Open Exposure Management Framework (OEMF) TC has been proposed by the members of OASIS listed in the charter below.

The TC name, statement of purpose, scope, list of deliverables, audience, IPR mode, and language specified in this proposal will constitute the TC’s official charter. Submissions of technology for consideration by the TC, and the beginning of technical discussions, may occur no sooner than the TC’s first meeting.

The eligibility requirements for becoming a participant in the TC at the first meeting are:

You must be an employee or designee of an OASIS member organization or an individual member of OASIS, and You must join the Technical Committee, which members may do by clicking here.

To be considered a voting member at the first meeting:

You must join the Technical Committee at least 7 days prior to the first meeting (on or before October 23, 2025); and You must attend the first meeting of the TC, on October 31, 2025.

Participants also may join the TC at a later time. OASIS and the TC welcome all interested parties.

If your employer is already on the OASIS TC member roster, you may participate in the OEMF TC (or any of our TCs) at no additional cost. Find out how.

If your employer is not a member, we’re happy to help you join OASIS. Contact us to discuss your options for TC membership.

Please feel free to forward this announcement to any other appropriate lists. OASIS is an open standards organization; we encourage your participation.

CALL FOR PARTICIPATION

OASIS Open Exposure Management Framework (OEMF) TC

Section 1: TC Charter

1.a. TC Name
Open Exposure Management Framework (OEMF) TC

1.b. Statement of Purpose
The purpose of the Open Exposure Management Framework (OEMF) is to establish an unbiased, community framework to unite and direct the efforts in preventing, assessing, and resolving exposures in organizational technology.

The need for this framework emerged from a desire of cybersecurity professionals to have a thoughtful, purpose-driven set of parameters for managing exposure.  Some of the motivating forces behind creating the OEMF are: 

– An aspiration to accommodate security domains such as Vulnerability Management and Cloud Security in a more detailed way than existing cybersecurity frameworks currently do. 
– An opportunity to standardize and structure how technology exposures are defined, discovered, prioritized, and acted upon. 
– A drive to include and focus on critically important upstream activities that prevent technology exposures. 
– A desire to outline tactical guidance around the processes and technologies that intersect Exposure Management. 
– A present need for an independent, industry accepted scale for measuring Exposure Management maturity. 
– A present need to define best practices and terminology related to Exposure Management in a manner that is agnostic of specific vendor technologies.

Major Goals (see Section 5 for more detailed explanations and timelines): 
1. Propose a functional exposure management lifecycle. 
2. Offer practitioners a common set of capability requirements per lifecycle stage. 
3. Map capability requirements to prominent frameworks such as NIST, CIS, Gartner, etc. 
4. Offer the Cybersecurity industry an acceptable maturity scale for Exposure Management. 
5. Provide implementation parameters to achieve each maturity milestone. 
6. Address data inconsistencies between disparate exposure data sources. 
7. Map technology capabilities to the OEMF functional lifecycle.

1.c. Business Benefits
The primary business benefit of the OEMF is to provide organizations (both private and public) with a structured methodology to better avoid and correct the exploitability of their technology footprints. By following the methodology outlined in the OEMF, organizations would benefit through:

– More effectively avoiding the creation of exploitable technology configurations at scale. 
– Becoming more efficient in discovering, prioritizing, and resolving technology exposures. 
– Maximizing limited technology and human resources on the Exposure Management activities that most significantly reduce organizational susceptibility.
– Making better use of exposure data organizations may already have today. 
– Being enabled to make more educated and effective decisions in technology investments and personnel allocation related to Exposure Management programs.

1.d. Scope
The primary scope is to enable the Cybersecurity community with a series of best practices around Exposure Management. Following that, the project intends to provide a methodology for Cybersecurity professionals or partners to perform self-assessments of Exposure Management maturity, much like OWASP has done with the Software Assurance Maturity Model. Additionally, the project seeks to develop reference material that Cybersecurity professionals can leverage to tactically drive Exposure Management maturity within their respective organizations.

As such, the main scope of the OEMF is to provide framework documentation and supplemental educational materials such as videos, presentations, images, and templates regarding Exposure Management. 

The OEMF will not produce any software products or engage in any direct commerce with outside entities. The project assumes that the best practices put forth by the OEMF will organically drive an evolution in technology and human capabilities by those who consume the OEMF’s materials.

1.e. Deliverables
Below are the major milestones/deliverables the OEMF is working towards, with target dates for each.

1. Publish the first edition of the Open Exposure Management lifecycle. This lifecycle defines what best practice entails for preventing, assessing, and resolving technology exposure. (estimated December 2025) 

2. Publish a set of capability requirements (both process and technology capabilities) for each OEMF lifecycle stage. (estimated February 2026) 

3. Map the defined capability requirements to controls in prominent Cybersecurity frameworks including NIST CSF, CIS Critical Security Controls, and Gartner CTEM. (estimated February 2026) 

4. Publish an OEMF maturity scale that Cybersecurity professionals can use to self-assess organizational Exposure Management maturity. (estimated May 2026)

5. Provide implementation requirements to achieve each maturity milestone for each OEMF lifecycle stage. Once this is complete, stakeholders will be able to not only understand their maturity but to synthesize their own improvement plans. (estimated June 2026) 

6. Publish a guide to mapping data inconsistencies between Exposure Management data sources, specifically targeting issues with disparate severity scales across different data sources. (estimated November 2026)

1.f. IPR Mode
Non-Assertion Mode

1.g. Audience
The primary stakeholders of the OEMF are Cybersecurity personnel, most notably personas such as Chief Information Security Officers (CISOs), Directors of Security, as well as managers and leads responsible for Vulnerability Management, Application Security, Cloud Security, and Identity Security. Secondary stakeholders would include executive leadership, Risk & Compliance Personnel, and even customers/partners of an organization since a more effective means of reducing technology exposure, and reporting on outstanding exposure, benefits these secondary stakeholders greatly. 

Exposure Management is a domain of Cybersecurity that has fairly consistent relevance across all industries. However, larger enterprises and public entities, as well as organizations that design their own infrastructure and applications, would benefit even further from the OEMF, as those organizations have deeper, more complex Exposure Management considerations and more need for the secure design elements of the framework.

1.h. Language
The primary language of the OEMF TC is English.

Section 2: Additional Information

2.a. Identification of Similar Work
The OEMF seeks to be a supplemental framework that integrates with existing Cybersecurity frameworks and models to achieve two critical outcomes:
The OEMF seeks to be a supplemental framework that integrates with existing Cybersecurity frameworks and models to achieve two critical outcomes:

1. A unification and detailed direction of “find and fix” security domains such as Vulnerability Management, Application Security, Cloud Security, Software as a Service Security, and Identity Security. 

2. A bridging of best practices in secure design and Cybersecurity to give a consistent approach to preventing exposure in addition to assessing and resolving exposures that occur. 

The OEMF mainly intends to augment and link to existing frameworks and models. For secure design lifecycle phases, mapping will be provided to the tenets of the latest version of OWASP SAMM. For Cybersecurity lifecycle phases, an initial mapping will be provided for the latest version of the NIST CSF, CIS Critical Security Controls, and Gartner’s Continuous Threat Exposure Management framework. These mappings will be a guide that details how each lifecycle stage relates to a lifecycle stage in an existing framework, and which control domain each prescribed capability supports in those frameworks.

The intended outcome is that organizations can still use these prominent frameworks to direct their overall Cybersecurity and Operations programs, but when trying to assess and drive maturity in Exposure Management domains, organizations can “drill in” utilizing the OEMF. The output of an OEMF maturity evaluation can be used to easily update maturity against these existing frameworks due to the mappings provided by the project.

2.b. First TC Meeting
The first OEMF TC meeting is expected to take place on or around October 30, 2025, via Zoom.

2.c. Ongoing Meeting Schedule
Virtual meetings are expected to be held weekly through completion of the first deliverable, then likely transition to semi-monthly.

2.d. TC Proposers
Chris Peltz, Guidepoint Security
Bill Olson, Tenable 
Steve Carter, Nucleus 
Nathan Paquin, Guidepoint Security
Christopher Brown, Guidepoint Security
Gavin Millard, Tenable

2.e. Primary Representatives’ Support
I, Chris Peltz, as OASIS primary representative for Guidepoint Security, confirm our support for the OEMF TC and our participants listed above.
I, Bill Olson, as OASIS primary representative for Tenable, confirm our support for the OEMF TC and our participants listed above.

2.f. TC Convener
Chris Peltz, GuidePoint Security, chris.peltz@guidepointsecurity.com

2.g. Anticipated Contributions
The OEMF project is at its inception; there are no preexisting repositories or open source projects to donate.

2.h. FAQ Document
N/A

2.i. Work Product Titles and Acronyms
N/A

The post Call for Participation: Open Exposure Management Framework (OEMF) TC appeared first on OASIS Open.


EdgeSecure

Letter from Forough Ghahramani, Ed.D.

Dear EdgeDiscovery Community, As the pace of innovation accelerates and transformative technologies like AI and quantum computing reshape the research landscape, this Summer/Fall 2025 issue of EdgeDiscovery invites you to… The post Letter from Forough Ghahramani, Ed.D. appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Dear EdgeDiscovery Community,

As the pace of innovation accelerates and transformative technologies like AI and quantum computing reshape the research landscape, this Summer/Fall 2025 issue of EdgeDiscovery invites you to explore the ideas, people, and infrastructures advancing discovery and inclusive innovation across our ecosystem.

We begin with a powerful conversation featuring Dr. Dan Stanzione, Executive Director of the Texas Advanced Computing Center (TACC), whose leadership in open science supercomputing continues to shape national and global capabilities. From Frontera, the fastest university-based supercomputer, to the new NSF-supported Leadership Class Computing Facility (LCCF), Dr. Stanzione offers insights into the future of AI-enabled HPC, the promise of quantum, and the importance of training the next generation of technologists. His thought leadership underscores the critical intersection of infrastructure and impact in advancing science.

We are proud to feature Dr. Ilkay Altintas, Chief Data Science Officer at the San Diego Supercomputer Center and Founding Director of the Societal Computing and Innovation Lab (SCIL), who has built a career at the intersection of computing and societal impact. Her work spans environmental modeling, biomedical data, and advanced cyberinfrastructure, with a strong focus on accessibility, scalability, and responsible innovation. Through programs like WIFIRE, she has pioneered real-time hazard response tools that bridge research and emergency management, while also helping shape a data science workforce grounded in ethical and community-driven practice. At SCIL, she leads efforts to design user-centric, composable platforms that empower domain experts to harness AI and HPC without needing deep technical expertise. Dr. Altintas exemplifies how thoughtful collaboration and infrastructure innovation can drive equity, resilience, and discovery across disciplines.

We also feature a deep dive into the work of Dr. Frank Wuerthwein, Director of the San Diego Supercomputer Center, Executive Director of the Open Science Grid, and Principal Investigator of the National Research Platform (NRP). Dr. Wuerthwein shares his vision for building a scalable and inclusive infrastructure to support AI education and data-intensive science at institutions of all sizes, including community colleges and under-resourced campuses. He emphasizes that advancing education in AI requires not only technological infrastructure, but also social infrastructure: collaborative networks of educators, shared platforms, and aligned curriculum pathways. His efforts to democratize access to computing and integrate hands-on learning from high school to career reskilling reflect the kind of systems thinking needed to close opportunity gaps and scale innovation.

We highlight the contributions of Dr. Manish Parashar, Director of the Scientific Computing and Imaging (SCI) Institute and Inaugural Chief Artificial Intelligence Officer at the University of Utah, whose career has shaped national conversations around responsible AI, data sharing, research infrastructure, and policy. A mentor to many, including myself during his time at Rutgers, Dr. Parashar's vision and collaborative leadership have laid the foundation for initiatives such as the National Data Platform and the Virtual Data Collaboratory.

From national centers to regional catalysts, we spotlight Dr. Michael Johnson, President of the New Jersey Innovation Institute (NJII), whose work is redefining how higher education intersects with industry to translate academic innovation into real-world impact. Drawing on his entrepreneurial background and translational research experience, Dr. Johnson has positioned NJII as a nimble, mission-driven organization accelerating AI and EdTech innovation, expanding access for small and mid-size businesses, and addressing workforce development through pragmatic, cost-effective solutions.

Our feature on SHI International’s AI & Cyber Labs gives readers a glimpse inside one of New Jersey’s most exciting new facilities for enterprise AI adoption and experimentation. Through an interview with Lee Ziliak, SHI’s Field CTO, we explore their work with NVIDIA, aspirations for quantum engagement, and commitment to partnering with higher education institutions to accelerate responsible innovation.

These articles reflect the power of voices influencing national conversations. As regional research and education networks like Edge work to bridge gaps and build ecosystems, we’re reminded that partnerships, across disciplines, sectors, and geography, are essential to unlocking equitable access to advanced technologies.

As always, EdgeDiscovery is not just a publication; it is an evolving platform for dialogue, collaboration, and community-building. I hope this issue sparks new ideas and inspires deeper engagement as we continue to build a future grounded in innovation, access, and purpose.

With appreciation,

Forough Ghahramani, Ed.D.
Assistant Vice President for Research,
Innovation, and Sponsored Programs
Edge

The post Letter from Forough Ghahramani, Ed.D. appeared first on Edge, the Nation's Nonprofit Technology Consortium.


Leveraging and Managing AI in Education Today and into the Future

Leveraging and Managing AI in Education Today and into the Future A conversation with Forough Ghahramani and Florence Hudson - originally published on the Springer Nature Research Communities website on… The post Leveraging and Managing AI in Education Today and into the Future appeared first on Edge, the Nation's Nonprofit Technology Consortium.

A conversation with Forough Ghahramani and Florence Hudson - originally published on the Springer Nature Research Communities website on September 24, 2025.

What drew you each to such varied topics of work/study and how did you find yourself where you are today?

Forough Ghahramani: I’ve always followed my curiosity, and that’s taken me on a varied journey across biology, math, computer science, software engineering, academia, entrepreneurship, and now into AI and quantum. I started out passionate about science, biology and math especially, and added computer science during my graduate program, which opened the door to early work in high-performance computing. I still remember using punch cards on the IBM S/360, then watching computing evolve rapidly: from minicomputers during graduate school to 64-bit systems, open source computing environments, the internet, and search engines. My early career included operating system development for the proprietary Virtual Memory Management System (VMS) at Digital Equipment Corporation (DEC); I then moved on to Unix engineering, performance management and benchmarking, and migrating applications from 32-bit to 64-bit systems.

One of the most rewarding phases of my career was as a technical consultant, where I got to see how the systems I helped build were applied across industries including pharmaceuticals, biotech, healthcare, finance, manufacturing, and even steel manufacturing. Working as a systems architect on the Human Genome Project brought everything full circle. It tied my backgrounds in biology, mathematics, and computing into one meaningful direction. I became fascinated by the field of bioinformatics, and I fully reinvented myself in that area. I went from being a traditional software engineer to running my own biotechnology consulting company, which exposed me to the world of entrepreneurship and its unique challenges and rewards.

Over time, I came to view technology as much more than a tool, it became a vehicle for discovery, innovation, and transformation. That entrepreneurial spirit eventually led me to academia, where I found joy in teaching, mentoring, and launching new programs that bridge industry and education. I’ve always been driven by a love of learning and innovation, and even when career shifts weren’t intentional, sometimes guided by industry shifts, life phases or family needs, they added depth and diversity to my skill set and opened my interest to new areas.

I went back to school a couple of times: once earlier in my career to earn my MBA, and then later in life I earned my doctorate, something that brought together my leadership work in higher education and my lifelong commitment to continuous growth. I first learned about AI in the 1980s and have worked with big data and HPC for over 30 years, but what fascinates me today is seeing AI enter the mainstream and imagining its future alongside quantum technologies. My experience in both industry and higher education, always on the leading edge, has allowed me to live four very different careers, and I’m still energized by what lies ahead.

In my current role, as Vice President for Research & Innovation at NJ Edge, I work with higher education leaders as they develop strategies to support research through advanced and emerging technologies, including AI, high performance computing, and quantum.

While much has changed in my career journey, what has stayed constant includes a problem-solving mindset, a hunger to grow, and a strong sense of what matters to me at any given time. Education has played a central role in shaping opportunities throughout my life, and I’m a firm believer in giving back. As an engineer and advocate, I’ve worked to encourage young people, especially girls, to pursue STEM fields, often speaking to students from K–12 through college to help spark interest and confidence in science and technology. Another aspect of giving back is serving on the advisory boards of two of my alma maters, the Penn State University College of Science and the University of Pennsylvania. Involvement in professional organizations such as IEEE and the Society of Women Engineers has also provided opportunities for community engagement.

Florence Hudson: I always loved math and science from a young age. When I was a young girl my brother would wake me up to watch NASA spaceflight missions on TV which I thought were so cool. One day I asked “how do they do that?” That’s when I began thinking like an engineer.

As an engineer and a scientist, I have insatiable curiosity, plus I love to create things and fix things whether for business, technology, research, government, policy, humans, or society. Basically, I follow my curiosity, identify challenges to address, and apply my thinking and all types of technology to help solve problems. It’s a never-ending opportunity as the problems change as do the technologies and solutions available to address them. I believe our opportunity while we are on this earth is to identify the unique gifts we each have and use them for good everyday. That is what I strive to do across all domains that interest me, from data science to all types of engineering, sciences, knowledge networking, cybersecurity, standards, societal challenges, education, outreach and more.

As my educational and professional careers unfolded, I worked for NASA and the Grumman Aerospace Corporation while earning my aerospace engineering degree. I loved aerospace engineering, but the lead time from research to launch was decades and funding was declining. Computing and information technology were growing, and I expected that computers would someday run the world, so I went into computing. My first job in computing was at Hewlett Packard. Then I enjoyed a long career at IBM, where I was able to apply technology to all sorts of societal, business and technical challenges, moving from an initial sales role to eventually becoming an IBM Vice President of Strategy and Marketing and Chief Technology Officer.

After retiring from IBM in 2015, I became a Senior Vice President and Chief Innovation Officer at Internet2 in the research and education world, and then worked for the NSF Cybersecurity Center of Excellence at Indiana University. In 2020 Columbia University asked me to lead the Northeast Big Data Innovation Hub after I had been on the advisory board since 2015 working on their overall strategy and cybersecurity initiatives, so it was a natural fit to become Executive Director. I had also started my own consulting firm (FDHint, LLC) as CIOs were asking me to consult with them. I have also served on over 18 corporate, advisory and steering boards - from NASDAQ-listed companies to academic and non-profit entities.

Cybersecurity has been a passion of mine since my early days as an aerospace engineer working on defense projects. At IBM I worked on security initiatives in servers and solutions, and I continued that focus working for the NSF Cybersecurity Center of Excellence at Indiana University. This led to my leading the development of the IEEE TIPPSS standard to improve Trust, Identity, Privacy, Protection, Safety and Security for clinical IoT (Internet of Things), which won the IEEE Emerging Technology Award in 2024. Springer has published two of my books on TIPPSS. I am currently Vice Chair of the IEEE Engineering in Medicine and Biology Society Standards Committee, and I lead a TIPPSS roadmap task group which has spawned a new IEEE standard working group on AI-based coaching for healthcare with TIPPSS. TIPPSS is being applied in other domain areas as well, including large experimental physics control systems, energy grids and distributed energy resources, and is envisioned to apply to all cyber-physical systems.

In what ways do you think your own educational/academic/career path might have been different if you started in today’s climate?

Forough Ghahramani: If I were starting my academic and professional journey today, I think it would have looked quite different, maybe not in direction, but in pace, access, and mindset. When I was starting out, computing was a specialized, niche field. Physical access to machines, time on shared systems, and a lot of patience were all necessary. Today, a high school student can access cloud-based computing resources, learn to code from YouTube, and contribute to open-source projects from their bedroom. That kind of accessibility changes everything. With AI, cloud computing, and real-time collaboration platforms now core to both education and work, the barriers to accessing knowledge and innovating early have dramatically lowered.

With today’s startup opportunities, accelerators, and online communities, I probably would have embraced entrepreneurship sooner. I also imagine I would have engaged with more interdisciplinary learning earlier on, because today’s educational environment really encourages learning across domains. AI, data science, and quantum computing would have pulled me in even faster given my background and propensity, but I would have had to be more intentional about focus, since today’s information overload can be overwhelming.

I think my motivation and values would be the same. I’ve always been driven by curiosity and the desire to connect ideas across fields. What has changed is that today’s climate rewards that type of thinking more openly, and it provides more tools to act on it faster.

Florence Hudson: I think if I were to start my educational and professional career today I might have stayed in aerospace engineering longer as there are many more job opportunities with government and commercial space organizations, and faster transition of the research to practice. When I was an Aerospace and Mechanical Engineer at Princeton University and was working on future missions around Jupiter during a NASA summer internship, they said my summer internship project would take 18 to 20 years to come to fruition. That’s a long time! That’s when I decided to go into Information Technology (IT). Now there is a much faster path from research to execution in aerospace, and many more jobs, thereby broadening and accelerating opportunity and impact.

Being involved in both technology and education, do you see more risk with the technology itself (misinformation, bugs, security) or how it is applied in the educational landscape (with complicated policies, uneven funding, inequalities)? Or a combination?

There is risk in both AI technology itself as well as how it is used in the educational landscape.

To think more broadly, we must consider that the educational landscape of AI includes everyday use and education for all citizens - not just educational institutions. Openly available AI-enabled systems, from Large Language Models (LLMs) like ChatGPT to everyday devices using AI to make suggestions that may be incorrect, are affecting the education of our citizens, students, teachers and professionals. If an educator, professional or parent is provided incorrect information and then teaches others or takes action based on it, AI's incorrect recommendations can have a broad negative impact. We must aspire to limit that negative impact.

There is also a risk of users sharing information in AI tools that is meant to be kept private, whether those users are private citizens or professionals in industry or government. AI tools may fold the information in users' questions into the corpus of content used to answer questions for other users, thereby risking the privacy and security of shared information. This risk applies to all humans and institutions asking questions of AI tools, as their questions provide context and content that can be used by the AI tool more broadly.

In educational systems and institutions, AI has the risk of providing incorrect information so students and teachers may be learning things incorrectly, which will proliferate to others they speak with or teach. AI is creating a false sense of comfort that it knows the right answer, without people questioning or vetting it. It makes it easy for people to stop thinking. Many people want to let the AI think “for” them, and many people do not bother to check if it is right or wrong. This is a real danger.

Technology, by itself, can be flawed, but the risks can be managed with good design, robust testing, responsible development and ongoing management. An important concern is when powerful technologies are layered on outdated systems, infrastructures, or unclear policies.

While we must continue to improve the technology itself, we also need to focus on the human, structural, and policy dimensions that determine whether technology helps or harms. If AI is deployed without thoughtful design, policy, and educator involvement, it can do more harm than good. The challenge isn’t just what AI can do, it is also who gets to use it and for what purpose.

Like any technology, there will be bugs and problems, but it’s when we abuse the power of AI that the risk to humans and institutions increases.

What, briefly, is the big picture landscape of AI and education, including key strengths and risks?

AI is being used in education already, by students, teachers, and administrators. Like any tool, it can be used for good or for bad.

AI is transforming education at every level, from K–12 classrooms to higher education and workforce training, by introducing new possibilities for personalization, real-time support, availability and scalability across the broad ecosystem of educational systems and institutions. Key strengths include the ability for AI to deliver adaptive learning experiences tailored to each student’s pace and style, automate time-consuming tasks like grading or feedback, and reveal data-driven insights that help educators intervene earlier and more effectively in student learning journeys. AI can provide a quick synopsis for students and teachers to be able to quickly ingest content, and even translate content across languages, generate visualizations to support complex thinking, and serve as a tutor, coach, or creative collaborator. It can enable teachers and administrators to analyze student and school data and metadata to identify patterns, anomalies and opportunities to make better decisions and improve processes to better enable student success.

But alongside these strengths are real risks. There are real concerns about authorship, academic integrity, privacy, and surveillance, especially when student data is collected without transparency or used to make high-impact decisions. The ease of generating text or code with AI raises philosophical and practical questions about what it means to learn, think critically, or create original work in an AI-augmented world. There's also the risk of over-reliance: students and educators may become dependent on AI to the point that foundational skills erode or motivation diminishes. It also may enable students, teachers and administrators to disconnect from the content and make less informed or human-centered decisions.

Striking the right balance means centering human agency and pedagogy in the design and deployment of AI tools. AI should serve as a support mechanism, not a substitute, for the relational, reflective, and exploratory aspects of education. This requires thoughtful policies, transparent use guidelines, educator training, and practical design that anticipates and avoids unintended consequences.

In what ways can AI be used to enhance/encourage learning rather than give students a way around it?

AI can be a powerful cognitive companion when integrated thoughtfully into the learning process. Rather than serving as a shortcut to answers, it can enhance learning by helping students form better questions, explore multiple perspectives, visualize abstract or complex ideas, and engage in iterative practice with immediate, personalized feedback. For example, intelligent tutoring systems can walk students through problem-solving steps, while AI writing tools can offer style and grammar feedback that encourages revision rather than doing the writing for them.

The real shift lies in moving away from a transactional learning mindset, where students are focused on getting the answer as efficiently or quickly as possible, toward a collaborative learning mindset, where AI acts as a coach, partner, or creative assistant in the learning process. In this context, students are not passive recipients of knowledge but active participants in the construction of their understanding. AI tools can model Socratic questioning, recommend readings based on prior gaps, or simulate real-world scenarios for application of skills.

When used this way, AI doesn’t replace learning, it scaffolds it. It gives learners room to explore, fail safely, reflect, and try again. That’s not just about keeping students honest, it’s about keeping them engaged, curious, and confident in their capacity to learn and grow.

How has AI impacted students’ attitudes towards education (from K–12 to higher ed)? Do they feel it’s less relevant? Or are they excited because it’s a tool they harness?

Students’ reactions to AI in education are mixed. Some see it as a big benefit in reducing their effort, thereby diminishing the perceived value of their own effort (“Why write when AI can do it?”), while others see it as a superpower that enhances their creativity and efficiency. There is some skepticism around the use of AI by educators in the classroom. Much depends on how schools and educators frame AI, not as a crutch, but as a catalyst for inquiry, reflection, and application.

What areas outside of paper writing are changing and in what ways? 

AI is reshaping how students approach nearly every part of academic life. Lecture transcription and summarization tools (e.g., Otter.ai), AI-powered flashcard generators, and group collaboration platforms with embedded AI assistants are streamlining notetaking, study sessions, and project work. The learning ecosystem is becoming more modular, on-demand, and scaffolded by intelligent systems.

AI is changing how educators approach their roles as well. Some educators are requiring in-person test-taking for students with hand-written answers in the classroom, to avoid the use of AI and ensure they know what the students are actually learning and understanding. The use of AI can limit critical thinking, which is a risk to society, academia and science. Managing the use of AI to ensure real learning may be an ongoing challenge into the future.

How has AI changed how students attend lectures and take notes, study, do group work, etc.? 

AI is rapidly reshaping how students engage with learning, from the way they attend lectures to how they study and collaborate. Tools like Otter.ai and Notion AI allow students to focus on understanding rather than taking frantic notes, offering real-time transcription, summarization, and translation to support diverse learners. AI-enhanced note-taking apps can organize content, generate highlights, and even answer follow-up questions, turning notes into interactive study companions. When it comes to studying, platforms like Khanmigo and Quizlet deliver personalized learning experiences by creating adaptive quizzes, tutoring simulations, and targeted study plans based on students' evolving needs.

Group work has also become more efficient with the help of AI-powered tools that support brainstorming, project planning, and communication, especially in remote or multilingual settings. Perhaps the most significant shift is in mindset: with AI handling many of the routine academic tasks, students are free to focus on deeper learning, critical thinking, and strategic problem-solving. Ensuring consistent and broad availability of AI tools, training, and infrastructure is essential to enable these advancements to enhance learning for all students.

How can AI be implemented without widening the digital divide between well-resourced and under-resourced schools?

To implement AI in education without widening the digital divide, we need to treat broad availability as a design requirement, not an afterthought. AI tools need to be available to the broad community of schools and learners whether they have ample or limited resources related to high-speed internet, infrastructure, and trained staff, or we risk creating uneven opportunities for growth across the broad student population. A set of suggested actions is included below.

- Prioritize broad availability and low-bandwidth tools - Develop and adopt AI tools that work offline or with minimal internet connectivity. Many students still rely on shared devices or limited data plans, so tools must be optimized for use on mobile devices, with offline functionality, and in resource-constrained environments. Open-source platforms and lightweight AI models can play a critical role here.
- Invest in educator training across all settings - Professional development opportunities must be extended to educators across the broad landscape of schools, well-resourced as well as under-resourced, so they can all have the opportunity to understand, evaluate, and effectively use AI. It’s not just about the tools, it’s about empowering educators to integrate them meaningfully and thoughtfully into their classrooms.
- Embed broad AI enablement in policy and funding - Policymakers and funders could tie grants and procurement to technology goals across a wide array of schools and communities to incentivize AI use and adoption. For example, federal and state programs could subsidize AI deployments or provide incentives for companies to co-design tools in a range of communities.
- Promote Public-Private Partnerships (PPPs) in and across communities - AI adoption should be accompanied by partnerships that bring together schools, community organizations, libraries, universities, and industry. These partnerships can support infrastructure upgrades, shared use of cloud resources, or mentorship programs that extend beyond the school walls.
- Focus on student-centered AI - Instead of deploying AI only for administrative efficiency (e.g., grading automation or test prep), educational institutions and funders could invest in tools that support learner growth, curiosity, and agency, tools that work just as well for a student in a rural district as for one in a top-performing urban school.

In summary, if we approach AI as a tool for both effectiveness and efficiency, and ensure community voices are part of the process from the beginning, it can help close, not widen, the digital divide.

This Nature article discusses using a document’s version history to deter AI usage in writing. Have you heard of other techniques or ideas involving technology? 

The method proposed in the Nature article reviews incremental edits over time, which can reveal whether a document was developed iteratively or pasted in as a fully polished piece, a potential flag for large language model use. Version history is just one part of a growing set of tools; other techniques include technological, pedagogical, and procedural approaches.
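
As an illustration of the version-history idea, the sketch below compares the first saved revision of a document with its final text: a submission whose first revision already matches the final draft grew suspiciously little over time. This is a hypothetical heuristic with made-up function names, useful only as a flag for human review, never as proof of AI use.

```python
import difflib

def paste_in_one_go_score(revisions):
    """Heuristic: how much of the final text was already present in the
    very first saved revision (low = grew gradually, 1.0 = appeared
    fully formed). A high score is a flag for review, not evidence."""
    if len(revisions) < 2:
        return 1.0  # no incremental history at all
    first, final = revisions[0], revisions[-1]
    # ratio() approximates the fraction of characters the two versions share
    return difflib.SequenceMatcher(None, first, final).ratio()

# A document built up over several edits scores low...
gradual = ["Intro.",
           "Intro. First draft of the argument.",
           "Intro. First draft of the argument. Conclusion."]
# ...while one pasted in fully polished scores high.
pasted = ["Intro. First draft of the argument. Conclusion.",
          "Intro. First draft of the argument. Conclusion."]
```

In practice such a score would be combined with edit timestamps and keystroke-level metadata where available, since a student who drafts offline and pastes in their own finished essay would also score high.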

Technological approaches use software and systems to detect or deter AI-generated content by analyzing how text is created or submitted. Examples include Turnitin's AI detection features, which flag plagiarism and likely AI use. Another example is watermarking, an approach OpenAI has explored, in which a subtle statistical signature is embedded in generated text.
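
To make the watermarking idea concrete, here is a toy sketch in the spirit of published "green-list" watermarking research: the generator biases each next token toward a pseudorandom subset of the vocabulary derived from the previous token, and a detector recomputes those subsets and counts how often they were hit. This is an illustrative toy under our own assumptions, not OpenAI's or any vendor's actual scheme; all names are made up.

```python
import random
import zlib

def green_list(prev_token, vocab, fraction=0.5):
    # Deterministically derive a "green" subset of the vocabulary from
    # the previous token, using a stable hash as the PRNG seed.
    rng = random.Random(zlib.crc32(prev_token.encode()))
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermarked_sample(vocab, length, seed=0):
    # A watermarking generator prefers green tokens at every step.
    rng = random.Random(seed)
    tokens = [rng.choice(vocab)]
    while len(tokens) < length:
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab))))
    return tokens

def green_fraction(tokens, vocab):
    # The detector recomputes each green list and counts hits;
    # unwatermarked text should land near the baseline (0.5 here).
    hits = sum(cur in green_list(prev, vocab)
               for prev, cur in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

vocab = [f"w{i}" for i in range(50)]
marked = watermarked_sample(vocab, 200)          # green fraction near 1.0
rng = random.Random(1)
plain = [rng.choice(vocab) for _ in range(200)]  # green fraction near 0.5
```

A real scheme biases sampling probabilities rather than choosing only green tokens (so fluency is preserved), and seeds the hash with a secret key so that only the provider can run the detector.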

In the pedagogical approach, educators redesign assignments and assessments to emphasize critical thinking, originality, and personal connection, which are harder for AI to simulate. Students are taught how to use AI responsibly as a learning enhancer. Examples include Otter.ai for lecture summarization and study support, and custom AI Reflection Assignments such as comparing ChatGPT outputs with human-written drafts.

Procedural approaches include institutional or classroom policies that govern when and how AI can be used, often relying on transparency, documentation, and updated honor codes. Canvas LMS with audit trail and version control features is one example.

While advancements are being made in detection tools, none are foolproof. False positives can create ethical dilemmas, especially when punishments are imposed without clear evidence. Institutions will need clear, transparent, and fair AI use policies, combined with student education and faculty development.

As AI becomes widely adopted across industries, education will need to ultimately shift from a suspicious stance to one of guided integration. Maintaining integrity may involve detection and deterrence, however approaches will also need to include trust-building and authentic assessment.

Can you share an example of a study or project where AI significantly improved learning outcomes?

The University of Michigan's “Maizey”, a customized generative AI tool, is trained on specific course materials to provide personalized assistance to students. Positive results in student performance and engagement have been reported, along with increased efficiency for both instructors and students. For example, in an Operations Management course at the University of Michigan Ross School of Business, the tool saved instructors 5 to 12 hours of answering questions each week, while giving students the ability to ask questions beyond the classroom or a professor’s office hours. Self-reported surveys showed improvements in assignment and quiz scores. This is a small but significant step in scaling personalized support.

Should English and coding still be taught in the same format? For instance, how would one teach a CS student the value/quality of code when it’s generated by AI? Will the field/study be more about prompts rather than writing code?

While English and coding will continue to be foundational, how they are taught needs to evolve.

The shift for English instruction may involve multiple facets. For instance, we envision a move toward developing skills in discovering available information with AI tools and vetting it for accuracy. Beyond leveraging available information, greater focus on creating and producing new information, making informed judgments as users of information, using AI for writing with critical oversight, and writing ethically will be important. With AI bringing basic information to our fingertips, an increased focus on creative thinking and creative information development, analysis, data storytelling and data visualization will be valuable.

For coding, the emphasis will need to shift from syntax mastery to problem-solving, critical thinking, and the ability to adapt and improve AI-generated code. The shift will be from writing code from scratch to evaluating, refining, and architecting systems with the assistance of AI. While prompts will be important, training will need to emphasize how to critically assess and improve code rather than simply generate it.

With the tech industry leveraging AI to reduce the cost of employing developers, does this impact young people's interest in studying CS? 

Despite automation and computing advances, and perhaps fueled by them, Computer Science (CS) remains a dynamic field. From the early days when the focus in CS classes was writing a compiler, to the evolving focus on AI and now Quantum Computing, computer science grows with the evolution of innovative technologies and their applications.

Regarding software development, with the advent of automation leveraging AI, some students may gravitate toward areas where they feel human agency remains central, including AI ethics, security, data science, human-computer interaction, data storytelling and data visualization. Some may shift from coding to prompt engineering, but the underlying logic, structure, and systems thinking are still core competencies.

Will Google and Stackover flow in their current forms become irrelevant? 

Google and Stack Overflow may not vanish, but they will evolve. AI systems trained on forums like Stack Overflow already offer contextualized responses. However, the social and pedagogical value of such platforms, seeing multiple solutions, peer validation, and community norms, remains important. The future may see integration rather than obsolescence.

How do you believe those on the technical side of AI can fight against the threats to education?

Technologists can choose to embrace rather than fight AI, as it is here and will be here for a while. They can choose to embed ethical guardrails into AI tools and their use, advocate for transparent systems, and co-design with educators. They can support open infrastructure and prioritize broad usability. Perhaps most importantly, they must acknowledge that technological literacy is also civic literacy in the AI age.

Perhaps the real opportunity is to see AI as a tool to help critical thinking and creativity grow, using AI tools to provide a baseline in thinking, with humans using that as a springboard for more creative and imaginative thinking and innovation.

How do you envision the educational and academic landscape in 5 years, 10 years, 20 years?

In the near term, AI will likely be woven seamlessly into the day-to-day fabric of education. AI-powered tools will assist with personalized learning pathways, real-time feedback, and multilingual content delivery. Adaptive platforms will support differentiated instruction, helping students master concepts at their own pace while offering educators rich insights into individual progress. Faculty and students will routinely use AI for brainstorming, tutoring, lab simulations, and writing assistance. Microcredentials and skills-based learning, especially in areas like AI literacy, ethics, and data fluency, will grow rapidly, both inside and alongside traditional degree programs. Efforts in leveraging real insights to improve and further the science of AI will grow, and the application of AI in science, engineering and other disciplines will increase.

Trust in and use of AI in education will likely have to evolve in order to harness its value and mitigate the risks it introduces. As mentioned above, some educators are requiring in-person test taking with hand-written answers in the classroom to confirm what students are actually learning and understanding, rather than letting AI answer the tests. The risk AI poses to critical thinking remains, and managing its use to ensure real learning may be an ongoing challenge into the future.

In 10 years: The classroom may be less bound by physical walls or static schedules. We could see AI agents co-teaching with human instructors managing formative assessments, generating tailored lesson variations, and supporting students in multiple languages and learning styles. Augmented Reality (AR) and Virtual Reality (VR) will likely be commonplace in STEM labs, medical training, and arts education, offering fully immersive simulations and collaborative experiences. Interdisciplinary programs that combine computing, humanities, ethics, and policy will become the norm, responding to the needs of an AI-shaped world. Institutions may start awarding modular degrees that reflect personalized learning trajectories, not just traditional majors.

In 20 years: The boundary between formal and informal education may blur almost completely. AI-powered tutors will likely be embedded in the tools and environments students use daily, including their personal digital devices such as smartphones, wearables, home assistants, or AR glasses. Learning may happen anywhere, anytime, guided by intelligent agents that adapt not just to what learners know, but how they feel, what motivates them, and where they struggle. Credentials may shift from degrees tied to credit hours to skills portfolios based on demonstrated mastery, verified through performance in real-world simulations or digital apprenticeships. Lifelong learning will no longer be optional. It will be dynamically integrated into professional life through just-in-time learning pathways driven by AI.

Throughout this transformation, human educators will remain essential. Their roles may evolve from content deliverers to mentors, curators, and ethical stewards of technology, but their presence will be more critical than ever in guiding values, fostering community, and ensuring that learning remains deeply human, not just algorithmic.

How can research publishers help in AI and education?

Broadly, research publishers can help in AI and education by inspiring and publishing all sides of the AI story - the good, the bad, and the ugly - like Springer did with this invited blog. Allow everyone to learn from others through your publications.

Research publishers also have a role in ensuring that AI is used responsibly in scholarly communication, including setting norms around disclosure of AI usage, enabling reproducibility through shared datasets and code, and fostering interdisciplinary research that explores AI's impact on pedagogy, as well as on all people, institutions and systems using AI or who may be impacted by the use of AI.

One question above was written by AI. Can you guess which one?

We are not sure, however, this seems like a fitting end. AI is now both the tool and the topic, the assistant and the questioner. And perhaps that’s the most important takeaway: we are all co-authors in this unfolding story.

A guess is the “Google and Stackover flow” question based on the fact that it should be Stack Overflow.

Florence Hudson is Executive Director of the Northeast Big Data Innovation Hub at Columbia University and Founder & CEO of FDHint, LLC, a global advanced technology consulting firm. A former IBM Vice President and Chief Technology Officer, Internet2 Senior Vice President & Chief Innovation Officer, Special Advisor for the NSF Cybersecurity Center of Excellence, and aerospace engineer at the NASA Jet Propulsion Lab and Grumman Aerospace Corporation, she is an Editor and Author for Springer, Elsevier, Wiley, IEEE, and other publications. She leads the development of global IEEE/UL standards to increase Trust, Identity, Privacy, Protection, Safety and Security (TIPPSS) for connected healthcare data and devices and other cyber-physical systems, and is Vice Chair of the IEEE Engineering in Medicine & Biology Society Standards Committee. She earned her Mechanical and Aerospace Engineering degree from Princeton University, and executive education certificates from Harvard Business School and Columbia University.

Forough Ghahramani is Vice President for Research and Innovation for New Jersey Edge (Edge). As chief advocate for research and discovery, Forough serves as an advisor and counsel to senior higher education leaders, helping translate their vision for supporting research collaborations and innovation into actionable advanced cyberinfrastructure (CI) strategy leveraging regional and national advanced technology resources. Forough was previously at Rutgers University, providing executive management for the Rutgers Discovery Informatics Institute (RDI2) under Dr. Manish Parashar (Director). Her experience in higher education also includes serving as associate dean and department chair. Prior to joining academia, she held senior-level engineering and management positions at Digital Equipment Corporation and Hewlett Packard (HP), and consulted for Fortune 500 companies in high performance computing environments. Forough is a Senior Member of IEEE, holds an appointment to the NSF Engineering Research Visioning Alliance (ERVA) Standing Council, is a Vice President for NJBDA's Research Collaboration committee, and serves on the Northeast Big Data Innovation Hub and the Ecosystem for Research Networking (ERN) Steering Committees. She has a doctorate in Higher Education Management from the University of Pennsylvania, an MBA in Marketing from DePaul University, an MS in Computer Science from Villanova University, and a BS in Mathematics with a minor in Biology from Pennsylvania State University. Forough is consulted at the state, national, and international levels on STEM workforce development strategies.
She is currently a Principal Investigator on two NSF-funded projects, “EAGER: Empowering the AI Research Community through Facilitation, Access, and Collaboration” and “CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE)”, and a co-PI on the NSF ADVANCE Partnership “New Jersey Equity in Commercialization Collective.” She previously served as co-PI on the NSF CC* OAC Planning project “Advanced Cyberinfrastructure for Teaching and Research at Rowan University and the Southern New Jersey Region” and the NSF CCRI Planning project “A Community Research Infrastructure for Integrated AI-Enabled Malware and Network Data Analysis”.

The original article can be found here »

The post Leveraging and Managing AI in Education Today and into the Future appeared first on Edge, the Nation's Nonprofit Technology Consortium.


FIDO Alliance

IDAC Podcast: Going Passkey Phishing with Nishant Kaushik, FIDO Alliance

In this episode of the Identity at the Center podcast, Jeff and Jim discuss various aspects of identity and access management (IAM) policies and the importance of having a solid foundation. […]

In this episode of the Identity at the Center podcast, Jeff and Jim discuss various aspects of identity and access management (IAM) policies and the importance of having a solid foundation. They emphasize the need for automation, controls, and how IAM policies should be created without technology limitations in mind. The discussion also covers the implementation challenges and the evolving concept of identity verification. Jeff, Jim, and their guest, Nishant Kaushik, the new CTO at the FIDO Alliance, also delve into the issues surrounding the adoption of passkeys, highlighted by Rusty Deaton’s IDPro article, and address some common concerns about their security. Nishant offers insights into ongoing work at FIDO Alliance, the potential of digital identity, and the importance of community in the identity sector. The episode concludes with mentions of upcoming conferences and an homage to the late identity expert, Andrew Nash.

Wednesday, 01. October 2025

FIDO Alliance

Ideem: Q/A with Andrew Shikiar, CEO of FIDO

We had the pleasure of sitting down with Andrew Shikiar, CEO of the FIDO Alliance, known for its creation and evangelism of the passkey, the authentication method we’ve all come to know […]

We had the pleasure of sitting down with Andrew Shikiar, CEO of the FIDO Alliance, known for its creation and evangelism of the passkey, the authentication method we’ve all come to know and love. The team here at Ideem is, of course, a huge fan of the passkey and what it has done to revolutionize how people authenticate themselves, and we were honored that Andrew took the time to answer all of our questions about passkeys and banking. That Q&A is below. If you’re interested in learning more about how Ideem is making passkeys bank-grade, you can learn more at our site.


EdgeSecure

Shaping the Future of Computational Science

Shaping the Future of Computational Science: A Conversation with Dan Stanzione on HPC, AI, and National Research Infrastructure In the rapidly evolving landscape of high performance computing, artificial intelligence, and… The post Shaping the Future of Computational Science appeared first on Edge, the Nation's Nonprofit Technology Consortium.
Shaping the Future of Computational Science: A Conversation with Dan Stanzione on HPC, AI, and National Research Infrastructure

In the rapidly evolving landscape of high performance computing, artificial intelligence, and quantum technologies, few leaders have shaped the trajectory of open science infrastructure as profoundly as Dr. Dan Stanzione. As Executive Director of the Texas Advanced Computing Center (TACC) and Associate Vice President for Research at The University of Texas at Austin, Stanzione has built a career on a foundational principle that has guided supercomputing development for decades.

Stanzione's approach to building large-scale computing infrastructure stems from a hard-learned lesson about putting users first. "Ultimately, we're building large-scale computing to do science," he explains. "It's less about what we, as computing people, might think is cool as the latest and most interesting technology, and more about what is useful for delivering science." This philosophy was crystallized when TACC won a major system competition worth $120 million. After his presentation, another large center director approached him with what he called a backhanded compliment: "Man, I wish I had the guts to be as boring as you were on this design." Stanzione's response was characteristically pragmatic: "We didn't put in any of the newfangled, crazy stuff because it's all more expensive and it doesn't work as well."

This user-centric approach has driven TACC to hire computational scientists rather than traditional IT professionals to lead systems teams. The strategy ensures that infrastructure decisions are driven by what computational scientists actually need to accomplish, as opposed to technological novelty alone.

Building National Research Ecosystems
TACC's influence extends far beyond its physical systems to encompass a national research ecosystem that supports over 3,000 projects annually across 450+ institutions. From research universities to community colleges, TACC provides computational resources that enable both cutting-edge research and workforce development. The center operates on a hub-and-spoke model that recognizes the importance of regional networks and local expertise. "We can scale up big computers to run tens of thousands of users, but it's awfully hard for me to scale the person down the hall from you who you can go ask about stuff," Stanzione explains. This ecosystem approach ensures that computational resources are accessible not just technically, but through human networks of expertise and support.

Regional research and education networks play a crucial role in this ecosystem, providing both the physical infrastructure for data transfer and the human networks necessary for knowledge dissemination. As scientific workflows increasingly rely on remote resources and collaboration, these networks become "basically air"—essential but invisible infrastructure that enables modern research.

“Developing a skilled data science workforce starts with creating learning environments that are inclusive, interdisciplinary, and connected to real-world challenges. The best way to build data science skills is by using them in practice and we must empower people not only to work with data, but to use it ethically and effectively in service of their communities. By co-developing educational resources and tools with the communities, students, researchers, and practitioners don’t just learn from the system, they help shape it. When we co-develop training materials with individuals who represent the needs of their own environments, the solutions and the learning are directly relevant.”

Dan Stanzione, Ph.D. Executive Director of the Texas Advanced Computing Center (TACC) and Associate Vice President for Research at the University of Texas at Austin

Navigating the AI Revolution in Scientific Computing
The explosion of artificial intelligence has fundamentally transformed the computational landscape, creating both unprecedented opportunities and complex challenges for research infrastructure. Stanzione describes the shift from gradual adoption to an "overwhelming avalanche" following ChatGPT's release, forcing centers like TACC to rapidly adapt their systems and services.

"Five, six years ago maybe 40% of our users could use GPUs. Now maybe it's 65%," Stanzione notes. This dramatic shift informed the design of Vista, TACC's AI-centric system built on NVIDIA Grace Hopper architecture, which serves as a bridge to prepare users for the next generation of leadership-class computing. However, the AI revolution presents a deeper challenge for the entire scientific computing ecosystem. "How are we going to keep traditional scientific computing going in a world where all the chips are built for AI?" Stanzione ponders. The answer lies in adapting to commodity AI components, much like the transition from custom supercomputing silicon to commodity microprocessors that began in the 1990s with the Beowulf project.

The fundamental difference between AI and traditional scientific computing lies in precision requirements. While scientific simulations demand 64-bit floating-point accuracy, AI algorithms can operate effectively with 8-bit or even 4-bit precision. This creates both challenges and opportunities. For example, Stanzione explains, "If these things are optimized for 8-bit integers, how do we make it look like we're doing 64-bit floating point?" The solution requires clever algorithms and hardware adaptations that could ultimately deliver superior performance even for traditional scientific workloads.
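The arithmetic behind that kind of emulation can be sketched in a few lines. The toy Python below (an illustration of the general idea, not TACC's actual method) quantizes two vectors to 8-bit integers and computes their dot product with integer multiplies, then shows how splitting each value into two int8 "digits" recovers most of the lost precision, which is the basic trick behind schemes that emulate wide floating point on narrow AI hardware:

```python
# Toy sketch: emulating higher-precision arithmetic with 8-bit integers.
# Illustrative only -- real emulation schemes and real hardware are more involved.

def quantize(v, scale):
    """Round each element of v/scale to the nearest value in [-127, 127]."""
    return [max(-127, min(127, round(x / scale))) for x in v]

xs = [0.13 * (k + 1) for k in range(64)]
ys = [0.07 * (k + 1) for k in range(64)]
sx = max(abs(x) for x in xs) / 127
sy = max(abs(y) for y in ys) / 127

exact = sum(x * y for x, y in zip(xs, ys))   # full-precision reference

# One int8 digit per value: a single integer dot product, then rescale.
qx, qy = quantize(xs, sx), quantize(ys, sy)
approx1 = sum(a * b for a, b in zip(qx, qy)) * sx * sy

# Two int8 digits per value: t ~ hi + lo/254, so the dot product becomes
# four integer dot products recombined at full precision.
def split(v, scale):
    hi = quantize(v, scale)
    lo = [max(-127, min(127, round((x / scale - h) * 254))) for x, h in zip(v, hi)]
    return hi, lo

hx, lx = split(xs, sx)
hy, ly = split(ys, sy)
dot = lambda a, b: sum(p * q for p, q in zip(a, b))
approx2 = (dot(hx, hy) + (dot(hx, ly) + dot(lx, hy)) / 254
           + dot(lx, ly) / 254 ** 2) * sx * sy

err1 = abs(approx1 - exact) / exact
err2 = abs(approx2 - exact) / exact
print(f"one int8 digit: rel err {err1:.2e}; two digits: rel err {err2:.2e}")
```

Each extra low-precision "digit" costs more integer multiplies but buys roughly another factor of a few hundred in accuracy; production approaches trade that off against the hardware's integer throughput.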

The Horizon Supercomputer: Enabling Discovery at Scale
TACC's upcoming Horizon supercomputer, scheduled for deployment in 2026 as part of the National Science Foundation's (NSF) Leadership-Class Computing Facility, represents a quantum leap in computational capability. Expected to deliver 10× the performance of Frontera for traditional workloads and 100× for AI applications, Horizon will feature the largest public deployment of NVIDIA Blackwell processors available to researchers without cloud-like pricing.

Stanzione and his team have already identified 11 flagship projects that will launch with Horizon, spanning astronomy, materials science, molecular dynamics for drug discovery, seismology, and natural disaster prediction. But the most exciting aspect of these systems, he notes, is their unpredictability: "Discovery is by its nature somewhat unpredictable. We will be surprised, and something will happen." This element of surprise has characterized TACC's impact throughout its history. The center has supported work by four Nobel Prize winners, including David Baker from the University of Washington, who has been using TACC systems since 2005 for protein folding research that ultimately contributed to his 2024 Nobel Prize in Chemistry.

The Convergence Challenge: Power, Efficiency, and Innovation
The rapid growth of AI computing has created unprecedented challenges in power consumption and efficiency. Stanzione estimates that current AI development focuses primarily on being first and fastest, with little attention to efficiency—a luxury that scientific computing has never been able to afford. "We've never had the kind of money to throw around the hundreds of billions that they're throwing around in the AI space," he observes.

The solution, Stanzione argues, lies in software optimization rather than simply building more data centers. He points to DeepSeek's breakthrough in early 2025 as a prime example. By focusing intensively on software optimization rather than raw scaling, the company achieved 3-4× performance improvements while using significantly less power. "If your argument is, can I build $400 billion of data centers or with $10 million in software, where can I make that $200 billion in data centers? It was a pretty obvious answer to me," reflects Stanzione.

The industry faces fundamental limits in both silicon scaling and precision reduction. As transistor features approach atomic scales and precision requirements bottom out, the field must turn to architectural innovations, custom silicon designs, and potentially quantum accelerators to maintain the pace of computational advancement.


Quantum Computing: The Long View
While quantum computing generates significant attention and frequent questions about deployment timelines, Stanzione maintains a characteristically practical perspective. "I always go back to when I have users that actually want to use it," he responds to questions about quantum system deployment. "Right now you deploy big quantum systems to learn a lot about how to run quantum systems, and there's nothing wrong with that, but it doesn't serve my end science users that ultimately pay the bills."

Looking forward, Stanzione sees quantum accelerators as more likely than general-purpose quantum machines in the medium term. This hybrid approach aligns with his user-first philosophy to deploy quantum technologies when they solve real scientific problems more effectively than classical alternatives.

The Workforce Imperative
Perhaps no challenge is more critical than preparing the next generation of researchers and technologists for a world where AI, HPC, and quantum technologies converge. For TACC, workforce development is integral to advancing scientific progress. Many of its 9,000 annual users are first-year graduate students, making continuous education and onboarding essential.

Stanzione frames the workforce challenge in terms of historical precedent, noting that most economic growth over the past century has been driven by technology. From agriculture's transformation through improved productivity to the creation of entirely new industries, technological advancement has consistently created more jobs than it has displaced.

The key insight from this historical perspective is that fundamental research—often with no apparent practical application—ultimately enables transformative innovations. "In 1905 when doing work in relativity, Einstein did not think one day, if I do this right, I'll be able to get a taxi from my phone," Stanzione notes, yet GPS requires relativistic corrections to function accurately.
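The size of those corrections is easy to estimate from first-year relativity. The back-of-the-envelope Python below (standard textbook constants, not figures from the article) combines the special-relativistic slowdown of the orbiting clock with the general-relativistic speedup from sitting higher in Earth's gravity well:

```python
import math

# Back-of-the-envelope estimate of the daily clock drift of a GPS satellite
# relative to a ground clock, using weak-field approximations.
c   = 2.998e8      # speed of light, m/s
GM  = 3.986e14     # Earth's gravitational parameter, m^3/s^2
R_e = 6.371e6      # Earth's mean radius, m
r   = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

v  = math.sqrt(GM / r)                 # circular orbital speed, ~3.9 km/s
sr = -v ** 2 / (2 * c ** 2)            # special relativity: moving clock runs slow
gr = GM * (1 / R_e - 1 / r) / c ** 2   # general relativity: higher clock runs fast

day = 86400.0
drift_us = (sr + gr) * day * 1e6       # net drift, microseconds per day
print(f"SR: {sr*day*1e6:+.1f} us/day, GR: {gr*day*1e6:+.1f} us/day, "
      f"net: {drift_us:+.1f} us/day")
```

The net comes out to roughly +38 µs per day; left uncorrected, at light speed that drift would accumulate into ranging errors of several kilometres per day, which is why GPS satellite clocks are deliberately offset before launch.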

This long-term view underscores the importance of sustained investment in research infrastructure and education. Stanzione warns that declining government investment since the 1970s threatens the innovation ecosystem that has driven American technological leadership and cautions, "We just look at the last bit of the product and not all the things that it took to get there."


Looking Ahead: Challenges and Opportunities
The coming years will test the scientific computing community's ability to navigate several converging challenges. The commercial value of AI threatens to overwhelm traditional scientific computing through competition for hardware, talent, and attention. Power consumption continues to grow at unsustainable rates. Fundamental limits in silicon scaling and precision reduction approach rapidly.

As Forough Ghahramani, Ed.D., Assistant Vice President for Research, Innovation, and Sponsored Programs at Edge, observes, "Dr. Dan Stanzione's leadership at TACC continues to shape the future of advanced computing in this country. His vision, spanning HPC, AI, and quantum, is driving open science forward at unprecedented scale. As a thought leader and builder of national research infrastructure, his work through systems like Frontera, Vista, and the upcoming Horizon supercomputer reflects an unwavering commitment to accessibility, excellence, and innovation."

Yet Stanzione remains optimistic about the field's ability to adapt and innovate. The same community that successfully transitioned from custom supercomputing hardware to commodity clusters in the 1990s now faces another architectural transition. Success will require the same combination of technological innovation, pragmatic decision-making, and unwavering focus on scientific utility that has characterized high-performance computing's evolution.

For higher education leaders, Stanzione's message is clear: investment in computational infrastructure and workforce development is not optional but essential for maintaining America's position as a global leader in scientific innovation. The discoveries enabled by tomorrow's computational tools may be unpredictable, but the need for those tools is certain.

As TACC prepares for the Horizon era and the broader scientific community grapples with the AI revolution, Dan Stanzione's user-centric philosophy offers a valuable guide, “Stay focused on what serves science, remain adaptable to technological change, and never lose sight of the human element—the researchers, students, and technologists who ultimately determine the impact of any computational infrastructure.”

The future of open science depends not just on building bigger, faster computers, but on building systems that serve the scientists who will use them to unlock the next century of discovery.

The post Shaping the Future of Computational Science appeared first on Edge, the Nation's Nonprofit Technology Consortium.


CMMC on Campus

The post CMMC on Campus appeared first on Edge, the Nation's Nonprofit Technology Consortium.

Tuesday, 30. September 2025

Digital Identity NZ

Inspiring Trust across Aotearoa NZ Inc.

The energetic Digital Trust Hui Taumata at Te Papa lit a fire under NZ Inc’s digital identity movement. Industry leaders, policymakers, iwi kaitiaki, and innovators gathered to collectively declare that solving for trust is mission critical to New Zealand’s digital future. The post Inspiring Trust across Aotearoa NZ Inc. appeared first on Digital Identity New Zealand.

Kia ora,

The energetic Digital Trust Hui Taumata at Te Papa lit a fire under NZ Inc’s digital identity movement. Industry leaders, policymakers, iwi kaitiaki, and innovators gathered to collectively declare that solving for trust is mission critical to New Zealand’s digital future.

The Hui Taumata concluded with an unmistakable sense of urgency, shared purpose and appetite to collectively deliver on the vision for an open and decentralised marketplace for trusted credential proof access to products and services.

Recognising that excitement alone is not a strategy, we have been actively working on our new strategic plan and member value proposition, as signaled at the Hui Taumata.

Our refreshed strategy aims to strengthen trust in the digital economy through a united, open and collaborative stakeholder ecosystem. With increasing distrust in our institutions and misinformation about NZ’s empowering approach to identity, we most definitely have our work cut out for us!

Assuming we can remain a trusted voice, Digital Identity New Zealand (DINZ) is ideally positioned to connect, promote and advance trust solutions.

Major initiatives for 2025-26

Investable value proposition for members and partners
Aotearoa NZ Inc communications strategy and go-to-market narrative aligned to Te Ao Māori data sovereignty principles (e.g. taonga, tikanga & kaitiakitanga)
Trusted Aotearoa NZ Inc ecosystem architecture and sovereign data infrastructure.

There is increasing consensus that an effective, use case and benefits focused communication framework is crucial to market adoption. 

Working groups will build momentum around demand-side adoption and a trusted vendor ecosystem.

Key Takeaways for Members

The time is now for trusted decentralised identity:

Global drivers: Trading partners and visitors are adopting interoperable credentials, requiring NZ exporters and operators to adapt
Domestic demand: Banks, insurers, corporate NZ, government agencies and the SME sector face unsustainable delivery, compliance and fraud costs, and increasing risks / chaos
Technology maturity: Open-standard wallets, zero-knowledge proofs, and consent dashboards are ready for safe and privacy enhancing adoption.

Key value drivers and critical success factors:

Security, machine readable traceability and privacy by design
National DISTF trust-mark & open standards for an interoperable framework
Government & tier-1 procurement mandates to drive adoption
Cross-sector collaboration to demonstrate ROI in various sectors (public services, banking, health, exports, and SMEs).

The plan outlines significant economic opportunities over the next five years from DISTF credential marketplace adoption.

See the complete sector breakdown.

New Working Group Highlights

We have established two new working groups to support market adoption:

Demand side (reference architecture): Focus on building adoption through use cases, reference architecture, and sector-specific strategies (such as education, health, social services; open banking and payments, and domestic commerce – SME focus).

DINZ has worked constructively with the DIA to establish and support core policy and technical working groups, plus accessibility and Te Ao Māori groups. 

Supply side (trusted vendor ecosystem): Focus on a common aligned approach for verifiable credentials (VCs) that delivers trust, simplicity, and safeguards for holders, while ensuring safe, interoperable, and trusted VC delivery.

At the Hui Taumata in August 2025, Minister Collins called for vendors to align on their role in building a trusted verifiable credentials ecosystem for New Zealand.

Following this, Craig Grimshaw of Sush Labs and Andrew Mabey of UNIFY Solutions are co-chairing a newly established Vendor Working Group on verifiable credentials for people of New Zealand.

With Digital Identity New Zealand (DINZ) hosting, Andrew and Craig will guide the group to ensure outcomes are collaborative, transparent and aligned with national goals.

Together, Andrew and Craig bring complementary expertise: Sush Labs in wallet design and the mobile user experiences critical for adoption, and UNIFY Solutions in consulting, architecture and government-scale identity implementations. Both provide balanced leadership to convene vendors around a shared purpose: delivering a safe, interoperable and trusted ecosystem for all of New Zealand.

Digital Public Infrastructure 

DINZ increasingly plays a thought leadership role in areas such as multi-modal biometric frameworks, authentication including identity binding and proofing, credential issuance and validation, access control, user-controlled data storage, and trusted data processing.

We intend to champion the importance of a Trust Aotearoa NZ Inc jurisdiction level namespace (i.e. name service) to a truly user centric decentralised ecosystem architecture. 

And finally, we continue to make proactive submissions as part of NZ’s regulatory modernisation programme and guidance on emerging standards including the new assurance hierarchy in the updated DISTF.

Executive Council Nominations

Nominations will open on 13 October. The results of the election will be announced at the Annual Meeting on 4 December.

The following board positions are available for election this year:

3 Major Corporate Seats
2 Other Corporate Seats
2 SME & Start Up Seats

Start thinking about whether you are interested now. 

Industry News

We continue to experience an ever-increasing buzz in the digital identity space both making and breaking news domestically and around the world. Here’s a selection:

Government digital changes to bring big savings | Beehive.govt.nz
Proactive-release_Driving-down-the-cost-of-digital-in-government
Britain to introduce compulsory digital ID for workers | rnz.co.nz
BBC News article on digital ID
UK mobile operators launch age verification and anti-fraud APIs through GSMA Open Gateway Initiative | libertyglobal.com
My take on Digital Driver’s Licenses: Andy Higgs Newstalk interview
Full Interview with Leah Panapa on the Platform

Upcoming Interoperability Event: 16-17 November 2025

Chris Goh and Belinda Taylor’s NZTA team are hosting an interoperability event in Wellington on 16–17 November, leading into the ISO Working Group 10 meetings that week. These international mDL/mdoc community events regularly confirm implementation feasibility, gather feedback to enhance standards quality, and maintain market momentum for mDL and mdoc implementations.

Sponsorship opportunities available:

Sunday 16 Nov: Lunch (approx. 120 attendees) and/or Coffee cart
Monday 17 Nov: Lunch (approx. 120 attendees) and/or early evening canapés and non-alcoholic drinks (approx. 120 attendees)

Please contact Gabrielle.George@dia.govt.nz if you are interested.

Next Actions for Members

Engage: Nominate representatives to the reference architecture, vendor working and special interest working groups.
Contribute: Share sector use cases for Circles of Trust white papers.
Communicate: Adopt the new messaging framework and amplify the everyday benefits of trusted digital identity.

Together, we are shaping the trusted credential ecosystem that will empower New Zealanders, protect privacy, and unlock economic growth through a unified Aotearoa NZ Inc approach.

Tihei mauri ora!

Andy Higgs
Executive Director,
Digital Identity NZ

Read full news here: Inspiring Trust across Aotearoa NZ Inc.

Subscribe for more

The post Inspiring Trust across Aotearoa NZ Inc. appeared first on Digital Identity New Zealand.


FIDO Alliance

First Credit Union: Transforming Digital Banking with Passkeys

Corporate Overview Founded in 1955, First Credit Union is a member-owned financial institution in New Zealand with over 60,000 members. The organization delivers secure and innovative digital banking experiences through […]
Corporate Overview

Founded in 1955, First Credit Union is a member-owned financial institution in New Zealand with over 60,000 members. The organization delivers secure and innovative digital banking experiences through its comprehensive online banking platform. Members access their accounts via mobile app and browser options to manage finances anytime, anywhere. The credit union has embraced cutting-edge authentication technology to enhance both security and user experience for its diverse membership base.

Executive Perspective

“Implementing FIDO authentication through Authsignal has been a game-changer for our members’ digital experience. It’s secure, seamless and sets a new standard for trust in online banking.” – Herb Wulff, Treasury and Agency Banking Manager, First Credit Union

The Business Challenge

As a progressive modern financial institution, First Credit Union has embraced a path toward digital transformation. As part of its journey, it identified several critical challenges impacting both security and user experience.

Those challenges include:

Cybersecurity Risks. The organization wanted to reduce reliance on passwords, one of the most common attack vectors. First Credit Union sought phishing-resistant authentication methods to mitigate growing security threats.
User Experience Friction. Traditional multi-factor authentication methods often create friction in the login process. The credit union aimed to make secure access feel seamless and intuitive for members with varying technical comfort levels.
Cross-Platform Compatibility. Members access the platform across diverse devices and operating systems. First Credit Union needed a solution that worked consistently across mobile apps and web browsers.
Integration Complexity. The new authentication solution had to integrate smoothly with existing infrastructure. This approach would minimize disruption to internal teams and members during deployment.

Why First Credit Union Chose Passkeys

First Credit Union conducted a thorough evaluation of several traditional and emerging authentication methods. The goal was to find the right balance between security, usability and accessibility for its diverse membership base.

Traditional Options Fell Short

The team explored multiple multi-factor authentication (MFA) methods but found significant drawbacks with each approach. Authenticator apps can enhance security but have vulnerabilities that can be exploited due to their reliance upon one-time codes. They also require members to install and manage a separate app, which added complexity and friction. Email magic links provided convenience but created usability challenges and vulnerability to phishing and email interception risks.

Device credentials delivered a more seamless experience but lacked the standards-based interoperability needed across platforms. The credit union also considered standalone biometric authentication, but these solutions lacked the robust security guarantees and cross-platform compatibility that FIDO standards provide.

A critical insight emerged: offering too many authentication options risked confusing members, especially given the wide range of technical comfort levels across their demographic. A fragmented experience could lead to frustration, support overhead and reduced adoption.

FIDO Delivered What Others Couldn’t

FIDO authentication stood apart from alternatives that still presented significant vulnerabilities to phishing and lacked seamless, standards-based interoperability. The technology offered compelling advantages:

Phishing resistance eliminates shared secrets like passwords or OTPs that attackers can intercept or steal. The passwordless experience reduces friction for members while making access to online banking quicker and more secure. FIDO2 specification ensures seamless authentication across a wide range of devices and platforms, supporting both their app and browser-based services.

The solution improved member trust and satisfaction through enhanced security and streamlined login processes. It also reduced support overhead from password resets and login issues, allowing the team to allocate resources more efficiently and improve overall service quality.
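The phishing resistance described above comes largely from origin binding: the browser, not the page, reports the origin, and a passkey is scoped to the site where it was registered. The toy Python below is a rough sketch of that flow, using an HMAC key as a stand-in for the per-site key pair a real authenticator would hold (actual WebAuthn uses public-key signatures, so the relying party never stores a secret):

```python
import hashlib, hmac, json, secrets

class Authenticator:
    """Toy authenticator: one secret per origin, standing in for a passkey's key pair."""
    def __init__(self):
        self._keys = {}

    def register(self, origin):
        self._keys[origin] = secrets.token_bytes(32)
        return self._keys[origin]   # in real FIDO, only a *public* key leaves the device

    def assert_login(self, origin, challenge):
        key = self._keys.get(origin)
        if key is None:
            # A look-alike phishing domain has no credential scoped to it.
            raise KeyError(f"no credential registered for {origin}")
        client_data = json.dumps({"origin": origin, "challenge": challenge}).encode()
        sig = hmac.new(key, client_data, hashlib.sha256).digest()
        return client_data, sig

def relying_party_verify(key, expected_origin, challenge, client_data, sig):
    data = json.loads(client_data)
    return (data["origin"] == expected_origin
            and data["challenge"] == challenge
            and hmac.compare_digest(sig, hmac.new(key, client_data, hashlib.sha256).digest()))

authn = Authenticator()
key = authn.register("https://firstcu.example")

# Legitimate login: the browser reports the true origin, matching what the RP expects.
chal = secrets.token_hex(16)
cd, sig = authn.assert_login("https://firstcu.example", chal)
ok = relying_party_verify(key, "https://firstcu.example", chal, cd, sig)

# Phishing attempt: the look-alike origin simply has no usable credential.
try:
    authn.assert_login("https://f1rstcu.example", chal)
    phished = True
except KeyError:
    phished = False
```

There is no password or one-time code for a phishing page to capture; the most it could obtain is an assertion bound to its own origin, which the real site will reject.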

Implementation Overview

First Credit Union partnered with Authsignal to implement a FIDO Certified passkey infrastructure. The team followed a structured rollout approach:

Phase 1: Internal Testing and Validation

The organization conducted rigorous internal testing to validate passkey integration across mobile and browser platforms. This phase ensured technical stability and compatibility.

Phase 2: Member Education and Communication

First Credit Union launched a targeted communication campaign that included:

Clear messaging about passkey benefits
Step-by-step setup and usage guides
Comprehensive support resources for onboarding

Phase 3: Gradual Branch Network Rollout

The team introduced passkeys in phases across the branch network. This approach allowed for performance monitoring, feedback collection and iterative improvements.

Phase 4: Monitoring and Optimization

Post-launch activities included tracking adoption metrics and authentication usage patterns. Member feedback drove user experience refinements.

Results and Impact

First Credit Union achieved impressive adoption and security outcomes since launching passkeys:

Adoption Metrics

58.4% of members adopted the new authentication experience
54.5% of all authentications now use passkeys
Over 23,500 members enrolled in multi-factor authentication

Member Experience

Most members provided positive feedback, citing ease of use and improved trust. Passkeys enabled simplified login through device-native biometrics like facial and fingerprint recognition. Members enjoy a seamless experience across mobile and web platforms.

Operational Benefits

The organization reduced support overhead from password-related issues. First Credit Union enhanced its security posture with phishing-resistant authentication. The infrastructure now aligns with global standards for future readiness.

Future Vision

FIDO authentication serves as the cornerstone of First Credit Union’s long-term digital security strategy. The organization plans these expansions:

Secure Transaction Authentication: Extending passkeys to high-risk actions like transaction approvals
Internal Systems Access: Implementing FIDO-based authentication for staff systems
Third-Party Integrations: Leveraging FIDO’s interoperability for future service integrations

Key Recommendations

First Credit Union offers these insights for organizations considering FIDO implementation:

1. Understand Your User Base: Assess members’ devices, digital habits and comfort levels to tailor the experience appropriately

2. Simplify the Experience: Avoid overwhelming users with too many authentication options

3. Choose the Right Partner: Work with trusted providers who offer expertise in passkey infrastructure

4. Communicate Clearly: Educate users early with clear messaging about benefits and simple setup guides

5. Test Thoroughly: Conduct comprehensive internal testing across platforms before member-facing deployment

Read the Case Study

Kantara Initiative

In conversation with ….. Amit Sharma, IDEMIA Public Security


Amit Sharma on his passion for all things ‘identity’ and the challenges he sees for the market in general

The post In conversation with ….. Amit Sharma, IDEMIA Public Security appeared first on Kantara Initiative.

Monday, 29. September 2025

Digital Identity NZ

Digital Identity NZ – Major initiatives for 2025-26:

- Investable Value Proposition for members and partners
- Aotearoa NZ Inc Communication Strategy and Go-To-Market Narrative aligned to Te Ao Māori data sovereignty principles (e.g. taonga, tikanga & kaitiakitanga)
- Trusted Aotearoa NZ Inc ecosystem architecture and sovereign data infrastructure

There is increasing consensus that an effective, use case and benefits focused communication framework is crucial to market adoption. 

Working groups will build momentum around demand-side adoption and a trusted vendor ecosystem.

Key Takeaways for Members

The Time is Now for Trusted Decentralized Identity:

- Global Drivers: Trading partners and visitors are adopting interoperable credentials, requiring NZ exporters and operators to adapt.
- Domestic Demand: Banks, insurers, corporate NZ, government agencies and the SME sector face unsustainable delivery, compliance and fraud costs and increasing risks.
- Technology Maturity: Open-standard wallets, zero-knowledge proofs, and consent dashboards are ready for safe, privacy-enhancing adoption.

Key Value Drivers and Critical Success Factors:

- Security, machine-readable traceability and privacy by design
- National DISTF Trust-Mark & Open Standards for an interoperable framework
- Government & Tier-1 Procurement Mandates to drive adoption
- Cross-sector collaboration to demonstrate ROI in various sectors (public services, banking, health, exports, SMEs)

The plan outlines significant economic opportunities over the next five years from DISTF credential marketplace adoption.

| Category | Details / Metrics |
| --- | --- |
| National 5-Year Upside | NZD 8–16 B mid case combined cost savings, fraud reduction, export premiums and new-revenue uplift. Higher if combined with NZD stablecoin enabled trade. |
| Typical Project Payback | 18–36 months. Fastest ROI: financial services, government services, health, agriculture/food exports, SMEs (eInvoicing + KYB reuse) |
| Public-Sector Efficiency | 20–30% reduction in manual verification tasks (cross-agency VC issuance & consent dashboards) |
| Fraud / Leakage Reduction | 30–50% decrease in high-risk processes (banking, benefits, e-commerce, health) |

Complete sector breakdown:

| Sector | Estimated 5-Year Value (NZD) | Primary Value Levers |
| --- | --- | --- |
| Financial Services | $1.2–2.0 B | Reusable KYC/KYB, instant onboarding, fraud loss reduction, account credentials |
| Government & Social Services | $1.0–1.8 B | Government app / wallet upgrade with reusable entitlement credentials, e-signatures, e-voting pilots |
| Health & Aged Care | $1.3–2.2 B | Patient/provider credentials, e-prescriptions, research data sharing |
| Education & Skills | $0.4–0.8 B | Skills passports, micro-credential wallets |
| Agriculture & Food | $1.1–1.9 B | Export provenance, license to operate, biosecurity, monitoring, product passports |
| Transport & Logistics | $0.7–1.4 B | Chain-of-custody, border clearance, verified telematics, traceability |
| Energy & Utilities | $0.5–1.0 B | Smart-meter attestations, carbon/REC tracking |
| Construction & Property | $0.6–1.1 B | Digital building consents, product passports |
| Tourism & Visitor Economy | $0.5–0.9 B | Verified traveller profiles, seamless border flows, personalised concierge |
| Retail & Consumer | $0.6–1.2 B | Age assurance, product authenticity, loyalty portability |
| Media & Creative | $0.3–0.7 B | Content provenance credentials, creator rights & royalties |
| SMEs & Professional Services | $0.9–1.6 B | eInvoicing, verified suppliers, payroll/workforce credentials, automation |

The post Digital Identity NZ – Major initiatives for 2025-26: appeared first on Digital Identity New Zealand.


Internet Safety Labs (Me2B)

Reusable ISL Graphics


Resources on this page are made available under Creative Commons Attribution Non-Commercial ShareAlike 4.0 International Public License as found at: https://creativecommons.org/licenses/by/4.0/legalcode

2022 K12 Edtech Benchmark Infographics

2022 K12 Edtech Benchmark Infographics: Findings Report 1

2022 K12 Edtech Benchmark Infographics: Findings Report 2

2022 K12 Edtech Benchmark Infographics: Findings Report 3

Consumer Sentiment Report

Did You Knows

Do You Know Where Your Data Is

Principles of Safe Software

Miscellaneous

The post Reusable ISL Graphics appeared first on Internet Safety Labs.


We Are Open co-op

Conscious Discourse for Activists and Educators

Reflections on helping Amnesty International UK using Open Source technologies.
Based on an original by Visual Thinkery for WAO

As part of their wider digital transformation plan, we’re currently helping Amnesty International UK (AIUK) with a new community platform for activists and educators. As we’ve done for almost a decade now, and as befits the name of our cooperative, we’re working openly on this project. In fact, this post is informed by one we wrote for the AIUK Community Platform Project blog.

Our last post about the AIUK project on this blog talked about the importance of community calls. In this one we want to talk about community building for the kinds of conversations that activists and educators need to have. We will be using Discourse to power the community platform after our research earlier this year led to a longlist of 29 platforms and a shortlist of 4 platforms. You can check out our comparison spreadsheet here.

A note about Discourse

This is not a post about Discourse as a platform per se, but it is important to note that, by default, it provides many useful features and settings. The stated aim of its co-founders is to “democratize online community and teamwork by raising the standard of civilized discourse on the Internet.” As such, it has features such as user trust levels, content warnings, and role/status badges that have been provided thoughtfully in a way that other systems haven’t yet managed.

It’s also Open Source, allowing AIUK to host it wherever they choose, which is increasingly important given the global rise of authoritarianism. Human rights organisations must, sadly, be prepared for threats such as hacks, bots and fake accounts that could compromise discussions.

Conscious configuration

A series of personas, created using Open Peeps

The only way to know how users will interact with a system you have created and/or configured is to put them in front of it. They will surprise you in both positive and negative ways, which you can then make a note of and reconfigure the system accordingly. 

The platform or software you choose constrains what is or is not possible for users. It provides a set of “affordances” creating an environment which offers the individual different options. For example, we have decided against the affordances provided by real-time chat apps such as Slack or Rocket.Chat in favour of a more ‘discussion area’ vibe with Discourse.  

Over and above the intentional decisions around platform choice, we also have to be mindful about the way we initially configure it for our target audience. Unlike some other platforms, almost everything is configurable in Discourse. This means we need to do the work to make the platform as easy to use and “intuitive” to community members as possible. 

AIUK has a widespread and diverse community, so we need to think about how that community currently exists, while preparing for a move to a new platform. We now have a very long configuration document documenting these conversations and reminding ourselves why we made certain decisions around set up and defaults. This includes everything from the theme components we have installed, to the way we’re dealing with user permissions, through to the names of buttons we’ve changed.  

It is very unlikely that we will get everything right the first time around. We will receive useful feedback during the training and piloting phases, and we will also change things based on what we observe users actually doing (rather than just what they say they will do!)

The wider ecosystem

Image CC BY Visual Thinkery for WAO

The new AIUK community platform does not sit alone in a vacuum. Nor is it the answer to every situation in which activists and educators may want or need to interact. For example, end-to-end encrypted communications are much better dealt with in a secure app such as Signal. As a result, we are advising AIUK community members to see the new platform as one part of a wider ecosystem.

This ecosystem also includes a new website and knowledge hub which is being developed by Torchbox. Therefore, in addition to thinking through how activists and educators interact within the community platform, we need to consider how different types of users might move through the entire ecosystem. How do they become aware of what is available? How do we meet their needs? How might we enable them to meet their activism and education goals?

The demographics of AIUK skew both young and old. That is to say, there is a majority in the 50+ age group, but there are also many young people and university students who are actively involved with Amnesty campaigns. Our research showed that these two groups tend to use very different communication tools. The older demographic tend to use email as their primary means of communication, whereas the younger demographic tend to use social media.

Our aim is for the community platform to meet the needs of as many different AIUK groups as possible. For example, an important requirement was the ability to receive updates from the platform via email, and also for community members to be able to reply to discussions from the comfort of their inbox. 

We cannot solve everything with the community platform, but we are configuring a solution that can respond to users’ needs over time.

Get involved!

Whether or not you currently consider yourself part of the AIUK community, you are very welcome to lend your thoughts and expertise to this project. Follow the project blog over on the Amnesty UK website, share positive examples you have of activists and educators engaging in constructive discussion, and help us build a space which helps the AIUK community protect people wherever justice, freedom, truth and dignity are denied.

Friday, 26. September 2025

DIF Blog

DIF at the UN: Bringing Decentralized Identity and AI to the Global Stage

Today, two Decentralized Identity Foundation (DIF) members brought their expertise to an influential global audience at Digital@UNGA, a high-level event convened by the ITU, UNDP and WDTA during the 80th UN General Assembly. 

In a critical session titled “Trusted Digital Identity for People & AI,” the discussion moved beyond theory to address the real-world challenges of deploying Digital Public Infrastructure (DPI) that is secure, equitable, and future-proof. In partnership with Gambian Ambassador Muhammadou Kah, Chairman of the UN Commission on Science and Technology for Development, the session focused on turning the UN’s digital identity strategy into a deployable reality, grounded in the core principles of interoperability, privacy by design, and inclusion for all.

The core challenge addressed by the panel was the persistent gap between policy and production. While global goals like SDG 16.9 are clear, the goal of providing legal identity for all by 2030 is often stalled by protocol fragmentation and the lack of a robust architectural model for a world where both people and AI agents are first-class citizens. The session explored how to bridge this gap by encoding principles as measurable engineering requirements, ensuring that concepts like privacy and interoperability can be verified through rigorous, evidence-based testing before procurement and large-scale deployment.

Representing DIF, Matt McKinney, CEO of AIGNE, an ArcBlock company, and Co-Chair of the DID Method Spec Working Group, presented a framework for building this next generation of DPI. His talk focused on the necessity of a symmetrical architecture that serves both people and AI agents with the same high standards of security and control. He argued that for this mixed-initiative future to be safe, AI agents must use controller-bound credentials, operate with least-privilege, time-boxed permissions, and be subject to fast, verifiable revocation. He outlined a phased, low-risk path for policymakers to move from architectural requirements to a multi-vendor sandbox, then to a pilot, and finally to a scalable rollout, all based on open standards and anchored to objective conformance proofs.
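The requirements outlined above (controller-bound credentials, least-privilege scopes, time-boxed permissions, and fast, verifiable revocation) can be sketched in a few lines. This is purely illustrative: the function names, DID strings, and grant fields are assumptions for the example, not part of any DIF specification.

```python
import time
import uuid

REVOKED = set()  # stand-in for a fast, verifiable revocation list

def issue_grant(controller, agent, scope, ttl_seconds=300):
    """Issue a least-privilege, time-boxed permission grant bound to a controller."""
    return {
        "id": str(uuid.uuid4()),
        "controller": controller,   # the human/org accountable for the agent
        "agent": agent,
        "scope": scope,             # only the actions actually needed
        "expires_at": time.time() + ttl_seconds,
    }

def revoke(grant):
    REVOKED.add(grant["id"])

def is_authorized(grant, action):
    """Check revocation, expiry, and scope before allowing an action."""
    if grant["id"] in REVOKED:
        return False
    if time.time() >= grant["expires_at"]:
        return False
    return action in grant["scope"]

grant = issue_grant("did:example:alice", "did:example:agent-7", {"read:calendar"})
print(is_authorized(grant, "read:calendar"))   # True: in scope, not expired
print(is_authorized(grant, "send:payment"))    # False: outside least-privilege scope
revoke(grant)
print(is_authorized(grant, "read:calendar"))   # False: revoked
```

In a production system the grant would be a signed credential verifiable against the controller's identity, rather than an in-memory dictionary.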

Nicola Gallo, Co-Founder of Nitro Agility and Co-Chair of DIF's new Trusted AI Agents Working Group, also presented at length. He emphasized the importance of building a trust stack for AI agents, one that clarifies the types of trust required, considers the impact of AI on social and market structures, and defines protocols that can effectively address these concerns. In his view, a sustainable path is to anchor trust in the identities of the executors themselves, enabling distributed chains of attested actions. Without this granular and auditable accountability, we risk relying too heavily on impersonation models, where the role of the actual executor may be unclear and trust becomes harder to govern or verify. Ultimately, the key lies in giving workloads their own flexible and verifiable identities, making it possible to trace and govern responsibilities across distributed systems, thus envisioning a new Internet of Trust.
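A distributed chain of attested actions can be approximated with hash-linked records, where each record names the workload that executed the action and commits to the previous record. The record fields and workload identifiers below are hypothetical; a real deployment would sign each record with the executor's verifiable identity rather than rely on hashing alone.

```python
import hashlib
import json

def attest(chain, executor_id, action):
    """Append a record naming the executor and committing to the prior record."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"executor": executor_id, "action": action, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every link; any tampering breaks the chain."""
    prev = "genesis"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

chain = []
attest(chain, "workload:indexer", "fetched source document")
attest(chain, "workload:summarizer", "produced summary")
print(verify_chain(chain))        # True
chain[0]["executor"] = "spoofed"  # tampering with the executor is detectable
print(verify_chain(chain))        # False
```

The point of the sketch is the accountability property: each action is attributable to a named executor, not to an agent impersonating its principal.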

As a next step, the members will publish a 2-page outcomes brief and an implementer checklist for policy makers and practitioners within the next two weeks. Stay tuned for links to these practical documents to be open-sourced by DIF.


FIDO Alliance

TechGenyz: Password-Free Future: How Biometrics & Passkeys Unlock True Security 


While biometrics offer convenience, passkeys provide the backbone for the next stage in authentication. Developed as a part of a global effort by Apple, Google, Microsoft, and the FIDO Alliance, passkeys replace traditional passwords with cryptographic keys stored securely on a user’s device. Instead of typing in a word or a phrase, users can confirm their identity through a fingerprint, a face scan, or a prompt in a trusted device. 
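The flow described here (the server issues a challenge, the device signs it after a local biometric check) can be sketched as follows. Note the deliberate simplification: real passkeys use WebAuthn public-key signatures, so the server stores only a public key and no shared secret; this dependency-free sketch substitutes an HMAC with a per-site key purely to show the challenge-response shape, and the class and method names are invented for the example.

```python
import hashlib
import hmac
import secrets

class Device:
    """Holds one key per site; keys never leave the device."""
    def __init__(self):
        self._keys = {}

    def register(self, site):
        self._keys[site] = secrets.token_bytes(32)
        return self._keys[site]  # in real WebAuthn this would be a *public* key

    def sign(self, site, challenge):
        # On a real device this step is gated by fingerprint or face recognition.
        return hmac.new(self._keys[site], challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, site, registered_key):
        self.site, self._key = site, registered_key

    def new_challenge(self):
        # A fresh random challenge defeats replay of old responses.
        return secrets.token_bytes(16)

    def verify(self, challenge, signature):
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

device = Device()
server = Server("example.com", device.register("example.com"))
challenge = server.new_challenge()
print(server.verify(challenge, device.sign("example.com", challenge)))  # True
print(server.verify(challenge, b"guessed" * 5))  # False: nothing to phish or type
```

Because the user never types a secret, there is nothing for a phishing page to capture; the per-site key also prevents credential reuse across sites.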


Forbes: The iPhone’s New Camera? Whatever. The iPhone’s New Wallet? Cool. 


Apple’s approach to identity in wallets is built on open standards, including the W3C’s Digital Credentials API and FIDO Alliance protocols. This is important to identity nerds like me because they are standards that enable privacy-enhancing exchanges of digital credentials, allowing consumers to (crucially) prove what they are (over 18, entitled to drive, holding a valid ticket) without having to divulge who they are.  


Biometric Update: Bitwarden among first to implement FIDO credential exchange standards on iOS 26


Apple iOS 26 has landed, and it includes support for FIDO Alliance Credential Exchange standards to enable secure, end-to-end encrypted transfers of passkeys, passwords and other credentials across platforms and apps. A release from large open-source login management service Bitwarden says it is “among the first credential managers on iOS 26 to implement the Credential Exchange standards, helping lead passkey and password portability with a secure, standardized way for users to move credentials between Apple Passwords, Bitwarden and other compatible services.” 


Human Colossus Foundation

EHDS Promises & Pitfalls: The Case of Genomic Data Integration in Personalised Medicines


Human Colossus Foundation co-organised the NextGen Pre-event at MyData 2025: Genomic Data and the Future of the European Health Data Space

Helsinki, 24 September 2025 — The European Health Data Space (EHDS) is set to redefine digital health across Europe. With the potential to benefit more than 250 million citizens, it promises to transform clinical research, innovation, and patient care. But can it truly deliver?

A successful rollout of EHDS would restore trust—an essential prerequisite for unlocking the value of health data not only for medical purposes but for the entire health economy. Driven by the AI revolution, new revenue opportunities for European innovators could reach billions of euros. Done right, EHDS could become a flagship success story, showcasing the competitive edge of well-organised data ecosystems.

If it fails, however, the consequences would be so severe that the outcome could only be described as a doomsday scenario.

Meeting this challenge requires a clear understanding of the barriers to building a health data ecosystem at a continental scale.

The NextGen EU Horizon project tackles some of the most complex data challenges in cardiovascular personalised medicine. Serving as a kind of “mini-EHDS,” NextGen acts as a testing ground for digital tools that enable the creation of interoperable data spaces.

Against this backdrop, the high-level pre-conference event at MyData 2025—titled “EHDS Promises & Pitfalls: The Case of Genomic Data Integration in Personalised Medicines”—took place on Tuesday, September 23. Over three and a half hours, the session addressed topics for researchers, clinicians, regulators, policymakers, and all those who recognise that maintaining the status quo is the simpler—but ultimately false—choice.

The central question was clear: How can genomic data—among the most sensitive and valuable forms of health data—be safely and effectively integrated into EHDS for the benefit of all?

A distinguished panel of experts shared their insights:

The fundamentals of EHDS — Mikael Rinnetmäki, Finnish Innovation Fund Sitra, introduced EHDS and explored the challenges posed by both the primary and secondary use of health data in Europe.

The legal dimension and the Finnish approach — Sofia Kuitunen, Senior Legal Counsel, FinnGen, examined the secondary use of health and genomic data in Finland.

Overcoming implementation barriers in the Netherlands — Johan Bokslag, Programme Manager, Health Data Space Utrecht, addressed the practical challenges of building an EHDS-compliant infrastructure in the Netherlands and how these can be turned into opportunities for transformation.

End-user expectations — Dr. Petra Ijäs, Head Physician at Helsinki University Hospital, presented a clinical case highlighting how EHDS could help overcome barriers in cardiovascular risk prediction for carotid artery stenosis.

Moderated by Philippe Page (NextGen & Human Colossus Foundation), the session illuminated both the opportunities and the serious pitfalls of Europe’s flagship health data initiative.

The key conclusions were:

EHDS implementation offers a historic opportunity to bring Europe’s healthcare system into the digital era.

Major pitfalls exist, with the restoration of trust and confidence standing as the most critical.

If EHDS fails to deliver a secure and efficient data space, global competition risks driving Europe’s health innovation elsewhere.

A detailed summary of the discussions and participant takeaways is in preparation and will provide further insights.

From the Human Colossus Foundation’s perspective

The EHDS vision responds to the urgent need to unlock the value of health data. It aims to build a human-centred data ecosystem, where “human” encompasses patients, healthcare professionals, public health authorities, private actors, regulators, and policymakers. Creating such an ecosystem requires three conditions:

Distributed governance — Governance must be shared across legitimate authorities representing regions, communities, professionals, and individuals. This requires a federated digital design that builds upon existing frameworks while advancing them into the digital era.

Respect for Europe’s diversity — Diversity is a source of creativity; complexity is merely an implementation challenge to be overcome. EHDS should prioritise harmonisation, not standardisation, especially in data models. The semantics—the meaning of data—should remain as close as possible to the collection point. Mechanisms must ensure data is structured and its integrity preserved before it reaches AI tools, training datasets, or other uses.

True digital identity — Both individuals and organisations need digital identities that uphold privacy and fundamental rights as protected by EU and Member State ethical, regulatory, and governance frameworks. Achieving this requires a truly decentralised authentication architecture capable of accommodating Europe’s diverse sovereignties.

Together, these requirements form the foundation for introducing sovereignty in the digital era. Regaining control over our data demands governance, integrity, and authenticity in every data exchange.

Permalink

Thursday, 25. September 2025

FIDO Alliance

Driving Automotive Innovation with FIDO Standards and Certification


Attendees joined this webcast to hear how FIDO Alliance standards and certification can support the automotive industry as it transitions toward software-defined vehicles, autonomous technologies, and connected services. This transition brings an unprecedented opportunity to innovate and capitalize on new business models (such as in-vehicle commerce and subscription services) but also introduces significant cybersecurity threats and user experience challenges. 


This session built on the FIDO Alliance’s recently published white paper, Addressing Cybersecurity Challenges in the Automotive Industry, exploring how the FIDO Alliance is uniquely positioned to address these challenges using passkeys, FIDO Device Onboard (FDO), and existing and future certification programs.

Watch the presentation:

Wednesday, 24. September 2025

MyData

Your health, your data: A personal health account for healthcare

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. What if managing your […]

Human Colossus Foundation

Human Colossus Joins “SNV on Tour” at EPFL — Deepfake Trust & Verification


On September 23, Human Colossus participated in SNV on Tour at EPFL, an event organized by the Swiss Association for Standardization (SNV). This year's event focused on artificial intelligence and deepfakes, two of the most pressing challenges in today's digital landscape.

The event brought together leaders from academia, industry, government, and civil society to discuss the future of trust, authenticity, and digital integrity in an era increasingly shaped by artificial intelligence and synthetic media.

Our participation highlighted Human Colossus’s commitment to strengthening the foundations of digital trust through technology, standards, collaboration, and international leadership.

Deepfakes and the Three-Fold Strategy of Authenticity

During the event, Professor Touradj Ebrahimi of EPFL presented a clear framework for addressing the growing challenge of deepfakes and manipulated media. He identified three complementary strategies:

Reactive: Detecting manipulation by developing forensic methods for spotting tampering, anomalies, and adversarial examples in audio, video, and images.

Proactive: Authenticity and integrity. This involves embedding cryptographic seals, provenance metadata, or integrity markers directly into content to certify its authenticity at the source.

Collaborative: Verification as a vector of trust. This involves building mechanisms for evidence collection, community-based verification, and shared governance to enable stronger ecosystem responses.

Of these three strategies, the proactive and collaborative approaches align closely with what Human Colossus is building.

Proactive Authenticity: Our work on verifiable provenance, integrity seals, and trustworthy digital infrastructure supports Prof. Ebrahimi’s call to certify authenticity at the source.

Collaborative verification: Human Colossus is pioneering frameworks for evidence sharing, community-driven verification, and governance models that establish verification as a source of trust.

Together, these approaches create a foundation for digital ecosystems that are resilient, transparent, and accountable, going beyond detection alone.
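A proactive integrity seal of the kind described above can be sketched as a manifest that binds provenance metadata to a content digest at the source. This is an illustrative toy, not an implementation of any specific standard: a production seal would be cryptographically signed by the issuer so a forger could not simply recompute it, and the field names here are invented.

```python
import hashlib
import json

def seal(content: bytes, provenance: dict) -> dict:
    """Bind provenance metadata to the content via digests computed at the source."""
    manifest = dict(provenance, content_sha256=hashlib.sha256(content).hexdigest())
    manifest["manifest_sha256"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Detect tampering with either the metadata or the content itself."""
    body = {k: v for k, v in manifest.items() if k != "manifest_sha256"}
    body_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    if body_hash != manifest["manifest_sha256"]:
        return False  # the metadata was altered
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

photo = b"\x89PNG...original pixels"
manifest = seal(photo, {"creator": "newsroom-cam-01", "captured": "2025-09-23"})
print(verify(photo, manifest))                # True: authentic at the source
print(verify(b"deepfaked pixels", manifest))  # False: content no longer matches
```

Adding a signature over the manifest is what turns this binding into a verifiable authenticity claim; the hashing alone only makes tampering detectable relative to a trusted copy of the manifest.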

Strengthening Switzerland’s Role in Global Standardization

Another key message was the importance of standardization. Human Colossus is deeply committed to aligning its cutting-edge solutions with international frameworks to ensure interoperability and long-term sustainability.

At SNV on Tour, we reaffirmed our dedication to:

supporting Switzerland’s role as a hub for neutrality, transparency, and innovation in standardization;

actively contributing to European and global discussions on content authenticity and trustworthy AI;

shaping common protocols so that authenticity systems can scale across borders and industries.

This work strengthens Switzerland’s leadership in EU and global standardization and ensures that trust technologies evolve with fairness, accountability, and technical rigor.

Looking Ahead

Participating in SNV on Tour reinforced the importance of collaboration among technology providers, policymakers, researchers, and communities. Tackling deepfakes and synthetic media requires a multidisciplinary approach. It is not just an engineering problem, but a societal challenge that requires reactive detection, proactive authenticity, and collaborative verification.

Human Colossus is dedicated to building infrastructures of trust and contributing to standardization processes that ensure reliability, interoperability, and future-proofing.


Next Level Supply Chain Podcast with GS1

How a Family Recipe Turned Into a National Supply Chain Story


From Sunday suppers to 2,000 stores… and almost losing them all. Andrew Arbogast turned his dad's cheese dip recipe into a fast-growing CPG brand, only to face the harsh realities of shelf life, co-packing, and retailer expectations. In this episode, he joins Liz Sertl to share how he scaled Arbo's Cheese Dip, what nearly sank the business, and the turning point that gave him a second chance.

Listeners will hear the unfiltered story behind bringing a homemade recipe to the national stage, and the resilience, partnerships, and supply chain decisions that made survival possible.

In this episode, you'll learn:

How shelf life testing and packaging decisions directly impact scalability

Why rapid national expansion without brand awareness creates costly supply and demand mismatches

What strategies helped Arbo stabilize its operations

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:33) Family traditions that inspired Arbo's recipe

(04:25) Why barcodes matter for brand credibility

(06:03) Shelf life challenges with early co-packers

(08:11) Reformulating products for large-scale production

(09:24) Rapid retail expansion that backfired

(11:31) Winning Walmart Open Call and scaling mistakes

(13:28) Debt crisis and asking for help

(17:33) Grants and innovation with Real California Milk

(19:31) Favorite flavors and unexpected uses for cheese dip

Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn

Connect with the guests: Andrew Arbogast on LinkedIn | Check out Arbo's Cheese Dip

Tuesday, 23. September 2025

FIDO Alliance

The Passkey Playbook | HID Global


Explore how to deploy passwordless authentication at scale — securely, strategically and with minimal disruption.

Download the playbook to learn:

The different types of passkeys and how to choose the right option for your organization

A phased deployment strategy that helps you start small, scale confidently and standardize passkey adoption across your organization

The ROI of passkeys backed by data from the FIDO Alliance

Phishing-resistant authentication isn’t one-size-fits-all. It’s a journey. Complete the form to get our Passkey Playbook to start yours.


Podcast: The Passwordless Shift: Rethinking Identity for the Modern Enterprise

In the inaugural episode of Imprivata’s new podcast, Access Point, hosts Joel Burleson-Davis and Chip Hughes sit down with Andrew Shikiar, CEO of the FIDO Alliance, to explore the global movement toward passwordless authentication.

Listen to Episode 1 here: https://ow.ly/Y2Zl50X0vEJ

Monday, 22. September 2025

FIDO Alliance

The Indian Express: ‘Password resets cost businesses more than they realise’: Zoho exec on ROI of going passwordless

The world is rapidly moving away from traditional security methods. With FIDO standards in place, more companies are shifting toward passwordless authentication. Many industry players are already phasing out passwords from their authenticator apps.

In India, the passwordless market is estimated at $411 million in 2024 and projected to reach more than $1.5 billion by 2030. This reflects how businesses are opting for faster, smarter, and safer login experiences. To understand what’s driving this trend and how companies are adapting, indianexpress.com spoke with Chandramouli Dorai, chief evangelist, cyber solutions and digital signatures at Zoho Corp.


Biometric Update: To build trust in biometrics, Vietnam banks should adopt FIDO passkeys: report

VinCSS has released an industry first report on the authentication experience in apps for Vietnamese banks, and it shows a “strong shift from traditional to modern authentication methods” in the country’s banking ecosystem.

Biometrics rank as the most commonly used authentication method for high-risk transactions. It’s also rated as the most convenient, with 58.3% of respondents listing it as such.

As usual, there are corresponding concerns about data privacy. One in three people worry their biometric data or digital credentials could be stolen or faked, leading to identity fraud. Authentication data theft is a top fear: “Many users feel that biometric authentication, though widely implemented, still is not secure enough for them or their digital assets.”


Back End News: HID offers passwordless authentication to support BSP compliance

HID, a company that provides secure identity solutions, announced the availability of its updated FIDO-certified authentication solutions in the Philippines, to help financial institutions and enterprises comply with the Bangko Sentral ng Pilipinas’ (BSP) new rules on IT risk management under the Anti-Financial Account Scamming Act (AFASA).

BSP requires organizations under its supervision to strengthen fraud management and identity verification by June 25, 2026. The directive calls for the adoption of secure, phishing-resistant methods, such as passwordless authentication through FIDO standards.

The measure comes amid rising online scams and fraud cases in the country. 


Security Boulevard: Beyond Passwords: A Guide to Choosing the Right Passkey

For many market analysts, cybersecurity agencies and authentication experts, passkeys, based on the FIDO2 standard protocol, appear to be the future-proof authentication technology that will become mainstream within the next few years.

“By 2027, more than 90% of MFA transactions using a token will be based on FIDO protocols natively supported in IAM tools.”


Oasis Open

Discover the new XLIFF 2.2: Join Us for an Interactive Webinar on October 7!

By Lucía Morado, OASIS XLIFF TC Co-Chair

XLIFF 2.2 is the new version of the OASIS XLIFF (XML Localisation Interchange File Format). Developed by the OASIS XLIFF Technical Committee, XLIFF is the main bilingual bitext format in the localisation industry. This interoperability standard defines a normative method for storing and exchanging localisable data across the various stages of the localisation process. This new version (XLIFF 2.2) introduces valuable enhancements while remaining backward compatible with the previous ones (XLIFF 2.1 and 2.0). 

XLIFF 2.2: What’s New?

One of the main changes of the new version is the new presentation structure of its specification. XLIFF 2.2 is now available in two formats:

XLIFF 2.2 Core: Contains only the essential information needed to create a valid XLIFF file.

XLIFF 2.2 Extended: Includes the XLIFF Core as well as all the additional modules.

For those unfamiliar with our terminology:

XLIFF Core is the minimal set of XML elements and attributes that allows defining a set of translation units organised by source and target language. If a tool developer wishes to claim support for XLIFF 2.2, they must implement XLIFF Core.
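To make this concrete, a minimal Core document is only a handful of elements. The sketch below follows the element model established in XLIFF 2.0 and 2.1 (file, unit, segment, source/target), which 2.2 retains for backward compatibility; the version number, namespace, and sample strings are illustrative — consult the published XLIFF 2.2 specification for the normative details:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal XLIFF Core document: one file, one unit, one segment. -->
<!-- Namespace follows the 2.0 convention retained in 2.1; verify against the 2.2 spec. -->
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0"
       version="2.2" srcLang="en" trgLang="fr">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Hello world</source>
        <target>Bonjour le monde</target>
      </segment>
    </unit>
  </file>
</xliff>
```

A file with only this structure is already a valid interchange document, which is what makes Core-only support an attractive first step for tool developers.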

XLIFF Modules offer additional sets of XML elements and attributes that allow the inclusion of (potentially useful) information about specific processes. For example, the Size and Length Restriction Module provides a mechanism to annotate possible constraints on content size and length. Tool developers may choose to support the modules that are most relevant for their specific use cases. XLIFF 2.2 includes nine modules: Translation Candidates, Glossary, Format Style, Metadata, Resource Data, Size and Length Restriction, Validation, ITS, and Plural, Gender and Select.
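As an illustration of how a module layers onto the Core, the sketch below annotates a unit with the Size and Length Restriction Module. The `slr` namespace, `<slr:profiles>` element, and `sizeRestriction` attribute follow the module as defined in XLIFF 2.0/2.1; the 25-codepoint limit and sample strings are illustrative, and the exact 2.2 syntax should be checked against the specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Core document annotated with the Size and Length Restriction module.
     Namespace and attribute names follow the XLIFF 2.0/2.1 module definition;
     the 25-codepoint limit is an illustrative assumption. -->
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0"
       xmlns:slr="urn:oasis:names:tc:xliff:sizerestriction:2.0"
       version="2.2" srcLang="en" trgLang="de">
  <file id="f1">
    <!-- Declare how size is measured for this file (here: Unicode code points). -->
    <slr:profiles generalProfile="xliff:codepoints"/>
    <!-- Constrain the content of this unit to at most 25 code points. -->
    <unit id="u1" slr:sizeRestriction="25">
      <segment>
        <source>Save changes</source>
        <target>Änderungen speichern</target>
      </segment>
    </unit>
  </file>
</xliff>
```

A tool that does not implement this module can still process the file as plain Core, simply ignoring the `slr`-namespaced annotations — which is the design point behind keeping modules optional.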

By introducing a simplified version of the specification (XLIFF Core), which contains only the essential information needed to implement XLIFF, we aim to facilitate adoption among developers who are primarily interested in supporting the core functionality of the standard.

The other major change in XLIFF 2.2 is the release of the new Plural, Gender and Select Module. This module, which was proposed by Mihai Nita (Google), provides a method to store information required to represent and process messages with variants, such as plural forms or gender distinctions.

Upcoming Webinar

On October 7, members of the OASIS XLIFF TC will host a free webinar to present XLIFF 2.2. This event will cover the aforementioned key changes introduced in this new version, along with other minor enhancements. The main presentation will be followed by a live Q&A session, offering attendees the chance to engage directly with the experts behind the standard.

This is a unique opportunity for anyone interested in this influential localisation standard to learn about its latest developments, from the people maintaining and developing it. Do not miss it!

For those unable to attend live, a recording of the webinar will be made available on the official OASIS XLIFF TC website after the event.

We also encourage everyone who wishes to share comments or suggestions about the standard with the OASIS XLIFF TC to use the official public mailing list, which is open for community feedback.

The post Discover the new XLIFF 2.2: Join Us for an Interactive Webinar on October 7! appeared first on OASIS Open.